Backoff DOP: Parameter Estimation by Backoff

Luciano Buratto and Khalil Sima'an
Institute for Logic, Language and Computation (ILLC)
University of Amsterdam, Amsterdam, The Netherlands

Abstract. The Data Oriented Parsing (DOP) model currently achieves state-of-the-art parsing on benchmark corpora. However, existing DOP parameter estimation methods are known to be biased, and ad hoc adjustments are needed in order to reduce the effects of these biases on performance. This paper presents a novel estimation procedure that exploits a unique property of DOP: different derivations can generate the same parse-tree. We show that the different derivations represent different Markov orders that the DOP model interpolates together. The idea behind the present method is to combine the different derivation orders by backoff instead of interpolation. This allows for a novel estimation procedure that employs Katz backoff for estimation. We report on experiments showing error reductions of up to 15% with respect to earlier methods.

1 The DOP model

The Data Oriented Parsing (DOP) model currently exhibits state-of-the-art performance on benchmark corpora [1]. Like other treebank models, DOP extracts a finite set of rewrite productions, called subtrees, from the training treebank, together with probabilities. A connected subgraph of a treebank tree t is called a subtree iff it consists of one or more context-free productions (footnote 1) from t. Following [2], the set of rewrite productions of DOP consists of all the subtrees of the treebank trees. Figure 3 exemplifies the set of subtrees extracted from the treebank of Figure 1. The DOP model employs the set of subtrees as a Stochastic Tree-Substitution Grammar (STSG): an STSG is a rewrite system similar to Context-Free Grammars (CFGs), with the only difference that the productions of an STSG are subtrees of arbitrary depth (footnote 2).

Footnote 1: A non-leaf node labeled p in a tree t, dominating a sequence of nodes labeled X1 ... Xn, constitutes a graph that represents the context-free production p -> X1 ... Xn.
Footnote 2: The depth of a tree is the number of edges along the longest path from the root to a leaf node.

Fig. 1. A toy treebank.
Fig. 2. Two different derivations of the same parse.
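As an illustration of subtree extraction (not the authors' implementation), the Python sketch below enumerates all DOP subtrees of a single tree: for every node it keeps that node's context-free production and, for each non-terminal child, either cuts the tree off there (leaving a substitution site) or continues with one of the child's own fragments. The nested-tuple tree encoding, the convention that a substitution site is a one-element tuple, and the example tree itself are assumptions of the sketch.

    from itertools import product

    def fragments(node):
        """All DOP subtrees rooted at `node` (each consists of >= 1 CF production)."""
        label, children = node[0], node[1:]
        options = []
        for child in children:
            if isinstance(child, str):          # terminal word: always kept
                options.append([child])
            else:                               # non-terminal child: cut it off here,
                options.append([(child[0],)] + fragments(child))  # or expand it further
        return [(label,) + combo for combo in product(*options)]

    def all_fragments(tree):
        """Fragments rooted at every internal node: the DOP bag for one treebank tree."""
        bag = list(fragments(tree))
        for child in tree[1:]:
            if not isinstance(child, str):
                bag.extend(all_fragments(child))
        return bag

    # A toy tree, loosely in the spirit of Figure 1 (its exact shape is an assumption):
    tree = ("S", ("NP", "John"), ("VP", ("V", "likes"), ("NP", "Mary")))
    print(len(all_fragments(tree)))             # 17 fragments for this particular tree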

Fig. 3. The subtrees (s1)-(s17) of the treebank in Figure 1.

An STSG derivation proceeds by combining subtrees using the substitution operation ∘, starting from the start symbol S of the STSG. In contrast with CFG derivations, multiple STSG derivations may generate the same parse. For example, the parse in Figure 1 can be derived in at least two different ways, as shown in Figure 2. In this sense, the DOP model deviates from other contemporary models, e.g. [3, 4], that belong to the so-called History-Based Stochastic Grammar (HBSG) family [5]. The latter models generate every parse-tree through a unique stochastic derivation.

A Stochastic TSG (STSG) is a TSG extended with a probability mass function P over the set of subtrees: the probability of a subtree t with root label R(t) is given by P(t | R(t)), i.e. for every non-terminal A, the probabilities of all subtrees with root label A sum to one. Given a probability function P, the probability of a derivation S ∘ t1 ∘ ... ∘ tn is defined as the product of its subtree probabilities, P(t1 | R(t1)) · ... · P(tn | R(tn)). The probability of a parse is defined as the sum of the probabilities of all derivations in the STSG that generate that parse. As a downside of the multiple-derivations-per-parse property, the algorithms for selecting the parse with the highest probability are known to be intractable [6]. In this paper we address another difficulty that arises from this property with regard to the DOP model: how to estimate the model parameters (the subtree probabilities) from a treebank?
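To make these definitions concrete, the short Python sketch below computes a derivation probability as the product of conditional subtree probabilities and a parse probability as the sum over that parse's derivations. The subtree names echo Figure 3, but the probability values and the particular derivations are invented for illustration only.

    from math import prod

    # Hypothetical subtree probabilities P(t | R(t)); the values are assumptions.
    p_subtree = {"s1": 0.2, "s2": 0.3, "s14": 0.1, "s16": 0.05}

    def derivation_prob(derivation):
        """Probability of a derivation t1 ∘ ... ∘ tn: the product of P(ti | R(ti))."""
        return prod(p_subtree[t] for t in derivation)

    def parse_prob(derivations):
        """Probability of a parse: the sum over all derivations that generate it."""
        return sum(derivation_prob(d) for d in derivations)

    # Two derivations assumed to yield the same parse (cf. Figure 2):
    print(parse_prob([["s16", "s2"], ["s14", "s1"]]))   # 0.05*0.3 + 0.1*0.2 = 0.035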

2 Existing DOP estimators

The problem of how to estimate the probabilities of the subtrees from a treebank is not as straightforward as originally thought. So far, three estimation procedures exist [2, 7, 1]. As shown in [7-9], all three turn out to be biased in an unintuitive manner. Next we provide an overview of the existing methods.

(1) Relative frequency: wrong bias. The first instantiation of a DOP model is due to [2] and is referred to as DOP_rf. In this model, the probability estimates of the subtrees extracted from a treebank are given by a relative frequency estimator. Let C(t) represent the number of times t occurred in the bag of subtrees extracted from the treebank. Then the probability of t in DOP_rf is estimated as P(t | R(t)) = C(t) / Σ C(t'), where the sum ranges over all subtrees t' with R(t') = R(t). DOP_rf's performance on benchmark corpora has been promising, e.g. 89.7% labelled recall and precision on the Wall Street Journal treebank [1]. Despite this good performance, the DOP_rf estimator has been shown to be biased and inconsistent, meaning that the sequence of probability distributions obtained with the estimator does not converge to the distribution that generated the data [8]. Furthermore, it has been shown that DOP_rf's good performance can be attributed to formal constraints on the set of subtrees extracted from the treebank, e.g. an upper bound on subtree depth and another on the number of leaf nodes labeled with a non-terminal symbol in a subtree. These constraints limit DOP_rf's bias, leading to improved performance [7].

(2) Bonnema's estimator. The estimation procedure introduced in [7] assumes that every treebank parse stands for a uniform distribution over all possible derivations that generate that parse in the DOP model. Hence, prior to parameter estimation, every treebank parse is expanded into all derivations that generate it in the DOP model. Subsequently, counting subtree frequency proceeds by decomposing every derivation into the subtrees that participate in it, leading to the following estimate: P(t | R(t)) = 2^-N(t) · f(t | R(t)), where N(t) is the number of non-root nodes of subtree t and f is the original DOP_rf relative frequency estimator. This can be interpreted as a DOP_rf estimator with a correction factor 2^-N(t) for subtree size: large subtrees have their estimates downgraded, and small subtrees are (relatively) upgraded. The estimator defines a new DOP model which we refer to as DOP_Bon. As shown in [9], the Bonnema procedure is also biased. Briefly stated, while the original relative frequency estimator DOP_rf gives too much probability mass to large subtrees, DOP_Bon gives an excessive amount of probability mass to small subtrees.

(3) Maximum likelihood: overfitting. One might say that the DOP_rf estimator is biased because it is not a Maximum-Likelihood (ML) estimator. This is in fact the approach taken in [1], where the Inside-Outside algorithm is used for estimating the DOP model parameters from a treebank under the assumption that the model has a hidden element (the derivations that generated the parses of the treebank). However, as [7] note, ML for DOP always results in a model that overfits the treebank. Let there be given a treebank with two different trees t1 and t2, both having the same root label. The ML probability assignment to the subtrees extracted from this treebank is P(t1 | R(t1)) = P(t2 | R(t2)) = 1/2, and zero for all other subtrees. This parameter assignment is ML because it gives the treebank trees the largest possible likelihood. However, this probability assignment yields a DOP model that overfits the treebank: probability zero is assigned to all parses not present in the treebank.
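The difference between the two frequency-based estimators can be seen by computing both from the same bag of subtree counts, as in the Python sketch below. The root and node-counting helpers assume the nested-tuple encoding used earlier, and the explicit per-root renormalisation of the Bonnema-style scores is a simplification of this sketch rather than part of the original definition.

    from collections import Counter, defaultdict

    def root(t):
        return t[0]                      # trees are nested tuples: (label, children...)

    def n_nonroot(t):
        """Number of non-root nodes of subtree t (the N(t) in the 2**-N(t) factor)."""
        total = 0
        for child in t[1:]:
            total += 1
            if not isinstance(child, str):
                total += n_nonroot(child)
        return total

    def estimate(bag, bonnema=False):
        """Relative-frequency (DOP_rf) or size-corrected (DOP_Bon-style) estimates."""
        counts = Counter(bag)
        score = {t: (c * 2.0 ** -n_nonroot(t) if bonnema else float(c))
                 for t, c in counts.items()}
        per_root = defaultdict(float)
        for t, s in score.items():
            per_root[root(t)] += s
        return {t: s / per_root[root(t)] for t, s in score.items()}

    # A tiny invented bag of subtrees; ("NP",) and ("VP",) denote substitution sites.
    bag = [("NP", "John"), ("NP", "John"),
           ("S", ("NP",), ("VP",)), ("S", ("NP", "John"), ("VP",))]
    print(estimate(bag))                 # pure relative frequencies
    print(estimate(bag, bonnema=True))   # small subtrees gain relative to large ones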

3 A new estimator for DOP

In this section, we develop an approach to parameter estimation for DOP that is completely different from earlier work. Consider the common situation where a subtree (footnote 3) t is equal to a tree generated by a derivation t1 ∘ ... ∘ tn involving multiple subtrees t1, ..., tn. For example, subtree s17 (Figure 3) can be constructed by different derivations such as (s16 ∘ s2), (s14 ∘ s1) and (s15 ∘ s1 ∘ s3). We refer to subtrees that can be constructed from derivations involving other subtrees as complex subtrees. For every complex subtree t, we restrict (footnote 4) our attention to derivations involving pairs of subtrees; in other words, we focus on subtrees t for which there exist subtrees t1 and t2 such that t = (t1 ∘ t2).

Footnote 3: The term subtree is reserved for the tree-structures that DOP extracts from the treebank.
Footnote 4: Because DOP takes all subtrees of the treebank, if a complex subtree t has a derivation t1 ∘ t2 ∘ ... ∘ tn, then the tree resulting from t1 ∘ t2 is a complex subtree as well. For example, in Figure 3, s17 can be derived through (s15 ∘ s1 ∘ s3), and s15 ∘ s1 generates subtree s16. Hence, derivations of t that involve more than two subtrees can be separated into (sub)derivations that involve pairs of subtrees, each leading to a complex subtree. Therefore, for any complex subtree t, we may restrict our attention to derivations involving only pairs of subtrees, i.e. t = t1 ∘ t2.

In DOP, the probability of t is given by P(t | R(t)). In contrast, the derivation probability is given by P(t1 | R(t1)) · P(t2 | R(t2)). However, according to the chain rule, P(t1 ∘ t2 | R(t1)) = P(t1 | R(t1)) · P(t2 | t1). Therefore, the derivation t1 ∘ t2 embodies an independence assumption, realized by the approximation (footnote 5) P(t2 | t1) ≈ P(t2 | R(t2)). This approximation involves a so-called backoff, i.e. a weakening of the conditioning context from P(t2 | t1) to P(t2 | R(t2)). Hence, we say that the derivation t1 ∘ t2 constitutes a backoff of subtree t, and we write (t → t1 ∘ t2) to express this fact.

Footnote 5: Note that R(t2) is part of t1 (it is the label of the substitution site).

The backoff relation between a subtree and a pair of other subtrees induces a partial order over the derivations of the subtrees extracted from a treebank. A graphical representation of this partial order is a directed acyclic graph which consists of a node for each pair of subtrees <ti, tj> that constitutes a derivation of another complex subtree. A directed edge points from a subtree t in a node (footnote 6) to another node containing a pair of subtrees <ti, tj> iff t = ti ∘ tj. We refer to this graph as the backoff graph. A portion of the backoff graph for the subtrees of Figure 3 is shown below, where s0 stands for a subtree consisting of a single node labeled S, the start symbol.

Footnote 6: That is, in a pair <t, tj> or <tj, t> that constitutes a node.

[Backoff graph fragment: nodes are pairs such as <s0, s17>, <s16, s2>, <s14, s1>, <s11, s3> and <s13, s3>; atomic subtrees are double-circled.]

We distinguish two sets of subtrees: initial and atomic. Initial subtrees are subtrees that do not participate in a backoff derivation of any other subtree; in Figure 3, subtree s17 is the only initial subtree. Atomic subtrees are subtrees for which there are no backoffs; in Figure 3, these are the subtrees of depth one (double-circled in the backoff graph).
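One possible way to construct the backoff graph mechanically is to check, for every pair of extracted subtrees, whether their composition under leftmost substitution equals another extracted subtree. The Python sketch below does this over the nested-tuple encoding used earlier; the leftmost-substitution helper, the one-tuple encoding of substitution sites and the tiny demo set are assumptions of the sketch, not the authors' implementation.

    def substitute(t1, t2):
        """Leftmost substitution t1 ∘ t2: replace the leftmost substitution site of t1
        (a childless non-terminal node, encoded as a one-tuple such as ("NP",)) by t2.
        Returns the combined tree, or None if the composition is undefined."""
        state = {"done": False, "ok": True}

        def walk(node):
            if isinstance(node, str):
                return node
            if len(node) == 1 and not state["done"]:     # leftmost substitution site
                state["done"] = True
                if node[0] != t2[0]:                     # label must match t2's root
                    state["ok"] = False
                    return node
                return t2
            return (node[0],) + tuple(walk(child) for child in node[1:])

        result = walk(t1)
        return result if state["done"] and state["ok"] else None

    def backoff_graph(subtrees):
        """Map each complex subtree t to all pairs (t1, t2) with t = t1 ∘ t2."""
        table = set(subtrees)
        graph = {}
        for t1 in subtrees:
            for t2 in subtrees:
                t = substitute(t1, t2)
                if t is not None and t in table:
                    graph.setdefault(t, []).append((t1, t2))
        return graph

    # Invented three-subtree example: the depth-2 subtree has exactly one backoff pair.
    subs = [("S", ("NP",), ("VP",)), ("NP", "John"), ("S", ("NP", "John"), ("VP",))]
    print(backoff_graph(subs))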

In the DOP model (under any estimation procedure discussed in Section 2), the probability of a parse-tree is defined as the sum of the probabilities of all derivations that generate this parse-tree. This means that DOP linearly interpolates derivations involving subtrees from different levels of the backoff graph; this is similar to the way Hidden Markov Models interpolate different Markov orders over, e.g., words when calculating sentence probability. Hence, we will refer to the different levels of subtrees in the backoff graph as the Markov orders.

Backoff DOP. Crucially, the partial order over the subtrees, embodied in the backoff graph, can be exploited for turning DOP into a backed-off model as follows: a subtree is generated by a sequence of derivations ordered by the backoff relation. This is in sharp contrast with existing DOP models, which consider the different derivations leading to the same subtree as a set of disjoint events. Next we present the estimation procedure that accompanies this new realization of DOP as a recursive backoff over the different Markov orders.

Estimation vs. smoothing. It is common in probabilistic modeling to smooth a probability distribution P(t | X, Y) by a backoff distribution thereof, e.g. P(t | Y). The smoothing of P(t | X, Y) aims at dealing with the problem of sparse data (whenever the probability P(t | X, Y) is zero). The backoff value P(t | Y) can be used as an approximation of P(t | X, Y) under the assumption that t and X are independent. Smoothing, then, aims at enlarging the space of non-zero events in the distribution P(t | X, Y). Hence, the goal of smoothing differs from our goal: while smoothing aims at filling the zero gaps in a distribution, our goal is to estimate the distribution (prior to smoothing it). Despite these differences, we employ a backoff method for parameter estimation, by redistributing probability mass among the DOP model subtrees.

Katz backoff. The Katz backoff method [10, 11] is a smoothing technique based on the discounting method of Good-Turing (GT) [12, 11]. Given a higher-order distribution P(t | X, Y), Katz backoff employs the GT formula for discounting this distribution, leading to P_GT(t | X, Y). The probability mass that was discounted, 1 - Σ_t P_GT(t | X, Y), is then distributed over the lower-order distribution P(t | Y).

Backoff DOP. We assume initial probability estimates P_0 based on frequency counts, e.g. as in DOP_rf or DOP_Bon. The present backoff estimation procedure operates top-down, stepwise, over the backoff graph, starting with the initial subtrees and moving down to the atomic subtrees. In essence, this procedure transfers, stepwise, probability mass from complex subtrees to their backoffs. Let P represent the current probability estimate resulting from the previous step (initially P = P_0). For every (t → t1 ∘ t2) in the backoff graph, we know that (1) P(t | R(t)) = P(t1 | R(t1)) · P(t2 | t1) and (2) P(t1 ∘ t2) = P(t1 | R(t1)) · P(t2 | R(t2)) (note that R(t) = R(t1)). This means that P(t2 | t1) is backed off to P(t2 | R(t2)). Hence, we may adapt the Katz method to estimate the Backoff DOP probability P_bo as follows:

    P_bo(t2 | t1) = P_GT(t2 | t1)              if P(t2 | t1) > 0
                  = α(t1) · P(t2 | R(t2))      otherwise

where α(t1) is a normalization factor that guarantees that the sum of the probabilities of subtrees with the same root label is one: α(t1) = 1 - Σ P_GT(t2 | t1), the sum ranging over all t2 with P(t2 | t1) > 0. Using the above estimate of P_bo(t2 | t1), the backoff estimates are calculated as follows: P_bo(t | R(t)) = P(t1 | R(t1)) · P_GT(t2 | t1), and P_bo(t1 | R(t1)) = (1 + α(t1)) · P(t1 | R(t1)). Then the current probabilities are updated before the next step of Katz backoff takes place over the next layer in the backoff graph: P(t1 | R(t1)) := P_bo(t1 | R(t1)).
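The sketch below illustrates one step of this procedure for a single subtree t1, under several stated assumptions: the pair counts C(t1 ∘ t2) are taken as given, the Good-Turing discount uses the basic count-of-counts formula (with counts that have no higher neighbour left undiscounted, a simplification), and the function names and toy numbers are invented. It is meant only to show how the reserved mass α(t1) moves from the complex subtrees t1 ∘ t2 to their backoff t1.

    from collections import Counter

    def good_turing(counts):
        """Good-Turing discounted probabilities from a dict of event counts."""
        n = Counter(counts.values())          # n[r] = number of events seen r times
        total = sum(counts.values())
        probs = {}
        for event, r in counts.items():
            r_star = (r + 1) * n[r + 1] / n[r] if n[r + 1] > 0 else r
            probs[event] = r_star / total
        return probs

    def backoff_step(p_t1, pair_counts):
        """One backoff-estimation step for a fixed subtree t1 (simplified sketch).

        p_t1        -- current estimate P(t1 | R(t1))
        pair_counts -- counts C(t1 ∘ t2), keyed by t2 (assumed to be given)
        """
        p_gt = good_turing(pair_counts)                         # P_GT(t2 | t1)
        alpha = 1.0 - sum(p_gt.values())                        # reserved probability mass
        p_complex = {t2: p_t1 * p for t2, p in p_gt.items()}    # mass of t1 ∘ t2 shrinks
        p_t1_new = (1.0 + alpha) * p_t1                         # t1 absorbs the reserve
        return p_t1_new, p_complex, alpha

    # Invented counts with three hapaxes: a noticeable share of mass is reserved.
    print(backoff_step(0.05, {"t2a": 2, "t2b": 1, "t2c": 1, "t2d": 1}))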

Note that P_bo is a proper distribution, in the sense that for every non-terminal A the probabilities P_bo(t | A) of all subtrees t with root label A sum to one. This is guaranteed by the redistribution of the reserved probability mass at every step of the procedure over the layers of the backoff graph. Furthermore, we note that the present method is not a smoothing method, since it applies Katz backoff for redistributing probability mass only among subtrees that did occur in the treebank. The present method does not address probability estimation for unknown/unseen events.

Current implementation. The number of subtrees extracted from a treebank is extremely large. In this paper, we choose to apply the Katz backoff to (t → t1 ∘ t2) only if t2 is a lexical subtree, i.e. t2 = (p → w), where p is a Part-of-Speech (PoS) tag and w a word. This choice has to do with the importance of lexicalized subtrees and the overestimation that accompanies their relative frequency. All experiments reported here pertain to applying the backoff estimation procedure to this limited set of subtrees (while the probabilities of all other subtrees are left untouched).

4 Empirical results

OVIS corpus and evaluation metrics. OVIS is a Dutch, speech-based dialogue system that provides railway timetable information to human users over ordinary telephone lines. The OVIS corpus contains 10,049 syntactically and semantically annotated utterances, which are answers given by users to the system's questions (e.g. "From where to where do you want to travel?"). The answers are used to fill in a number of slots that are typical of travel information, such as origin, destination and time. The semantic content of the utterances, expressed in an update language [13], is used to update the system's information state. The OVIS treebank utterances are annotated by a phrase-structure scheme with syntactic-semantic labels. The corpus was randomly split into two sets: (i) a training set with 9,049 trees, and (ii) a test set with 1,000 trees. The experiments were carried out using the same train/test split. We report results for sentences that are at least two words long (as one-word sentences are easy); without one-word sentences, the average sentence length is 4.6 words per sentence.

Three accuracy measures were employed: exact match and recall/precision (F-score) of labeled bracketing [14], to assess individual model performance, and the error reduction ratio (the percentage-point improvement of model 1 over model 2, normalized by the global error of model 2) to evaluate inter-model performance. The subtree space was reduced by means of three upper bounds on subtree shape: (1) depth (d), (2) number of lexical items (l), and (3) number of substitution sites (n). The Most Probable Derivation (MPD) was used as the maximization entity to select the preferred parse.

Models. We tested the new estimator under two different counting strategies: DOP_rf and DOP_Bon. The following naming convention is used: DOP_rf (as in [2]), DOP_Bon (as in [7]), BO-DOP_rf (backoff estimator applied to DOP_rf frequencies), and BO-DOP_Bon (backoff estimator applied to DOP_Bon frequencies).
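Before turning to the results, the error reduction ratio defined above can be written as a one-line helper, shown below in Python. The function is just that definition; the example scores are invented.

    def error_reduction(score1, score2):
        """Percent error reduction of model 1 over model 2, for accuracy-like
        scores given in percent (e.g. exact match or F-score)."""
        return 100.0 * (score1 - score2) / (100.0 - score2)

    # Invented scores: gaining 1.5 points over a 90%-accurate baseline
    # removes 15% of that baseline's remaining error.
    print(error_reduction(91.5, 90.0))   # 15.0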

Fig. 4. Exact match as a function of the subtree depth upper bound (l2n4, MPD), for BO-DOP_rf, DOP_rf, DOP_Bon and BO-DOP_Bon.
Fig. 5. Probability mass discounted from subtrees as a function of the depth upper bound (l2n4): total (N1/N), 2-word subtrees and 1-word subtrees.

Accuracy vs. depth upper bound. Figure 4 shows exact match results as a function of the subtree depth upper bound. The subtrees were restricted to at most 2 words and 4 substitution nodes (l2n4). This yields small subtree spaces: for depth 7, this corresponds to 172,050 subtrees. For all depth upper bounds, BO-DOP_rf achieved the best results, followed by DOP_rf, DOP_Bon and BO-DOP_Bon. BO-DOP_rf improved on DOP_rf by 1.72 percentage points at depth 6, an increase of 2.02%. This corresponds to an error reduction of 11.4%. When compared to DOP_Bon, the error reduction rose to about 15%. F-score results followed the same pattern. At depth 6, BO-DOP_rf reached 95.33%; DOP_rf, 94.73%; DOP_Bon, 94.5%; and BO-DOP_Bon, 94.33%. The error reduction of BO-DOP_rf with respect to DOP_rf reached 11.3%, and with respect to DOP_Bon, 15.7%.

Probability mass transfer. Figure 5 shows the discounted probability mass as a function of the subtree depth upper bound. The probability mass discounted from 2-word subtrees is larger than the mass discounted from 1-word subtrees (footnote 7). This happens because the number of hapax legomena (subtrees that occur just once) tends to increase for higher d and l upper bounds, since larger subtrees with rare word combinations are allowed into the distribution. The more hapax legomena, the higher the discounting rates (according to the Good-Turing method). Thus, the probability mass discounted from n-word subtrees is, in general, larger than the mass discounted from (n-1)-word subtrees. Consequently, the magnitude of the probability transfer across Markov orders gradually decreases as the recursive estimation procedure approaches the atomic subtrees. This property of decreasing discounts avoids the pitfall of overestimating small subtrees (cf. DOP_Bon) and reduces the overestimation of large subtrees (cf. DOP_rf). The performance pattern exhibited above remained steady across four different training/test splits; the average over the four splits shows that BO-DOP_rf reduces error compared to DOP_rf and DOP_Bon by ...%.

Footnote 7: n-word subtrees are subtrees having exactly n words at their leaf nodes.
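The "total (N1/N)" curve in Figure 5 appears to track a standard Good-Turing quantity: the total probability mass reserved by the discounting equals n1/N, the proportion of singleton (hapax) tokens among all tokens, which is why more hapax legomena mean larger discounts. The snippet below just computes that fraction for two invented count vectors.

    def total_discount_fraction(counts):
        """Total mass reserved by Good-Turing discounting: n1 / N."""
        n1 = sum(1 for c in counts if c == 1)
        return n1 / sum(counts)

    print(total_discount_fraction([5, 3, 1, 1]))      # 2 hapaxes in 10 tokens -> 0.2
    print(total_discount_fraction([2, 1, 1, 1, 1]))   # 4 hapaxes in 6 tokens -> ~0.67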

5 Conclusions

This paper presented a new estimator for the DOP model, which uses the Katz method for redistributing frequency-based probability mass among the subtrees of the model. We have seen empirical evidence for the improved performance of this estimator over the existing estimators. An interesting side effect is that the preferred parse can be selected in deterministic polynomial time, since this estimator treats the different derivations of the same parse as backoff alternatives. Future work will address (1) formal aspects of the new estimator (bias and consistency questions), (2) a Maximum-Likelihood variant of DOP that incorporates the observations discussed in this paper, and (3) further experiments.

References

1. Bod, R.: What is the minimal set of fragments that achieves maximal parse accuracy? In: Proceedings of the 39th Annual Meeting of the ACL (ACL 2001) (2001)
2. Bod, R.: Enriching Linguistics with Statistics: Performance Models of Natural Language. PhD dissertation, ILLC Dissertation Series, University of Amsterdam (1995)
3. Chelba, C., Jelinek, F.: Exploiting syntactic structure for language modeling. In Boitet, C., Whitelock, P., eds.: Proceedings of the Thirty-Sixth Annual Meeting of the Association for Computational Linguistics and Seventeenth International Conference on Computational Linguistics, San Francisco, California, Morgan Kaufmann Publishers (1998)
4. Charniak, E.: A maximum-entropy-inspired parser. In: Proceedings of the 1st Meeting of the North American Chapter of the ACL (NAACL-00), Seattle, Washington, USA (2000)
5. Black, E., Jelinek, F., Lafferty, J., Magerman, D., Mercer, R., Roukos, S.: Towards History-Based Grammars: Using Richer Models for Probabilistic Parsing. In: Proceedings of the 31st Annual Meeting of the ACL (ACL 93), Columbus, Ohio (1993)
6. Sima'an, K.: Computational complexity of probabilistic disambiguation. Grammars 5(2) (2002)
7. Bonnema, R., Buying, P., Scha, R.: A new probability model for data oriented parsing. In Dekker, P., ed.: Proceedings of the Twelfth Amsterdam Colloquium. ILLC/Department of Philosophy, University of Amsterdam, Amsterdam (1999)
8. Johnson, M.: The DOP estimation method is biased and inconsistent. Computational Linguistics 28(1) (2002)
9. Buratto, L.: Back-off as parameter estimation for DOP models. In de Jongh, D., ed.: Master of Logic Series (MoL). ILLC Scientific Publications, Institute for Logic, Language and Computation (ILLC), Amsterdam, The Netherlands (2002)
10. Katz, S.: Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing (ASSP) 35(3) (1987)
11. Chen, S., Goodman, J.: An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University (1998)
12. Good, I.: The population frequencies of species and the estimation of population parameters. Biometrika 40 (1953)
13. Veldhuijzen van Zanten, G.: Semantics of update expressions. Technical Report #24, Netherlands Organization for Scientific Research (NWO), Priority Programme for Speech and Language Technology (1996)
14. Black, E., et al.: A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. In: Proceedings of the February 1991 DARPA Speech and Natural Language Workshop, San Mateo, CA, Morgan Kaufmann (1991)
