Learning Probabilistic Decision Graphs
DEFF Research Database (Denmark)
Jaeger, Manfred; Dalgaard, Jens; Silander, Tomi
2004-01-01
Probabilistic decision graphs (PDGs) are a representation language for probability distributions based on binary decision diagrams. PDGs can encode (context-specific) independence relations that cannot be captured in a Bayesian network structure, and can sometimes provide computationally more efficient representations than Bayesian networks. In this paper we present an algorithm for learning PDGs from data. First experiments show that the algorithm is capable of learning optimal PDG representations in some cases, and that the computational efficiency of PDG models learned from real-life data...
Cognitive biases, linguistic universals, and constraint-based grammar learning.
Culbertson, Jennifer; Smolensky, Paul; Wilson, Colin
2013-07-01
According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology-the distribution of linguistic patterns across the world's languages-and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern of results of artificial grammar experiments on noun-phrase word order (Culbertson, Smolensky, & Legendre, 2012). Our proposal has several novel properties that distinguish it from prior work in the domains of linguistic theory, computational cognitive science, and machine learning. This study illustrates how ideas from these domains can be synthesized into a model of language learning in which biases range in strength from hard (absolute) to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals. Copyright © 2013 Cognitive Science Society, Inc.
Machine learning a probabilistic perspective
Murphy, Kevin P
2012-01-01
Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic method...
Oflazer, Kemal; Tur, Gokhan
1996-01-01
This paper presents a constraint-based morphological disambiguation approach that is applicable to languages with complex morphology--specifically, agglutinative languages with productive inflectional and derivational morphological phenomena. In certain respects, our approach has been motivated by Brill's recent work, but with the observation that his transformational approach is not directly applicable to languages like Turkish. Our system combines corpus-independent hand-crafted constraint rules, constraint rules that are learned via unsupervised learning from a training corpus, and additional statistical information from the corpus to be morphologically disambiguated. The hand-crafted rules are linguistically motivated and tuned to improve precision without sacrificing recall. The unsupervised learning process produces two sets of rules: (i) choose rules, which choose morphological parses of a lexical item satisfying a constraint, effectively discarding other parses, and (ii) delete rules, which delete parses sati...
Probabilistic machine learning and artificial intelligence.
Ghahramani, Zoubin
2015-05-28
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
Jarosz, Gaja
2010-06-01
This study examines the interacting roles of implicational markedness and frequency from the joint perspectives of formal linguistic theory, phonological acquisition and computational modeling. The hypothesis that child grammars are rankings of universal constraints, as in Optimality Theory (Prince & Smolensky, 1993/2004), that learning involves a gradual transition from an unmarked initial state to the target grammar, and that order of acquisition is guided by frequency, along the lines of Levelt, Schiller & Levelt (2000), is investigated. The study reviews empirical findings on syllable structure acquisition in Dutch, German, French and English, and presents novel findings on Polish. These comparisons reveal that, to the extent allowed by implicational markedness universals, frequency covaries with acquisition order across languages. From the computational perspective, the paper shows that interacting roles of markedness and frequency in a class of constraint-based phonological learning models embody this hypothesis, and their predictions are illustrated via computational simulation.
Learning Probabilistic Models of Word Sense Disambiguation
Pedersen, Ted
1998-01-01
This dissertation presents several new methods of supervised and unsupervised learning of word sense disambiguation models. The supervised methods focus on performing model searches through a space of probabilistic models, and the unsupervised methods rely on the use of Gibbs Sampling and the Expectation Maximization (EM) algorithm. In both the supervised and unsupervised case, the Naive Bayesian model is found to perform well. An explanation for this success is presented in terms of learning rates and bias-variance decompositions.
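The Naive Bayesian model that the dissertation finds to perform well can be sketched in a few lines. The sense labels, context words, and counts below are invented toy data, not the dissertation's corpora, and Laplace smoothing over context words is one common modeling choice among many.

```python
from collections import Counter, defaultdict
import math

def train_nb(examples):
    """examples: list of (sense, context_words). Returns class priors,
    per-sense word counts, and the vocabulary."""
    sense_counts = Counter(s for s, _ in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for sense, words in examples:
        word_counts[sense].update(words)
        vocab.update(words)
    return sense_counts, word_counts, vocab

def classify(sense_counts, word_counts, vocab, context):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense),
    with Laplace (add-one) smoothing."""
    total = sum(sense_counts.values())
    best, best_lp = None, float("-inf")
    for sense, sc in sense_counts.items():
        lp = math.log(sc / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            lp += math.log((word_counts[sense][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# Toy training contexts for the ambiguous word "bank" (invented):
data = [
    ("river", ["water", "shore", "fish"]),
    ("river", ["shore", "mud", "water"]),
    ("finance", ["money", "loan", "account"]),
    ("finance", ["loan", "interest", "money"]),
]
model = train_nb(data)
print(classify(*model, ["water", "fish"]))  # → river
```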
Probabilistic Sequence Learning in Mild Cognitive Impairment
Directory of Open Access Journals (Sweden)
Dezso eNemeth
2013-07-01
Mild Cognitive Impairment (MCI) causes slight but noticeable disruption in cognitive systems, primarily executive and memory functions. However, it is not clear if the development of sequence learning is affected by an impaired cognitive system and, if so, how. The goal of our study was to investigate the development of probabilistic sequence learning, from initial acquisition to consolidation, in MCI and healthy elderly control groups. We used the Alternating Serial Reaction Time task (ASRT) to measure probabilistic sequence learning. Individuals with MCI showed weaker learning performance than the healthy elderly group. However, using the reaction times only from the second half of each learning block – after the reactivation phase – we found intact learning in MCI. Based on the assumption that the first part of each learning block is related to reactivation/recall processes, we suggest that these processes are affected in MCI. The 24-hour offline period showed no effect on sequence-specific learning in either group but did on general skill learning: the healthy elderly group showed offline improvement in general reaction times while individuals with MCI did not. Our findings deepen our understanding of the underlying mechanisms and time course of sequence acquisition and consolidation.
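The probabilistic structure of the ASRT task mentioned above (pattern trials alternating with random trials, so that some three-element runs occur more often than others) can be sketched as follows; the four-element pattern and trial count are arbitrary illustrative choices, not the study's exact parameters.

```python
import random

random.seed(0)

def asrt_sequence(pattern, n_trials):
    """Alternating Serial Reaction Time (ASRT) stream: fixed pattern
    elements alternate with random elements (P-r-P-r-...), which makes
    some stimulus triplets more probable than others."""
    seq = []
    i = 0
    for t in range(n_trials):
        if t % 2 == 0:                       # pattern trial
            seq.append(pattern[i % len(pattern)])
            i += 1
        else:                                # random trial
            seq.append(random.randrange(4))
    return seq

seq = asrt_sequence([0, 1, 2, 3], 40)
print(seq)  # even positions follow the pattern, odd positions are random
```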
Constraint-based probabilistic learning of metabolic pathways from tomato volatiles
Gavai, A.K.; Tikunov, Y.M.; Ursem, R.A.; Bovy, A.G.; Eeuwijk, van F.A.; Nijveen, H.; Lucas, P.J.F.; Leunissen, J.A.M.
2009-01-01
Clustering and correlation analysis techniques have become popular tools for the analysis of data produced by metabolomics experiments. The results obtained from these approaches provide an overview of the interactions between objects of interest. Often in these experiments, one is more interested i
Probabilistic models and machine learning in structural bioinformatics
DEFF Research Database (Denmark)
Hamelryck, Thomas
2009-01-01
... Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis...
Language-Constraint Reachability Learning in Probabilistic Graphs
Taranto, Claudio; Esposito, Floriana
2012-01-01
The probabilistic graphs framework models the uncertainty inherent in real-world domains by means of probabilistic edges whose value quantifies the likelihood of the edge's existence or the strength of the link it represents. The goal of this paper is to provide a learning method to compute the most likely relationship between two nodes in a framework based on probabilistic graphs. In particular, given a probabilistic graph, we adopted the language-constraint reachability method to compute the probability of possible interconnections that may exist between two nodes. Each of these connections may be viewed as a feature, or a factor, between the two nodes, and the corresponding probability as its weight. Each observed link is considered as a positive instance for its corresponding link label. Given the training set of observed links, an L2-regularized logistic regression has been adopted to learn a model able to predict unobserved link labels. The experiments on a real world collaborative filtering problem proved tha...
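A minimal sketch of the pipeline the abstract outlines: reachability probabilities for a few language-constrained path types serve as features for an L2-regularized logistic regression that predicts link labels. The features, labels, and hyperparameters below are invented toy values, and the plain gradient-descent fit is an illustrative stand-in for whatever solver the authors used.

```python
import numpy as np

def fit_logreg_l2(X, y, lam=0.1, lr=0.5, steps=2000):
    """L2-regularized logistic regression fit by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted P(link)
        w -= lr * (X.T @ (p - y) / n + lam * w)    # gradient + L2 penalty
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# Toy features: each row holds the probability that two language-constrained
# path types connect a node pair (invented values for illustration).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.1],
              [0.1, 0.8], [0.2, 0.9], [0.1, 0.7]])
y = np.array([1, 1, 1, 0, 0, 0])   # observed link labels
w, b = fit_logreg_l2(X, y)
print(predict(w, b, np.array([[0.85, 0.15]])))
```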
Probabilistic forecasting of wind power generation using extreme learning machine
DEFF Research Database (Denmark)
Wan, Can; Xu, Zhao; Pinson, Pierre
2014-01-01
... an extreme learning machine (ELM)-based probabilistic forecasting method for wind power generation. To account for the uncertainties in the forecasting results, several bootstrap methods have been compared for modeling the regression uncertainty, based on which the pairs bootstrap method is identified ... demonstrate that the proposed method is effective for probabilistic forecasting of wind power generation with a high potential for practical applications in power systems.
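A rough sketch, under simplifying assumptions, of the two ingredients named in the abstract: an ELM (random hidden layer plus least-squares output weights) and pairs-bootstrap prediction intervals. The synthetic sine-shaped "power curve", network size, weight scales, and interval level are all invented for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    """Extreme learning machine: random hidden layer, least-squares output."""
    W = rng.normal(scale=10.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-10.0, 10.0, size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.lstsq(H, y, rcond=None)[0]
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic "wind power" curve (invented for illustration).
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=200)

# Pairs bootstrap: refit on resampled (x, y) pairs, collect predictions,
# and read off empirical prediction intervals.
preds = []
for _ in range(100):
    idx = rng.integers(0, len(y), len(y))
    preds.append(elm_predict(elm_fit(X[idx], y[idx]), X))
preds = np.array(preds)
lower, upper = np.percentile(preds, [5, 95], axis=0)  # 90% interval
```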
Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets
2015-04-24
Learning sparse feature representations is a useful instrument for solving an ... novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets...
Baghaei, Nilufar; Mitrovic, Antonija; Irwin, Warwick
2007-01-01
We present COLLECT-UML, a constraint-based intelligent tutoring system (ITS) that teaches object-oriented analysis and design using Unified Modelling Language (UML). UML is easily the most popular object-oriented modelling technology in current practice. While teaching how to design UML class diagrams, COLLECT-UML also provides feedback on…
Learning Probabilistic Hierarchical Task Networks to Capture User Preferences
Li, Nan; Kambhampati, Subbarao; Yoon, Sungwook
2010-01-01
We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or search control knowledge (rather than preferences). Initially we will assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent the execution of preferred plans. In order to learn a distribution on plans we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, taking the perspective of task reductions as productions in a context-free...
Interference between sentence processing and probabilistic implicit sequence learning.
Directory of Open Access Journals (Sweden)
Dezso Nemeth
During sentence processing we decode the sequential combination of words, phrases or sentences according to previously learned rules. The computational mechanisms and neural correlates of these rules are still much debated. Another key issue is whether sentence processing relies solely on language-specific mechanisms or is also governed by domain-general principles. In the present study, we investigated the relationship between sentence processing and implicit sequence learning in a dual-task paradigm in which the primary task was a non-linguistic task (the Alternating Serial Reaction Time Task) for measuring probabilistic implicit sequence learning, while the secondary task was a sentence comprehension task relying on syntactic processing. We used two control conditions: a non-linguistic one (math condition) and a linguistic task (word processing task). Here we show that sentence processing interfered with the probabilistic implicit sequence learning task, while the other two tasks did not produce a similar effect. Our findings suggest that operations during sentence processing utilize resources underlying non-domain-specific probabilistic procedural learning. Furthermore, this provides a bridge between two competing frameworks of language processing. It appears that procedural and statistical models of language are not mutually exclusive, particularly for sentence processing. These results show that the implicit procedural system is engaged in sentence processing, but at the mechanism level, language might still be based on statistical computations.
DTI Image Registration under Probabilistic Fiber Bundles Tractography Learning
Lei, Tao; Fan, Yangyu; Zhang, Xiuwei
2016-01-01
Diffusion Tensor Imaging (DTI) image registration is an essential step for diffusion tensor image analysis. Most of the fiber bundle based registration algorithms use deterministic fiber tracking technique to get the white matter fiber bundles, which will be affected by the noise and volume. In order to overcome the above problem, we proposed a Diffusion Tensor Imaging image registration method under probabilistic fiber bundles tractography learning. Probabilistic tractography technique can more reasonably trace to the structure of the nerve fibers. The residual error estimation step in active sample selection learning is improved by modifying the residual error model using finite sample set. The calculated deformation field is then registered on the DTI images. The results of our proposed registration method are compared with 6 state-of-the-art DTI image registration methods under visualization and 3 quantitative evaluation standards. The experimental results show that our proposed method has a good comprehensive performance.
Probabilistic learning and inference in schizophrenia
Averbeck, Bruno B.; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S.
2010-01-01
Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behavior often called jumping to conclusions. The underlying basis for this behaviour remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping-to-conclusions behaviour, and a stochastic sequence learning task with a similar decision-theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses while receiving noisy feedback on their choices. We fit a Bayesian decision-making model to the sequence task and compared model parameters to the choice behavior in the beads task in both patients and healthy subjects. We found that patients did show a jumping-to-conclusions style, and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the interpretation that patients select early because they have a low threshold for making decisions, and that they make choices on the basis of relatively little evidence. PMID:20810252
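For readers unfamiliar with the beads task, the Bayesian updating it involves can be sketched as follows; the jar composition (80/20) and the decision threshold are conventional illustrative values, not necessarily the study's exact parameters.

```python
def beads_posterior(draws, p=0.8):
    """Posterior that the majority-red jar is the source, after each draw.
    Two jars: one with proportion p red beads, one with 1-p red.
    Uniform prior over jars."""
    post = 0.5
    history = []
    for bead in draws:                  # bead: 'r' (red) or 'b' (blue)
        like_A = p if bead == "r" else 1 - p
        like_B = (1 - p) if bead == "r" else p
        post = like_A * post / (like_A * post + like_B * (1 - post))
        history.append(post)
    return history

def draws_to_decide(draws, threshold=0.9, p=0.8):
    """Number of beads seen before the posterior first crosses the
    decision threshold (in either direction); None if it never does."""
    for i, post in enumerate(beads_posterior(draws, p), start=1):
        if post >= threshold or post <= 1 - threshold:
            return i
    return None

history = beads_posterior("rrb")
print(history)  # rises with each red bead, falls back after the blue one
```

A lower threshold produces a decision after fewer beads, which is the "jumping to conclusions" pattern the abstract describes.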
Structure Learning of Probabilistic Graphical Models: A Comprehensive Survey
Zhou, Yang
2011-01-01
Probabilistic graphical models combine graph theory and probability theory to give multivariate statistical modeling. They provide a unified description of uncertainty using probability and of complexity using the graphical model. In particular, graphical models provide several useful properties: (i) they give a simple and intuitive interpretation of the structures of probabilistic models, and can be used to design and motivate new models; (ii) they provide additional insights into the properties of the model, including its conditional independence properties; (iii) the complex computations required to perform inference and learning in sophisticated models can be expressed in terms of graphical manipulations, in which the underlying mathematical expressions are carried along implicitly. Graphical models have been applied to a large number of fields, including bioinformatics, social science, control theory, image processing, marketing analysis, amon...
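The conditional independence properties read off the graph can be illustrated with a three-node chain A → B → C, where the graph implies A ⊥ C | B. The conditional probability tables below are arbitrary toy numbers.

```python
from itertools import product

# A tiny chain-structured model A -> B -> C (toy CPTs, invented):
pA = {0: 0.6, 1: 0.4}
pB_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # P(B | A)
pC_B = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}  # P(C | B)

def joint(a, b, c):
    """Joint factorizes along the chain: P(A) P(B|A) P(C|B)."""
    return pA[a] * pB_A[a][b] * pC_B[b][c]

def p_c_given_ab(a, b, c):
    """P(C = c | A = a, B = b) by enumeration over C."""
    denom = sum(joint(a, b, cc) for cc in (0, 1))
    return joint(a, b, c) / denom

# Conditioning on B makes C independent of A, as the graph implies:
for b, c in product((0, 1), (0, 1)):
    assert abs(p_c_given_ab(0, b, c) - p_c_given_ab(1, b, c)) < 1e-12
print("A is independent of C given B")
```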
Lessons learned on probabilistic methodology for precursor analyses
Energy Technology Data Exchange (ETDEWEB)
Babst, Siegfried [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Berlin (Germany); Wielenberg, Andreas; Gaenssmantel, Gerhard [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)
2016-11-15
Based on its experience in precursor assessment of operating experience from German NPP and related international activities in the field, GRS has identified areas for enhancing probabilistic methodology. These are related to improving the completeness of PSA models, to insufficiencies in probabilistic assessment approaches, and to enhancements of precursor assessment methods. Three examples from recent practice in precursor assessments illustrating relevant methodological insights are provided and discussed in more detail. Our experience reinforces the importance of having full-scope, current PSA models up to Level 2 PSA and including hazard scenarios for precursor analysis. Our lessons learned include that PSA models should be regularly updated regarding CCF data and inclusion of newly discovered CCF mechanisms or groups. Moreover, precursor classification schemes should be extended to degradations and unavailabilities of the containment function. Finally, PSA and precursor assessments should put more emphasis on the consideration of passive provisions for safety, e.g. by sensitivity cases.
Games people play: How video games improve probabilistic learning.
Schenk, Sabrina; Lech, Robert K; Suchan, Boris
2017-09-29
Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance of video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed for video gamers stronger activation clusters in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas that are related to attention and executive functions as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the usage of declarative knowledge as well as hippocampal involvement and enhances overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance, independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.
Machine learning, computer vision, and probabilistic models in jet physics
CERN. Geneva; NACHMAN, Ben
2015-01-01
In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...
Biases in probabilistic category learning in relation to social anxiety
Directory of Open Access Journals (Sweden)
Anna eAbraham
2015-08-01
Instrumental learning paradigms are rarely employed to investigate the mechanisms underlying acquired fear responses in social anxiety. Here, we adapted a probabilistic category learning paradigm to assess information processing biases as a function of the degree of social anxiety traits in a sample of healthy individuals without a diagnosis of social phobia. Participants were presented with three pairs of neutral faces with differing probabilistic accuracy contingencies (A/B: 80/20, C/D: 70/30, E/F: 60/40). Upon making their choice, negative and positive feedback was conveyed using angry and happy faces, respectively. The highly socially anxious group showed a strong tendency to be more accurate at learning the probability contingency associated with the most ambiguous stimulus pair (E/F: 60/40). Moreover, when pairing the most positively reinforced stimulus or the most negatively reinforced stimulus with all the other stimuli in a test phase, the highly socially anxious group avoided the most negatively reinforced stimulus significantly more than the control group. The results are discussed with reference to avoidance learning and hypersensitivity to negative social evaluative information associated with social anxiety.
Stimulus discriminability may bias value-based probabilistic learning.
Schutte, Iris; Slagter, Heleen A.; Collins, Anne G. E.; Frank, Michael J.; Kenemans, J. Leon
2017-01-01
Reinforcement learning tasks are often used to assess participants' tendency to learn more from the positive or more from the negative consequences of one's action. However, this assessment often requires comparison of learning performance across different task conditions, which may differ in the relative salience or discriminability of the stimuli associated with more and less rewarding outcomes, respectively. To address this issue, in a first set of studies, participants were subjected to two versions of a common probabilistic learning task. The two versions differed with respect to the stimulus (Hiragana) characters associated with reward probability. The assignment of character to reward probability was fixed within version but reversed between versions. We found that performance was highly influenced by task version, which could be explained by the relative perceptual discriminability of characters assigned to high or low reward probabilities, as assessed by a separate discrimination experiment. Participants were more reliable in selecting rewarding characters that were more discriminable, leading to differences in learning curves and their sensitivity to reward probability. This difference in experienced reinforcement history was accompanied by performance biases in a test phase assessing the ability to learn from positive vs. negative outcomes. In a subsequent large-scale web-based experiment, this impact of task version on learning and test measures was replicated and extended. Collectively, these findings imply a key role for perceptual factors in guiding reward learning and underscore the need to control stimulus discriminability when making inferences about individual differences in reinforcement learning. PMID:28481915
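The kind of model typically fit to such probabilistic learning tasks (not necessarily the authors' exact model) is a Q-learner with separate learning rates for positive and negative prediction errors; the reward probabilities, softmax temperature, and learning rates below are illustrative toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(alpha_gain=0.3, alpha_loss=0.3, beta=3.0,
             trials=2000, p_reward=(0.8, 0.2)):
    """Q-learning on a two-choice probabilistic task, with separate
    learning rates for positive and negative prediction errors."""
    Q = np.zeros(2)
    choices = []
    for _ in range(trials):
        probs = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice
        c = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[c])              # stochastic reward
        delta = r - Q[c]                                   # prediction error
        Q[c] += (alpha_gain if delta > 0 else alpha_loss) * delta
        choices.append(c)
    return Q, np.array(choices)

Q, choices = simulate()
# Late in learning, the 80%-rewarded option should dominate choice.
print(Q, choices[-500:].mean())
```

Asymmetric `alpha_gain`/`alpha_loss` values are what such fits use to quantify learning more from positive vs. negative outcomes.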
Learning to Estimate Dynamical State with Probabilistic Population Codes.
Directory of Open Access Journals (Sweden)
Joseph G Makin
2015-11-01
Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons: "probabilistic population codes." We show that a recurrent neural network - a modified form of an exponential family harmonium (EFH) - that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
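The Kalman filter that the network is shown to approximate can be sketched directly; the constant-velocity dynamics, noise levels, and sequence length below are toy choices, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear dynamics: constant-velocity object, noisy position observations.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition over (pos, vel)
Q = 1e-4 * np.eye(2)                    # process noise covariance
H = np.array([[1.0, 0.0]])              # observe position only
R = np.array([[0.25]])                  # observation noise covariance

def kalman_filter(zs, x0, P0):
    """Standard Kalman filter; returns the filtered state means."""
    x, P = x0.copy(), P0.copy()
    means = []
    for z in zs:
        # Predict step.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        means.append(x.copy())
    return np.array(means)

# Simulate a moving object and track it; velocity is never observed
# directly, yet the filter recovers it from position observations alone.
true_v = 0.5
true_pos = np.arange(50) * true_v
zs = true_pos[:, None] + 0.5 * rng.normal(size=(50, 1))
est = kalman_filter(zs, x0=np.array([0.0, 0.0]), P0=np.eye(2))
print(est[-1])  # final (position, velocity) estimate
```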
Online probabilistic learning with an ensemble of forecasts
Thorey, Jean; Mallet, Vivien; Chaussin, Christophe
2016-04-01
Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However applying the CRPS on weighted empirical distribution functions (deriving from the weighted ensemble) may introduce a bias because of which minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated to the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
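The standard empirical CRPS for an unweighted m-member ensemble, the score being minimized above, can be computed directly (this is the plain textbook estimator, not the clustered unbiased variant the paper proposes):

```python
def crps_ensemble(members, y):
    """Empirical CRPS of an unweighted ensemble against observation y:
    mean member error minus half the mean pairwise member spread."""
    m = len(members)
    term1 = sum(abs(x - y) for x in members) / m
    # Note: the unbiased variant divides the spread term by m*(m-1) instead
    # of m*m, which is the bias issue the paper addresses for weighted cases.
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * m * m)
    return term1 - term2
```

For a single-member ensemble the score reduces to the absolute error, and a perfect deterministic forecast scores zero.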
Directory of Open Access Journals (Sweden)
Arnaud Gotlieb
2013-02-01
Full Text Available Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging as it requires dealing with an infinite number of states with standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, array and memory manipulations with the fundamental notion of constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
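As a toy illustration of interpreting an imperative construct as a constraint, here is a hypothetical mini-filter for the statement `y = x + c` over interval domains (our sketch of the general filtering idea, not the paper's solver):

```python
# Interpret "y = x + c" as a constraint and narrow interval domains (lo, hi)
# until both variables are consistent with it (a toy filtering consistency).
def narrow_add_const(x_dom, c, y_dom):
    xlo, xhi = x_dom
    ylo, yhi = y_dom
    # Forward: y must lie inside x + c; backward: x must lie inside y - c.
    new_y = (max(ylo, xlo + c), min(yhi, xhi + c))
    new_x = (max(xlo, new_y[0] - c), min(xhi, new_y[1] - c))
    return new_x, new_y

# Reachability query: can "y = x + 1" reach y == 5 when x is in [-10, 10]?
x, y = narrow_add_const((-10, 10), 1, (5, 5))
# x narrows to (4, 4): the only state from which y == 5 is reachable.
```

Chaining such filters over all statements of a program is what lets the constraint model prune unreachable states without enumerating them.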
A Probabilistic Approach for Learning Folksonomies from Structured Data
Plangprasopchok, Anon; Getoor, Lise
2010-01-01
Learning structured representations has emerged as an important problem in many domains, including document and Web data mining, bioinformatics, and image analysis. One approach to learning complex structures is to integrate many smaller, incomplete and noisy structure fragments. In this work, we present an unsupervised probabilistic approach that extends affinity propagation to combine the small ontological fragments into a collection of integrated, consistent, and larger folksonomies. This is a challenging task because the method must aggregate similar structures while avoiding structural inconsistencies and handling noise. We validate the approach on a real-world social media dataset, comprised of shallow personal hierarchies specified by many individual users, collected from the photosharing website Flickr. Our empirical results show that our proposed approach is able to construct deeper and denser structures, compared to an approach using only the standard affinity propagation algorithm. Additionally, th...
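A much-simplified sketch of the aggregation idea, merging shallow (parent, child) links from many users by vote (our illustration only; the paper's method extends affinity propagation and handles structural consistency far more carefully):

```python
from collections import Counter

# Merge shallow personal hierarchies: each child keeps its most frequently
# observed parent, which avoids giving one node two conflicting parents.
def merge_hierarchies(user_edges):
    votes = Counter()
    for edges in user_edges:
        for parent, child in edges:
            votes[(parent, child)] += 1
    merged = {}
    for (parent, child), n in votes.most_common():
        if child not in merged and parent != child:
            merged[child] = parent   # highest-vote parent wins
    return merged
```

Even this crude majority scheme shows why aggregation helps: links too sparse in any one user's hierarchy become reliable once pooled across users.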
Institute of Scientific and Technical Information of China (English)
WANG Ke; HUANG Zhi; ZHONG Zhihua
2014-01-01
Due to large variations in the environment, with ever-changing backgrounds and vehicles of different shapes, colors and appearances, implementing a real-time on-board vehicle recognition system with high adaptability, efficiency and robustness in complicated environments remains challenging. This paper introduces a simultaneous detection and tracking framework for robust on-board vehicle recognition based on monocular vision technology. The framework utilizes a novel layered machine learning method and particle filter to build a multi-vehicle detection and tracking system. In the vehicle detection stage, a layered machine learning method is presented, which combines coarse search and fine search to obtain the target using the AdaBoost-based training algorithm. A pavement segmentation method based on characteristic similarity is proposed to estimate the most likely pavement area. Efficiency and accuracy are enhanced by restricting vehicle detection to the downsized pavement area. In the vehicle tracking stage, a multi-objective tracking algorithm based on target state management and particle filter is proposed. The proposed system is evaluated on roadway video captured under a variety of traffic, illumination, and weather conditions. The evaluation shows that, under conditions of proper illumination and clear vehicle appearance, the proposed system achieves a 91.2% detection rate and a 2.6% false detection rate. Compared with typical algorithms, the presented algorithm reduces the false detection rate by nearly half at the cost of a 2.7%–8.6% lower detection rate. The proposed multi-vehicle detection and tracking system is promising for implementation in an on-board vehicle recognition system with high precision, strong robustness and low computational cost.
Case-Based Reasoning for Explaining Probabilistic Machine Learning
Directory of Open Access Journals (Sweden)
Tomas Olsson
2014-04-01
Full Text Available This paper describes a generic framework for explaining the prediction of probabilistic machine learning algorithms using cases. The framework consists of two components: a similarity metric between cases that is defined relative to a probability model and a novel case-based approach to justifying the probabilistic prediction by estimating the prediction error using case-based reasoning. As basis for deriving similarity metrics, we define similarity in terms of the principle of interchangeability: two cases are considered similar or identical if the two probability distributions, derived from excluding either one or the other case in the case base, are identical. Lastly, we show the applicability of the proposed approach by deriving a metric for linear regression, and apply the proposed approach for explaining predictions of the energy performance of households.
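The interchangeability principle can be illustrated for one-dimensional least squares: compare the models obtained by excluding one case or the other from the case base (our simplified sketch of the idea, not the paper's actual metric):

```python
# Fit y = b*x + a by ordinary least squares on (x, y) pairs.
def fit_line(pts):
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    b = sxy / sxx
    return b, my - b * mx            # slope, intercept

def loo_distance(pts, i, j):
    """Dissimilarity of cases i and j: distance between the models fit
    without case i versus without case j (interchangeable -> distance 0)."""
    bi, ai = fit_line([p for k, p in enumerate(pts) if k != i])
    bj, aj = fit_line([p for k, p in enumerate(pts) if k != j])
    return abs(bi - bj) + abs(ai - aj)
```

Two points lying exactly on the common trend are interchangeable (distance near zero), while an outlier is far from every regular case because removing it changes the fitted model drastically.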
The Sense of Confidence during Probabilistic Learning: A Normative Account.
Directory of Open Access Journals (Sweden)
Florent Meyniel
2015-06-01
Full Text Available Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems
The Sense of Confidence during Probabilistic Learning: A Normative Account
Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas
2015-01-01
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core
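For a stationary environment, the normative estimate and its attached confidence have a closed form; a minimal Beta-posterior sketch (our simplification, since the task's optimal observer also handles change points):

```python
from math import sqrt

# Bayesian observer for a transition probability: Beta(a, b) posterior over
# the probability of outcome 1, with confidence read off the posterior spread.
def observe(transitions, a=1.0, b=1.0):
    """transitions: iterable of 0/1 outcomes. Returns (estimate, sd)."""
    for t in transitions:
        a, b = a + t, b + (1 - t)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)
```

Confidence, read as the inverse of the posterior standard deviation, grows with the number of observations within a stable period, which is one of the normative properties the subjects matched.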
Learning probabilistic features for robotic navigation using laser sensors.
Directory of Open Access Journals (Sweden)
Fidel Aznar
Full Text Available SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log(N)) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.
Learning Probabilistic Features for Robotic Navigation Using Laser Sensors
Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José
2014-01-01
SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(Nlog(N)) to O(N2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377
Probabilistic motor sequence yields greater offline and less online learning than fixed sequence
Directory of Open Access Journals (Sweden)
Yue eDu
2016-03-01
Full Text Available It is well acknowledged that motor sequences can be learned quickly through online learning. Subsequently, the initial acquisition of a motor sequence is boosted or consolidated by offline learning. However, little is known about whether offline learning can drive the fast learning of motor sequences (i.e., initial sequence learning) in the first training session. To examine offline learning in the fast learning stage, we asked four groups of young adults to perform the serial reaction time (SRT) task with either a fixed or probabilistic sequence and with or without preliminary knowledge of the presence of a sequence. The sequence and instruction types were manipulated to emphasize either procedural (probabilistic sequence; no preliminary knowledge) or declarative (fixed sequence; with preliminary knowledge) memory, which were found to either facilitate or inhibit offline learning. In the SRT task, there were six learning blocks with a two-minute break between each consecutive block. Throughout the session, stimuli followed the same fixed or probabilistic pattern except in Block 5, in which stimuli appeared in a random order. We found that preliminary knowledge facilitated the learning of a fixed sequence, but not a probabilistic sequence. In addition to overall learning measured by the mean reaction time (RT), we examined the progressive changes in RT within and between blocks (i.e., online and offline learning, respectively). It was found that the two groups who performed the fixed sequence, regardless of preliminary knowledge, showed greater online learning than the other two groups who performed the probabilistic sequence. The groups who performed the probabilistic sequence, regardless of preliminary knowledge, did not display online learning, as indicated by a decline in performance within the learning blocks. However, they did demonstrate remarkably greater offline improvement in RT, which suggests that they are learning the probabilistic sequence
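With block-wise reaction times, the two quantities can be scored as simple differences (a hypothetical scoring sketch; the function names and sign conventions are ours, not the paper's):

```python
# Online learning: RT drop from the first to the last trial within a block.
# Offline learning: RT drop across the break between consecutive blocks.
def online_offline(blocks):
    """blocks: list of per-trial RT lists (ms), one list per learning block."""
    online = [b[0] - b[-1] for b in blocks]                    # within-block gain
    offline = [blocks[i][-1] - blocks[i + 1][0]                # across-break gain
               for i in range(len(blocks) - 1)]
    return online, offline
```

A negative offline value means RT rose again after the break, i.e. the within-block gain did not consolidate.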
Sari, Dwi Ivayana; Hermanto, Didik
2017-08-01
This is a developmental study of probabilistic thinking-oriented learning tools for probability material for ninth-grade students. It aims to produce good probabilistic thinking-oriented learning tools. The subjects were IX-A students of MTs Model Bangkalan. This development research used the 4-D development model, modified into define, design and develop. The teaching and learning tools consist of a lesson plan, students' worksheet, teaching media and a students' achievement test. The research instruments were a learning tools validation sheet, a teachers' activities sheet, a students' activities sheet, a students' response questionnaire and the students' achievement test. Results from these instruments were analyzed descriptively to answer the research objectives. The result was valid probabilistic thinking-oriented learning tools for teaching probability to ninth-grade students. After the tools were revised based on validation and tried out in class, teachers' classroom management was effective, students' activities were good, students' responses to the learning tools were positive, and the achievement test met the validity, sensitivity and reliability criteria. In summary, these learning tools can be used by teachers to teach probability and develop students' probabilistic thinking.
Samanez-Larkin, Gregory R; Levens, Sara M; Perry, Lee M; Dougherty, Robert F; Knutson, Brian
2012-04-11
Frontostriatal circuits have been implicated in reward learning, and emerging findings suggest that frontal white matter structural integrity and probabilistic reward learning are reduced in older age. This cross-sectional study examined whether age differences in frontostriatal white matter integrity could account for age differences in reward learning in a community life span sample of human adults. By combining diffusion tensor imaging with a probabilistic reward learning task, we found that older age was associated with decreased reward learning and decreased white matter integrity in specific pathways running from the thalamus to the medial prefrontal cortex and from the medial prefrontal cortex to the ventral striatum. Further, white matter integrity in these thalamocorticostriatal paths could statistically account for age differences in learning. These findings suggest that the integrity of frontostriatal white matter pathways critically supports reward learning. The findings also raise the possibility that interventions that bolster frontostriatal integrity might improve reward learning and decision making.
Probabilistic Universal Learning Networks and their Applications to Nonlinear Control Systems
1998-01-01
Probabilistic Universal Learning Networks (PrULNs) are proposed, which are learning networks with a capability of dealing with stochastic signals. PrULNs are extensions of Universal Learning Networks (ULNs). ULNs form a superset of neural networks and were proposed to provide a universal framework for modeling and control of nonlinear large-scale complex systems. A generalized learning algorithm has been devised for ULNs which can also be used in a unified manner for almost all kinds of learn...
The Gain-Loss Model: A Probabilistic Skill Multimap Model for Assessing Learning Processes
Robusto, Egidio; Stefanutti, Luca; Anselmi, Pasquale
2010-01-01
Within the theoretical framework of knowledge space theory, a probabilistic skill multimap model for assessing learning processes is proposed. The learning process of a student is modeled as a function of the student's knowledge and of an educational intervention on the attainment of specific skills required to solve problems in a knowledge…
Distinct Roles of Dopamine and Subthalamic Nucleus in Learning and Probabilistic Decision Making
Coulthard, Elizabeth J.; Bogacz, Rafal; Javed, Shazia; Mooney, Lucy K.; Murphy, Gillian; Keeley, Sophie; Whone, Alan L.
2012-01-01
Even simple behaviour requires us to make decisions based on combining multiple pieces of learned and new information. Making such decisions requires both learning the optimal response to each given stimulus as well as combining probabilistic information from multiple stimuli before selecting a response. Computational theories of decision making…
Scalable learning of probabilistic latent models for collaborative filtering
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre
2015-01-01
Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly...
Probabilistic Forecasting of Traffic Flow Using Multikernel Based Extreme Learning Machine
Directory of Open Access Journals (Sweden)
Yiming Xing
2017-01-01
Full Text Available Real-time and accurate prediction of traffic flow is the key to intelligent transportation systems (ITS). However, due to the nonstationarity of traffic flow data, traditional point forecasting can hardly be accurate, so probabilistic forecasting methods are essential for quantifying the potential risks and uncertainties in traffic management. A probabilistic forecasting model of traffic flow based on a multikernel extreme learning machine (MKELM) is proposed. Moreover, the optimal output weights of the MKELM are obtained by utilizing the quantum-behaved particle swarm optimization (QPSO) algorithm. To verify its effectiveness, traffic flow probabilistic prediction using QPSO-MKELM was compared with other learning methods. Experimental results show that QPSO-MKELM is more effective for practical applications and can help traffic managers make sound decisions.
Model Learning for Probabilistic Simulation on Rare Events and Scenarios
2015-03-06
observed data only. This is a problem called covariate shift in statistics. We need to calibrate the probability to avoid the underestimation of our...fall records causing the events/scenarios. This is called a covariate shift problem in statistics. We need to calibrate the probability to avoid the...their causes and consequences. Their probabilities are also quantitatively provided based on the mathematically rigorous and probabilistic inference
Skilleter, Ashley Jayne; Weickert, Cynthia Shannon; Moustafa, Ahmed Abdelhalim; Gendy, Rasha; Chan, Mico; Arifin, Nur; Mitchell, Philip Bowden; Weickert, Thomas Wesley
2014-11-01
The brain derived neurotrophic factor (BDNF) val66met polymorphism rs6265 influences learning and may represent a risk factor for schizophrenia. Healthy people with high schizotypal personality traits display cognitive deficits that are similar to but not as severe as those observed in schizophrenia and they can be studied without confounds of antipsychotics or chronic illness. How genetic variation in BDNF may impact learning in individuals falling along the schizophrenia spectrum is unknown. We predicted that schizotypal personality traits would influence learning and that schizotypal personality-based differences in learning would vary depending on the BDNF val66met genotype. Eighty-nine healthy adults completed the Schizotypal Personality Questionnaire (SPQ) and a probabilistic association learning test. Blood samples were genotyped for the BDNF val66met polymorphism. An ANOVA was performed with BDNF genotype (val homozygotes and met-carriers) and SPQ score (high/low) as grouping variables and probabilistic association learning as the dependent variable. Participants with low SPQ scores (fewer schizotypal personality traits) showed significantly better learning than those with high SPQ scores. BDNF met-carriers displaying few schizotypal personality traits performed best, whereas BDNF met-carriers displaying high schizotypal personality traits performed worst. Thus, the BDNF val66met polymorphism appears to influence probabilistic association learning differently depending on the extent of schizotypal personality traits displayed. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Sujadi, Imam; Kurniasih, Rini; Subanti, Sri
2017-05-01
In the era of 21st century learning, technology is needed as a learning medium. Using Edmodo as a learning medium is one option for complementing the learning process. This research focuses on the effectiveness of learning material using Edmodo. The aim of this research is to determine whether the level of probabilistic thinking of students who use learning material with Edmodo is better than that of students using the existing learning materials (books) when teaching grade 8 students. This is a quasi-experimental study using a pretest-posttest control group design. The population of this study was grade 8 students of SMPN 12 Surakarta, and the sampling technique was random sampling. The two independent samples were compared using the Kolmogorov-Smirnov test. The obtained test statistic M = 0.38 exceeds the tabled one-tailed critical value M0.05 = 0.011. The result of the research is that the learning materials with Edmodo enhance learners' level of probabilistic thinking more effectively than the existing learning materials (books). Therefore, learning material using Edmodo can be used in the learning process, and other learning material can also be developed through Edmodo.
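The two-sample Kolmogorov-Smirnov statistic used in the study is the largest gap between the two groups' empirical distribution functions, which is straightforward to compute (a generic sketch, not the study's analysis code):

```python
# Two-sample Kolmogorov-Smirnov statistic: max vertical distance between
# the empirical CDFs of two samples, evaluated at every observed value.
def ks_statistic(xs, ys):
    xs, ys = sorted(xs), sorted(ys)
    grid = sorted(set(xs) | set(ys))
    def ecdf(sample, t):
        return sum(v <= t for v in sample) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)
```

The decision rule mirrors the study: reject the hypothesis of equal distributions when the statistic exceeds the tabled critical value.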
Chase, H.W.; Swainson, R.; Durham, L.; Benham, L.; Cools, R.
2011-01-01
We assessed electrophysiological activity over the medial frontal cortex (MFC) during outcome-based behavioral adjustment using a probabilistic reversal learning task. During recording, participants were presented two abstract visual patterns on each trial and had to select the stimulus rewarded on
Hammerer, Dorothea; Li, Shu-Chen; Muller, Viktor; Lindenberger, Ulman
2011-01-01
By recording the feedback-related negativity (FRN) in response to gains and losses, we investigated the contribution of outcome monitoring mechanisms to age-associated differences in probabilistic reinforcement learning. Specifically, we assessed the difference of the monitoring reactions to gains and losses to investigate the monitoring of…
Strategies in probabilistic feedback learning in Parkinson patients OFF medication.
Bellebaum, C; Kobza, S; Ferrea, S; Schnitzler, A; Pollok, B; Südmeyer, M
2016-04-21
Studies on classification learning suggested that altered dopamine function in Parkinson's Disease (PD) specifically affects learning from feedback. In patients OFF medication, enhanced learning from negative feedback has been described. This learning bias was not seen in observational learning from feedback, indicating different neural mechanisms for this type of learning. The present study aimed to compare the acquisition of stimulus-response-outcome associations in PD patients OFF medication and healthy control subjects in active and observational learning. 16 PD patients OFF medication and 16 controls were examined with three parallel learning tasks each, two feedback-based (active and observational) and one non-feedback-based paired associates task. No acquisition deficit was seen in the patients for any of the tasks. More detailed analyses on the learning strategies did, however, reveal that the patients showed more lose-shift responses during active feedback learning than controls, and that lose-shift and win-stay responses more strongly determined performance accuracy in patients than controls. For observational feedback learning, the performance of both groups correlated similarly with the performance in non-feedback-based paired associates learning and with the accuracy of observed performance. Also, patients and controls showed comparable evidence of feedback processing in observational learning. In active feedback learning, PD patients use alternative learning strategies than healthy controls. Analyses on observational learning did not yield differences between patients and controls, adding to recent evidence of a differential role of the human striatum in active and observational learning from feedback.
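The lose-shift and win-stay tendencies analyzed above can be tallied directly from choice and feedback sequences (an illustrative scoring sketch, not the study's exact analysis code):

```python
# Win-stay: repeat the previous choice after a rewarded trial.
# Lose-shift: switch away from the previous choice after an unrewarded trial.
def wsls_rates(choices, rewards):
    """choices: sequence of chosen options; rewards: 0/1 feedback per trial."""
    ws = ws_n = ls = ls_n = 0
    for t in range(1, len(choices)):
        if rewards[t - 1]:                      # previous trial rewarded
            ws_n += 1
            ws += choices[t] == choices[t - 1]  # stayed after a win
        else:                                   # previous trial unrewarded
            ls_n += 1
            ls += choices[t] != choices[t - 1]  # shifted after a loss
    return (ws / ws_n if ws_n else None,
            ls / ls_n if ls_n else None)
```

An elevated lose-shift rate, as reported for the patients during active feedback learning, shows up directly in the second returned value.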
DEFF Research Database (Denmark)
Mølgaard, Lasse Lohilahti; Buus, Ole Thomsen; Larsen, Jan
2017-01-01
We present a data-driven machine learning approach to detect drug- and explosives-precursors using colorimetric sensor technology for air-sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present a fully-integrated portable prototype for air sampling...... of the highly multi-variate data produced from the colorimetric chip a number of machine learning techniques are employed to provide reliable classification of target analytes from confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction...... in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate of analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions...
Exner, Cornelia; Zetsche, Ulrike; Lincoln, Tania M; Rief, Winfried
2014-03-01
A tendency to overestimate threat has been shown in individuals with OCD. We tested the hypothesis that this bias in judgment is related to difficulties in learning probabilistic associations between events. Thirty participants with OCD and 30 matched healthy controls completed a learning experiment involving 2 variants of a probabilistic classification learning task. In the neutral weather-prediction task, rainy and sunny weather had to be predicted. In the emotional task danger of an epidemic from virus infection had to be predicted (epidemic-prediction task). Participants with OCD were as able as controls to improve their prediction of neutral events across learning trials but scored significantly below healthy controls on the epidemic-prediction task. Lower performance on the emotional task variant was significantly related to a heightened tendency to overestimate threat. Biased information processing in OCD might thus hamper corrective experiences regarding the probability of threatening events.
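Probabilistic classification (weather-prediction) performance is commonly modeled with a delta rule that nudges cue weights toward each trial's outcome; here is a minimal sketch under that common modeling assumption (not the paper's own model):

```python
# Delta-rule (Rescorla-Wagner style) learner for probabilistic classification:
# on each trial, the weights of the active cues move toward the outcome.
def delta_rule(trials, n_cues, lr=0.1):
    """trials: list of (active_cue_indices, outcome) with outcome 0/1."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        pred = sum(w[c] for c in cues)   # summed associative strength
        err = outcome - pred             # prediction error
        for c in cues:
            w[c] += lr * err
    return w
```

A cue that reliably predicts the outcome accumulates weight close to 1, while never-presented cues stay at zero; slower or biased error-driven updating is one way the group difference on the emotional task variant could be expressed.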
Learning a Probabilistic Topology Discovering Model for Scene Categorization.
Zhang, Luming; Ji, Rongrong; Xia, Yingjie; Zhang, Ying; Li, Xuelong
2015-08-01
A recent advance in scene categorization prefers topology-based modeling to capture the existence and relationships among different scene components. To that effect, local features are typically used to handle photographing variances such as occlusions and clutters. However, in many cases, the local features alone cannot well capture the scene semantics since they are extracted from tiny regions (e.g., 4×4 patches) within an image. In this paper, we mine a discriminative topology and a low-redundant topology from the local descriptors under a probabilistic perspective, which are further integrated into a boosting framework for scene categorization. In particular, by decomposing a scene image into basic components, a graphlet model is used to describe their spatial interactions. Accordingly, scene categorization is formulated as an inter-graphlet matching problem. The above procedure is further accelerated by introducing a probabilistic representative topology selection scheme that makes the pairwise graphlet comparison tractable despite their exponentially increasing volumes. The selected graphlets are highly discriminative and independent, characterizing the topological characteristics of scene images. A weak learner is subsequently trained for each topology, and the learners are boosted together to jointly describe the scene image. In our experiment, the visualized graphlets demonstrate that the mined topological patterns are representative of scene categories, and our proposed method beats state-of-the-art models on five popular scene data sets.
Serb, Alexander; Bill, Johannes; Khiat, Ali; Berdan, Radu; Legenstein, Robert; Prodromakis, Themis
2016-09-01
In an increasingly data-rich world, the need for computing systems that can not only process but ideally also interpret big data is becoming ever more pressing. Brain-inspired concepts have shown great promise in addressing this need. Here we demonstrate unsupervised learning in a probabilistic neural network that utilizes metal-oxide memristive devices as multi-state synapses. Our approach can be exploited for processing unlabelled data and can adapt to time-varying clusters that underlie incoming data by supporting the capability of reversible unsupervised learning. The potential of this work is showcased through the demonstration of successful learning in the presence of corrupted input data and probabilistic neurons, thus paving the way towards robust big-data processors.
Medication Impairs Probabilistic Classification Learning in Parkinson's Disease
Jahanshahi, Marjan; Wilkinson, Leonora; Gahir, Harpreet; Dharminda, Angeline; Lagnado, David A.
2010-01-01
In Parkinson's disease (PD), the tonic increase in dopamine associated with levodopa medication may overshadow the phasic release of dopamine, which is essential for learning. Thus, while the motor symptoms of PD are improved with levodopa medication, learning would be disrupted. To test this hypothesis, we investigated the effect of…
Human-level concept learning through probabilistic program induction.
Lake, Brenden M; Salakhutdinov, Ruslan; Tenenbaum, Joshua B
2015-12-11
People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
Pajak, Bozena; Fine, Alex B; Kleinschmidt, Dave F; Jaeger, T Florian
2016-12-01
We present a framework of second and additional language (L2/Ln) acquisition motivated by recent work on socio-indexical knowledge in first language (L1) processing. The distribution of linguistic categories covaries with socio-indexical variables (e.g., talker identity, gender, dialects). We summarize evidence that implicit probabilistic knowledge of this covariance is critical to L1 processing, and propose that L2/Ln learning uses the same type of socio-indexical information to probabilistically infer latent hierarchical structure over previously learned and new languages. This structure guides the acquisition of new languages based on their inferred place within that hierarchy, and is itself continuously revised based on new input from any language. This proposal unifies L1 processing and L2/Ln acquisition as probabilistic inference under uncertainty over socio-indexical structure. It also offers a new perspective on crosslinguistic influences during L2/Ln learning, accommodating gradient and continued transfer (both negative and positive) from previously learned to novel languages, and vice versa.
Tsuchida, Ami; Doll, Bradley B; Fellows, Lesley K
2010-12-15
Damage to the orbitofrontal cortex (OFC) has been linked to impaired reinforcement processing and maladaptive behavior in changing environments across species. Flexible stimulus-outcome learning, canonically captured by reversal learning tasks, has been shown to rely critically on OFC in rats, monkeys, and humans. However, the precise role of OFC in this learning remains unclear. Furthermore, whether other frontal regions also contribute has not been definitively established, particularly in humans. In the present study, a reversal learning task with probabilistic feedback was administered to 39 patients with focal lesions affecting various sectors of the frontal lobes and to 51 healthy, demographically matched control subjects. Standard groupwise comparisons were supplemented with voxel-based lesion-symptom mapping to identify regions within the frontal lobes critical for task performance. Learning in this dynamic stimulus-reinforcement environment was considered both in terms of overall performance and at the trial-by-trial level. In this challenging, probabilistic context, OFC damage disrupted both initial and reversal learning. Trial-by-trial performance patterns suggest that OFC plays a critical role in interpreting feedback from a particular trial within the broader context of the outcome history across trials rather than in simply suppressing preexisting stimulus-outcome associations. The findings show that OFC, and not other prefrontal regions, plays a necessary role in flexible stimulus-reinforcement learning in humans.
Learned graphical models for probabilistic planning provide a new class of movement primitives.
Rückert, Elmar A; Neumann, Gerhard; Toussaint, Marc; Maass, Wolfgang
2012-01-01
Biological movement generation combines three interesting aspects: its modular organization into movement primitives (MPs), its characteristics of stochastic optimality under perturbations, and its efficiency in terms of learning. A common approach to motor skill learning is to endow the primitives with dynamical systems. Here, the parameters of the primitive indirectly define the shape of a reference trajectory. We propose an alternative MP representation based on probabilistic inference in learned graphical models, with new and interesting properties that comply with salient features of biological movement control. Instead of endowing the primitives with dynamical systems, we propose to endow MPs with an intrinsic probabilistic planning system, integrating the power of stochastic optimal control (SOC) methods within an MP. The parameterization of the primitive is a graphical model that represents the dynamics and intrinsic cost function such that inference in this graphical model yields the control policy. We parameterize the intrinsic cost function using task-relevant features, such as the importance of passing through certain via-points. The system dynamics as well as intrinsic cost function parameters are learned in a reinforcement learning (RL) setting. We evaluate our approach on a complex 4-link balancing task. Our experiments show that our movement representation facilitates learning significantly and leads to better generalization to new task settings without re-learning.
Event simultaneity does not eliminate age deficits in implicit probabilistic sequence learning.
Forman-Alberti, Alissa B; Seaman, Kendra L; Howard, Darlene V; Howard, James H
2014-01-01
Recent studies have shown age-related deficits in learning subtle probabilistic sequential relationships. However, virtually all sequence learning studies have displayed successive events one at a time. Here we used a modified Triplets Learning Task to investigate whether an age deficit occurs even when sequentially presented predictive events remain in view simultaneously. Twelve young and twelve older adults observed two cue events and responded to a target event on each of a series of trials. All three events remained in view until the subject responded. Unbeknownst to participants, the first cue predicted one of four targets on 80% of the trials. Learning was indicated by faster and more accurate responding to these high-probability targets than to low-probability targets. Results revealed age deficits in sequence learning even with this simultaneous display, suggesting that age differences are not due solely to general processing declines, but rather reflect an age-related deficit in associative learning.
Gabay, Yafit; Goldfarb, Liat
2017-07-01
Although Attention-Deficit Hyperactivity Disorder (ADHD) is closely linked to executive function deficits, it has recently been attributed to procedural learning impairments that are quite distinct from the former. These observations challenge the ability of the executive function framework alone to account for the diverse range of symptoms observed in ADHD. A recent neurocomputational model emphasizes the role of striatal dopamine (DA) in explaining ADHD's broad range of deficits, but the link between this model and procedural learning impairments remains unclear. Significantly, feedback-based procedural learning is hypothesized to be disrupted in ADHD because of the involvement of striatal DA in this type of learning. In order to test this assumption, we employed two variants of a probabilistic category learning task known from the neuropsychological literature. Feedback-based (FB) and paired associate-based (PA) probabilistic category learning were employed in a non-medicated sample of ADHD participants and neurotypical participants. In the FB task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA learning task, participants viewed the cue and its associated outcome simultaneously, without making an overt response or receiving corrective feedback. In both tasks, participants were trained across 150 trials. Learning was assessed in a subsequent test without a presentation of the outcome or corrective feedback. Results revealed an interesting dissociation in which ADHD participants performed as well as control participants in the PA task, but were impaired compared with the controls in the FB task. The learning curve during FB training differed between the two groups. Taken together, these results suggest that the ability to incrementally learn by feedback is selectively disrupted in ADHD participants. These results are discussed in relation to both
Approaches to probabilistic model learning for mobile manipulation robots
Sturm, Jürgen
2013-01-01
Mobile manipulation robots are envisioned to provide many useful services both in domestic environments as well as in the industrial context. Examples include domestic service robots that implement large parts of the housework, and versatile industrial assistants that provide automation, transportation, inspection, and monitoring services. The challenge in these applications is that the robots have to function under changing, real-world conditions, be able to deal with considerable amounts of noise and uncertainty, and operate without the supervision of an expert. This book presents novel learning techniques that enable mobile manipulation robots, i.e., mobile platforms with one or more robotic manipulators, to autonomously adapt to new or changing situations. The approaches presented in this book cover the following topics: (1) learning the robot's kinematic structure and properties using actuation and visual feedback, (2) learning about articulated objects in the environment in which the robot is operating,...
Probabilistic Short-Term Wind Power Forecasting Using Sparse Bayesian Learning and NWP
Directory of Open Access Journals (Sweden)
Kaikai Pan
2015-01-01
Probabilistic short-term wind power forecasting is of great significance for the operation of wind power scheduling and the reliability of the power system. In this paper, an approach based on Sparse Bayesian Learning (SBL) and Numerical Weather Prediction (NWP) for probabilistic wind power forecasting over a horizon of 1–24 hours was investigated. In the modeling process, the wind speed data from the NWP results were first corrected, and then SBL was used to build a relationship between the combined data and the power generation to produce probabilistic power forecasts. Furthermore, in each model, the application of SBL was improved by using a modified Gaussian kernel function and parameter optimization through Particle Swarm Optimization (PSO). To validate the proposed approach, two real-world datasets were used for model construction and testing. For deterministic evaluation, the simulation results showed that the proposed model achieves a greater improvement in forecasting accuracy compared with other wind power forecast models. For probabilistic evaluation, the indicator results also demonstrate that the proposed model has outstanding performance.
Learning probabilistic models of connectivity from multiple spike train data
2010-01-01
Neuronal circuits or cell assemblies carry out brain function through complex coordinated firing patterns [1]. Inferring topology of neuronal circuits from simultaneously recorded spike train data is a challenging problem in neuroscience. In this work we present a new class of dynamic Bayesian networks to infer polysynaptic excitatory connectivity between spiking cortical neurons [2]. The emphasis on excitatory networks allows us to learn connectivity models by exploiting fast data mining alg...
Effects of positive mood on probabilistic learning: behavioral and electrophysiological correlates.
Bakic, Jasmina; Jepma, Marieke; De Raedt, Rudi; Pourtois, Gilles
2014-12-01
Whether positive mood can change reinforcement learning remains an open question. In this study, we used a probabilistic learning task to explore whether positive mood could alter the way positive versus negative feedback is used to guide learning. This process was characterized at both the behavioral and electroencephalographic levels. Thirty-two participants were randomly allocated to either a positive or a neutral (control) mood condition. Behavioral results showed that while learning performance was balanced between the two groups, participants in the positive mood group had a higher learning rate than participants in the neutral mood group. At the electrophysiological level, we found that positive mood selectively increased the error-related negativity when the stimulus-response associations were deterministic (as opposed to random or probabilistic). However, it did not influence the feedback-related negativity. These new findings are discussed in terms of an enhanced internal reward prediction error signal after the induction of positive mood when the probability of getting a reward is high.
A Comprehensive Probabilistic Framework to Learn Air Data from Surface Pressure Measurements
Directory of Open Access Journals (Sweden)
Ankur Srivastava
2015-01-01
The use of probabilistic techniques to learn air data parameters from surface pressure measurements is demonstrated. Integration of numerical models with wind tunnel data and sequential experiment design of wind tunnel runs is demonstrated in the calibration of a flush air data sensing (FADS) anemometer system. The development and implementation of a metamodeling method, Sequential Function Approximation (SFA), are presented; SFA lies at the core of the discussed probabilistic framework and is presented as a tool capable of nonlinear statistical inference, uncertainty reduction by fusion of data with physical models of variable fidelity, and sequential experiment design. This work presents the development and application of these tools in the calibration of a FADS system for a Runway Assisted Landing Site (RALS) control tower. However, the multidisciplinary nature of this work is general, and it is potentially applicable to a variety of mechanical and aerospace engineering problems.
Gabay, Yafit; Vakil, Eli; Schiff, Rachel; Holt, Lori L.
2015-01-01
Objective Developmental dyslexia is presumed to arise from specific phonological impairments. However, an emerging theoretical framework suggests that phonological impairments may be symptoms stemming from an underlying dysfunction of procedural learning. Method We tested procedural learning in adults with dyslexia (n=15) and matched controls (n=15) using two versions of the Weather Prediction Task: Feedback (FB) and Paired-associate (PA). In the FB-based task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA-based learning task, participants viewed the cue and its associated outcome simultaneously, without making an overt response or receiving feedback. In both versions, participants trained across 150 trials. Learning was assessed in a subsequent test without presentation of the outcome or corrective feedback. Results The dyslexia group exhibited impaired learning compared with the control group on both the FB and PA versions of the weather prediction task. Conclusions The results indicate that the ability to learn by feedback is not selectively impaired in dyslexia. Rather, it seems that the probabilistic nature of the task, shared by the FB and PA versions of the weather prediction task, hampers learning in those with dyslexia. Results are discussed in light of procedural learning impairments among participants with dyslexia. PMID:25730732
Energy Technology Data Exchange (ETDEWEB)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia; Grelle, Austin
2015-06-28
Many advanced small modular reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize with a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper describes the most promising options: mechanistic techniques, which share qualities with conventional probabilistic methods, and simulation-based techniques, which explicitly account for time-dependent processes. The primary intention of this paper is to describe the strengths and weaknesses of each methodology and to highlight the lessons learned while applying the two techniques, along with high-level results. This includes the global benefits and deficiencies of the methods and practical problems encountered during the implementation of each technique.
Probabilistic Models over Ordered Partitions with Application in Learning to Rank
Truyen, Tran The; Venkatesh, Svetha
2010-01-01
This paper addresses the general problem of modelling and learning rank data with ties. We propose a probabilistic generative model that treats the process as permutations over partitions. This results in a super-exponential combinatorial state space with an unknown number of partitions and unknown ordering among them. We approach the problem through discrete choice theory, where subsets are chosen in a stagewise manner, significantly reducing the state space at each stage. Further, we show that with suitable parameterisation, we can still learn the models in linear time. We evaluate the proposed models on the problem of learning to rank with data from the recently held Yahoo! challenge, and demonstrate that the models are competitive against well-known rivals.
Kurzawa, Nils; Summerfield, Christopher; Bogacz, Rafal
2017-02-01
Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model that describes this accumulation well for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given the two options. Several models have been proposed for how neural circuits can learn these log-likelihood ratios from experience, but all of these models introduced novel and specially dedicated synaptic plasticity rules. Here we show that for a certain wide class of tasks, the log-likelihood ratios are approximately linearly proportional to the expected rewards for selecting actions. Therefore, a simple model based on standard reinforcement learning rules is able to estimate the log-likelihood ratios from experience and, on each trial, accumulate the log-likelihood ratios associated with presented stimuli while selecting an action. The simulations of the model replicate experimental data on both behavior and neural activity in tasks requiring accumulation of probabilistic cues. Our results suggest that there is no need for the brain to support dedicated plasticity rules, as the standard mechanisms proposed to describe reinforcement learning can enable the neural circuits to perform efficient probabilistic inference.
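The central observation of the abstract above can be sketched in a few lines: a plain delta rule learns the expected reward of acting on each cue, and the learned values end up ordered like the cues' log-likelihood ratios, so summing them over a trial's cues approximates evidence accumulation. The cue validities and learning rate below are assumptions for illustration, not the authors' simulation parameters:

```python
import random, math

random.seed(1)

# Hypothetical cue validities: P(the "left" target is correct | cue shown)
CUE_VALIDITY = [0.9, 0.7, 0.4, 0.2]

alpha = 0.1                    # learning rate
q = [0.0] * len(CUE_VALIDITY)  # learned expected reward per cue

for _ in range(5000):
    i = random.randrange(len(CUE_VALIDITY))
    # reward +1 if choosing "left" is correct given this cue, else -1
    r = 1.0 if random.random() < CUE_VALIDITY[i] else -1.0
    q[i] += alpha * (r - q[i])  # standard delta rule

# Learned values tend to track the ordering of the log-likelihood ratios
llr = [math.log(p / (1 - p)) for p in CUE_VALIDITY]
print("q:  ", [round(v, 2) for v in q])
print("llr:", [round(v, 2) for v in llr])

# Decision on a multi-cue trial: accumulate learned values, take the sign
trial_cues = [0, 1, 3]  # indices of cues shown on this hypothetical trial
choice = "left" if sum(q[i] for i in trial_cues) > 0 else "right"
```

Each q[i] converges toward 2p - 1, which is monotonic in the log-likelihood ratio log(p / (1 - p)), so standard reinforcement learning suffices to support the accumulation, as the abstract argues.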
Brain networks for confidence weighting and hierarchical inference during probabilistic learning.
Meyniel, Florent; Dehaene, Stanislas
2017-05-09
Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
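The confidence-weighting idea above can be illustrated with a Beta-Bernoulli estimator of a single transition probability. This toy model is far simpler than the hierarchical ideal observer used in the study, and the observation sequence is invented for illustration:

```python
# A Beta posterior over a transition probability updates with an
# effective learning rate that shrinks as confidence (posterior
# precision) grows, unlike a delta rule with a constant learning rate.
def beta_update(a, b, obs):
    # obs = 1 if the transition occurred on this trial, else 0
    return a + obs, b + (1 - obs)

a, b = 1.0, 1.0  # uniform prior over the transition probability
rates = []
for obs in [1, 1, 0, 1, 1, 1, 0, 1]:  # hypothetical observations
    mean_before = a / (a + b)
    a, b = beta_update(a, b, obs)
    mean_after = a / (a + b)
    # implied learning rate: step size relative to the prediction error
    rates.append((mean_after - mean_before) / (obs - mean_before))

print([round(r, 3) for r in rates])  # decays as evidence accrues
confidence = a + b                   # posterior precision grows with data
```

The implied learning rate comes out to 1 / (a + b + 1) on every trial, so prior knowledge and incoming evidence are automatically weighted by their reliabilities, which is the qualitative signature the study looks for in subjects' confidence reports.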
Barsky, Murray M; Tucker, Matthew A; Stickgold, Robert
2015-07-01
During wakefulness the brain creates meaningful relationships between disparate stimuli in ways that escape conscious awareness. Processes active during sleep can strengthen these relationships, leading to more adaptive use of those stimuli when encountered during subsequent wake. Performance on the Weather Prediction Task (WPT), a well-studied measure of implicit probabilistic learning, has been shown to improve significantly following a night of sleep, with stronger initial learning predicting more nocturnal REM sleep. We investigated this relationship further, studying the effect on WPT performance of a daytime nap containing REM sleep. We also added an interference condition after the nap/wake period as an additional probe of memory strength. Our results show that a nap significantly boosts WPT performance, and that this improvement is correlated with the amount of REM sleep obtained during the nap. When interference training is introduced following the nap, however, this REM-sleep benefit vanishes. In contrast, following an equal period of wake, performance is both unchanged from training and unaffected by interference training. Thus, while the true probabilistic relationships between WPT stimuli are strengthened by sleep, these changes are selectively susceptible to the destructive effects of retroactive interference, at least in the short term.
Thoma, Patrizia; Edel, Marc-Andreas; Suchan, Boris; Bellebaum, Christian
2015-01-30
Attention Deficit Hyperactivity Disorder (ADHD) is hypothesized to be characterized by altered reinforcement sensitivity. The main aim of the present study was to assess alterations in the electrophysiological correlates of monetary reward processing in adult patients with ADHD of the combined subtype. Fourteen adults with ADHD of the combined subtype and 14 healthy control participants performed an active and an observational probabilistic reward-based learning task while an electroencephalogram (EEG) was recorded. Regardless of feedback valence, there was a general feedback-related negativity (FRN) enhancement in combination with reduced learning performance during both active and observational reward learning in patients with ADHD relative to healthy controls. Other feedback-locked potentials such as the P200 and P300 and response-locked potentials were unaltered in the patients. There were no significant correlations between learning performance, FRN amplitudes and clinical symptoms, neither in the overall group involving all participants, nor in patients or controls considered separately. This pattern of findings might reflect generally impaired reward prediction in adults with ADHD of the combined subtype. We demonstrated for the first time that patients with ADHD of the combined subtype show not only deficient active reward learning but are also impaired when learning by observing other people's outcomes.
Ahmed A. Moustafa; Gluck, Mark A.; Herzallah, Mohammad M.; Myers, Catherine E.
2015-01-01
Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incor...
Mølgaard, Lasse L.; Buus, Ole T.; Larsen, Jan; Babamoradi, Hamid; Thygesen, Ida L.; Laustsen, Milan; Munk, Jens Kristian; Dossi, Eleftheria; O'Keeffe, Caroline; Lässig, Lina; Tatlow, Sol; Sandström, Lars; Jakobsen, Mogens H.
2017-05-01
We present a data-driven machine learning approach to detect drug and explosives precursors using colorimetric sensor technology for air sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present, a fully-integrated portable prototype for air sampling with disposable sensing chips and automated data acquisition has been developed. The prototype allows for fast, user-friendly sampling, which has made it possible to produce large datasets of colorimetric data for different target analytes in laboratory and simulated real-world application scenarios. To make use of the highly multivariate data produced from the colorimetric chip, a number of machine learning techniques are employed to provide reliable classification of target analytes from confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate of analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions. The robustness of the colorimetric sensor has been evaluated in a series of experiments focusing on the amphetamine precursor phenylacetone as well as the improvised explosives precursor hydrogen peroxide. The analysis demonstrates that the system is able to detect analytes in clean air and mixed with substances that occur naturally in real-world sampling scenarios. The technology under development in CRIM-TRACK has the potential to serve as an effective tool for controlling trafficking of illegal drugs, for explosives detection, and for other law enforcement applications.
Buchsbaum, Daphna; Seiver, Elizabeth; Bridgers, Sophie; Gopnik, Alison
2012-01-01
A major challenge children face is uncovering the causal structure of the world around them. Previous research on children's causal inference has demonstrated their ability to learn about causal relationships in the physical environment using probabilistic evidence. However, children must also learn about causal relationships in the social environment, including discovering the causes of other people's behavior, and understanding the causal relationships between others' goal-directed actions and the outcomes of those actions. In this chapter, we argue that social reasoning and causal reasoning are deeply linked, both in the real world and in children's minds. Children use both types of information together and in fact reason about both physical and social causation in fundamentally similar ways. We suggest that children jointly construct and update causal theories about their social and physical environment and that this process is best captured by probabilistic models of cognition. We first present studies showing that adults are able to jointly infer causal structure and human action structure from videos of unsegmented human motion. Next, we describe how children use social information to make inferences about physical causes. We show that the pedagogical nature of a demonstrator influences children's choices of which actions to imitate from within a causal sequence and that this social information interacts with statistical causal evidence. We then discuss how children combine evidence from an informant's testimony and expressed confidence with evidence from their own causal observations to infer the efficacy of different potential causes. We also discuss how children use these same causal observations to make inferences about the knowledge state of the social informant. Finally, we suggest that psychological causation and attribution are part of the same causal system as physical causation. We present evidence that just as children use covariation between
Wilkinson, Leonora; Teo, James T.; Obeso, Ignacio; Rothwell, John C.; Jahanshahi, Marjan
2010-01-01
Theta burst transcranial magnetic stimulation (TBS) is considered to produce plastic changes in human motor cortex. Here, we examined the inhibitory and excitatory effects of TBS on implicit sequence learning using a probabilistic serial reaction time paradigm. We investigated the involvement of several cortical regions associated with implicit…
Chikalov, Igor
2011-02-15
Background: Hydrogen bonds (H-bonds) play a key role in both the formation and stabilization of protein structures. They form and break while a protein deforms, for instance during the transition from a non-functional to a functional state. The intrinsic strength of an individual H-bond has been studied from an energetic viewpoint, but energy alone may not be a very good predictor. Methods: This paper describes inductive learning methods to train protein-independent probabilistic models of H-bond stability from molecular dynamics (MD) simulation trajectories of various proteins. The training data contain 32 input attributes (predictors) that describe an H-bond and its local environment in a conformation c, and the output attribute is the probability that the H-bond will be present in an arbitrary conformation of this protein achievable from c within a given time duration. We model the dependence of the output variable on the predictors by a regression tree. Results: Several models are built using 6 MD simulation trajectories containing over 4000 distinct H-bonds (millions of occurrences). Experimental results demonstrate that such models can predict H-bond stability quite well. They perform roughly 20% better than models based on H-bond energy alone. In addition, they can accurately identify a large fraction of the least stable H-bonds in a conformation. In most tests, about 80% of the 10% H-bonds predicted as the least stable are actually among the 10% truly least stable. The important attributes identified during tree construction are consistent with previous findings. Conclusions: We use inductive learning methods to build protein-independent probabilistic models to study H-bond stability, and demonstrate that the models perform better than H-bond energy alone. © 2011 Chikalov et al.; licensee BioMed Central Ltd.
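A minimal sketch of the modeling step described above: a regression tree trained to predict H-bond stability from local descriptors. The feature names, value ranges, and synthetic data below are illustrative assumptions, not the paper's actual 32 attributes.

```python
# Sketch: fit a regression tree predicting H-bond stability (probability of
# persisting) from local descriptors. Features and data are invented here.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: donor-acceptor distance (angstroms), bond angle
# (degrees), and a crude H-bond energy term (kcal/mol).
distance = rng.uniform(2.6, 3.5, n)
angle = rng.uniform(120, 180, n)
energy = rng.uniform(-5.0, -0.5, n)
# Synthetic ground truth: shorter, more linear, lower-energy bonds are stabler.
stability = np.clip(
    1.8 - 0.5 * distance + 0.004 * (angle - 120) - 0.05 * energy
    + rng.normal(0, 0.05, n), 0, 1)
X = np.column_stack([distance, angle, energy])

tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=20).fit(X, stability)
pred = tree.predict(X)
print("R^2 on training data:", round(tree.score(X, stability), 3))
```

As in the paper, a tree of this kind also exposes which splits (attributes) matter most via its structure, which is how the "important attributes" above were identified.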
Martínez-Velázquez, Eduardo S; Ramos-Loyo, Julieta; González-Garrido, Andrés A; Sequeira, Henrique
2015-01-21
Feedback-related negativity (FRN) is a negative deflection over frontocentral regions that appears around 250 ms after gain or loss feedback on chosen alternatives in a gambling task. Few studies have reported FRN enhancement in adolescents compared with adults in a gambling task without probabilistic reinforcement learning, despite the fact that learning from positive or negative consequences is crucial for decision-making during adolescence. Therefore, the aim of the present research was to identify differences in FRN amplitude and latency between adolescents and adults on a gambling task with favorable and unfavorable probabilistic reinforcement learning conditions, in addition to a nonlearning condition with monetary gains and losses. Higher rates of high-magnitude choices during the final 30 trials compared with the first 30 trials were observed during the favorable condition, whereas lower rates were observed during the unfavorable condition in both groups. Higher FRN amplitude in all conditions, and longer latency in the nonlearning condition, were observed in adolescents compared with adults, in relation to losses. Results indicate that both the adolescents and the adults improved their performance in relation to positive and negative feedback. However, the FRN findings suggest an increased sensitivity to external feedback about losses in adolescents compared with adults, irrespective of the presence or absence of probabilistic reinforcement learning. These results reflect processing differences in the neural monitoring system and provide new perspectives on the dynamic development of the adolescent brain.
Lancaster, Thomas M; Ihssen, Niklas; Brindley, Lisa M; Tansey, Katherine E; Mantripragada, Kiran; O'Donovan, Michael C; Owen, Michael J; Linden, David E J
2016-02-01
A substantial proportion of schizophrenia liability can be explained by additive genetic factors. Risk profile scores (RPS) directly index risk using a summated total of common risk variants weighted by their effect. Previous studies suggest that schizophrenia RPS predict alterations to neural networks that support working memory and verbal fluency. In this study, we apply schizophrenia RPS to fMRI data to elucidate the effects of polygenic risk on functional brain networks during a probabilistic-learning neuroimaging paradigm. The neural networks recruited during this paradigm have previously been shown to be altered in unmedicated schizophrenia patients and relatives of schizophrenia patients, which may reflect genetic susceptibility. We created schizophrenia RPS using summary data from the Psychiatric Genomics Consortium (Schizophrenia Working Group) for 83 healthy individuals and explored associations between schizophrenia RPS and blood oxygen level dependent (BOLD) signal during periods of choice behavior (switch-stay) and reflection upon choice outcome (reward-punishment). We show that schizophrenia RPS is associated with alterations in the frontal pole (P(whole-brain corrected) = 0.048) and the ventral striatum (P(ROI corrected) = 0.036) during choice behavior, but not choice outcome. We suggest that the common risk variants that increase susceptibility to schizophrenia can be associated with alterations in the neural circuitry that support the processing of changing reward contingencies. Hum Brain Mapp 37:491-500, 2016. © 2015 Wiley Periodicals, Inc.
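The risk profile score described above is a weighted sum of risk-allele counts. A toy sketch, with invented variant effect sizes and genotypes (the real scores use thousands of variants from GWAS summary statistics):

```python
# Illustrative risk profile score (RPS): the weighted sum of risk-allele
# counts, weighted by per-variant effect sizes (log odds ratios) from
# hypothetical GWAS summary statistics. All numbers here are made up.
import numpy as np

log_odds = np.array([0.05, 0.12, 0.03, 0.08])  # effect size per variant
# Risk-allele counts (0, 1, or 2) for three hypothetical individuals.
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 2, 0, 0],
    [1, 0, 1, 2],
])

rps = genotypes @ log_odds  # one summated, weighted score per individual
print(rps)  # [0.26 0.34 0.24]
```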
Myers, Catherine E; Sheynin, Jony; Balsdon, Tarryn; Luzardo, Andre; Beck, Kevin D; Hogarth, Lee; Haber, Paul; Moustafa, Ahmed A
2016-01-01
Addiction is the continuation of a habit in spite of negative consequences. A vast literature gives evidence that this poor decision-making behavior in individuals addicted to drugs also generalizes to laboratory decision making tasks, suggesting that the impairment in decision-making is not limited to decisions about taking drugs. In the current experiment, opioid-addicted individuals and matched controls with no history of illicit drug use were administered a probabilistic classification task that embeds both reward-based and punishment-based learning trials, and a computational model of decision making was applied to understand the mechanisms describing individuals' performance on the task. Although behavioral results showed that opioid-addicted individuals performed as well as controls on both reward- and punishment-based learning, the modeling results suggested subtle differences in how decisions were made between the two groups. Specifically, the opioid-addicted group showed decreased tendency to repeat prior responses, meaning that they were more likely to "chase reward" when expectancies were violated, whereas controls were more likely to stick with a previously-successful response rule, despite occasional expectancy violations. This tendency to chase short-term reward, potentially at the expense of developing rules that maximize reward over the long term, may be a contributing factor to opioid addiction. Further work is indicated to better understand whether this tendency arises as a result of brain changes in the wake of continued opioid use/abuse, or might be a pre-existing factor that may contribute to risk for addiction.
Rapid probabilistic source characterisation in 3D earth models using learning algorithms
Valentine, A. P.; Kaeufl, P.; Trampert, J.
2015-12-01
Characterising earthquake sources rapidly and robustly is an essential component of any earthquake early warning (EEW) procedure. Ideally, this characterisation should: (i) be probabilistic, enabling appreciation of the full range of mechanisms compatible with available data, and taking observational and theoretical uncertainties into account; and (ii) operate in a physically complete theoretical framework. However, implementing either of these ideals increases computational costs significantly, making it infeasible to satisfy both in the short timescales necessary for EEW applications. The barrier here arises from the fact that conventional probabilistic inversion techniques involve running many thousands of forward simulations after data have been obtained, a procedure known as 'posterior sampling'. Thus, for EEW, all computational costs must be incurred after the event time. Here, we demonstrate a new approach, based instead on 'prior sampling', which circumvents this problem and is feasible for EEW applications. All forward simulations are conducted in advance, and a learning algorithm is used to assimilate information about the relationship between model and data. Once observations from an earthquake become available, this information can be used to infer probability density functions (pdfs) for seismic source parameters within milliseconds. We demonstrate this procedure using data from the 2008 Mw5.4 Chino Hills earthquake. We compute Green's functions for 150 randomly-chosen locations on the Whittier and Chino faults, using SPECFEM3D and a 3D model of the regional velocity structure. We then use these to train neural networks that map from seismic waveforms to pdfs on a point-source, moment-tensor representation of the event mechanism. We show that using local network data from the Chino Hills event, this system provides accurate information on magnitude, epicentral location and source half-duration using data available 6 seconds after the first station
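The prior-sampling idea can be caricatured in a few lines: all expensive simulations run before any event, a learner absorbs the model-data relationship, and online inference is a single fast forward pass. The forward model below is a toy stand-in (an amplitude-magnitude scaling), not a 3D wave simulation, and all values are invented.

```python
# Toy 'prior sampling': simulate offline, learn the inverse map, then
# estimate the source in milliseconds once data arrive.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def forward(magnitude):
    # Toy "simulation": peak amplitude grows exponentially with magnitude.
    return 10.0 ** (magnitude - 4.0)

# Offline (before any earthquake): sample sources from the prior, simulate.
mags = rng.uniform(4.0, 7.0, 3000)
amps = forward(mags) + rng.normal(0, 0.5, 3000)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(np.log10(np.clip(amps, 1e-3, None)).reshape(-1, 1), mags)

# Online (after the event): one fast forward pass, no new simulations.
observed_amp = forward(5.4)
est = net.predict(np.log10([[observed_amp]]))[0]
print("estimated magnitude:", round(est, 2))
```

The point of the design is where the cost lands: training happens once, ahead of time, while the per-event cost is only a network evaluation.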
Cáceres, Pablo; San Martín, René
2017-01-01
Many advances have been made over the last decades in describing, on the one hand, the link between reward-based learning and decision-making, and on the other hand, the link between impulsivity and decision-making. However, the association between reward-based learning and impulsivity remains poorly understood. In this study, we evaluated the association between individual differences in loss-minimizing and gain-maximizing behavior in a learning-based probabilistic decision-making task and individual differences in cognitive impulsivity. We found that low cognitive impulsivity was associated both with a better performance minimizing losses and maximizing gains during the task. These associations remained significant after controlling for mathematical skills and gender as potential confounders. We discuss potential mechanisms through which cognitive impulsivity might interact with reward-based learning and decision-making. PMID:28261137
Chikalov, Igor
2011-04-02
Hydrogen bonds (H-bonds) play a key role in both the formation and stabilization of protein structures. H-bonds involving atoms from residues that are close to each other in the main-chain sequence stabilize secondary structure elements. H-bonds between atoms from distant residues stabilize a protein’s tertiary structure. However, H-bonds greatly vary in stability. They form and break while a protein deforms. For instance, the transition of a protein from a nonfunctional to a functional state may require some H-bonds to break and others to form. The intrinsic strength of an individual H-bond has been studied from an energetic viewpoint, but energy alone may not be a very good predictor. Other local interactions may reinforce (or weaken) an H-bond. This paper describes inductive learning methods to train a protein-independent probabilistic model of H-bond stability from molecular dynamics (MD) simulation trajectories. The training data describes H-bond occurrences at successive times along these trajectories by the values of attributes called predictors. A trained model is constructed in the form of a regression tree in which each non-leaf node is a Boolean test (split) on a predictor. Each occurrence of an H-bond maps to a path in this tree from the root to a leaf node. Its predicted stability is associated with the leaf node. Experimental results demonstrate that such models can predict H-bond stability quite well. In particular, their performance is roughly 20% better than that of models based on H-bond energy alone. In addition, they can accurately identify a large fraction of the least stable H-bonds in a given conformation. The paper discusses several extensions that may yield further improvements.
Directory of Open Access Journals (Sweden)
Kristofer E Bouchard
2015-07-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires a balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity, while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions.
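The central claim, that Hebbian updates with pre-synaptic competition converge to forward transition probabilities, can be checked numerically in a few lines. The update rule below (increment then row-normalize) is a simple stand-in for the plasticity rules analyzed in the paper:

```python
# Numerical check: Hebbian updates with pre-synaptic competition (each unit's
# outgoing weights are normalized) drive weights toward the forward
# transition probabilities of the experienced sequence.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.0, 0.7, 0.3],   # true forward transition matrix P[i, j]
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])

# Generate a long state sequence from the Markov chain.
seq = [0]
for _ in range(50000):
    seq.append(rng.choice(3, p=P[seq[-1]]))

# Hebbian learning: strengthen w[i, j] whenever state j follows state i,
# then normalize each pre-synaptic unit's outgoing weights.
W = np.full((3, 3), 1 / 3)
eta = 0.01
for pre, post in zip(seq[:-1], seq[1:]):
    W[pre, post] += eta
    W[pre] /= W[pre].sum()   # pre-synaptic competition

print(np.round(W, 2))  # close to P
```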
Bakic, Jasmina; De Raedt, Rudi; Jepma, Marieke; Pourtois, Gilles
2015-01-01
According to dominant neuropsychological theories of affect, emotions signal the salience of events and in turn facilitate a wide spectrum of response options or action tendencies. The valence of an emotional experience is pivotal here, as it alters reward and punishment processing, as well as the balance between safety and risk taking, which can be translated into changes in the exploration-exploitation trade-off during reinforcement learning (RL). To test this idea, we compared the behavioral performance of three groups of participants who all completed a variant of a standard probabilistic learning task, but who differed regarding which mood state was actually induced and maintained (happy, sad or neutral). To foster a change from an exploration-based to an exploitation-based mode, we removed feedback information once learning was reliably established. Although changes in mood were successful, learning performance was balanced between the three groups. Critically, when focusing on exploitation-driven learning only, they did not differ either. Moreover, mood valence did not alter the learning rate or exploration per se, when titrated using complementary computational modeling. By systematically comparing these results to our previous study (Bakic et al., 2014), we found that arousal levels did differ between studies, which might account for the limited modulatory effects of (positive) mood on RL in the present case. These results challenge the assumption that mood valence alone is enough to create strong shifts in the way exploitation or exploration is eventually carried out during (probabilistic) learning. In this context, we discuss the possibility that both valence and arousal are necessary components of the emotional mood state to yield changes in the use and exploration of incentive cues during RL.
Constraint-Based Modeling: From Cognitive Theory to Computer Tutoring--and Back Again
Ohlsson, Stellan
2016-01-01
The ideas behind the constraint-based modeling (CBM) approach to the design of intelligent tutoring systems (ITSs) grew out of attempts in the 1980's to clarify how declarative and procedural knowledge interact during skill acquisition. The learning theory that underpins CBM was based on two conceptual innovations. The first innovation was to…
Fuzzy Constraint-Based Agent Negotiation
Institute of Scientific and Technical Information of China (English)
Menq-Wen Lin; K. Robert Lai; Ting-Jung Yu
2005-01-01
Conflicts between two or more parties arise for various reasons and from various perspectives. Thus, resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constraints are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. On the other hand, based on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative most likely to be acceptable to its opponents. The task of problem-solving is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and to move towards the deal more quickly, since the search focuses only on the feasible solution space. An application to multilateral negotiation of travel planning is provided to demonstrate the usefulness and effectiveness of our framework.
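A stripped-down sketch of the relaxation mechanism described above: each agent scores an offer by the satisfaction degree of its fuzzy constraint, the joint degree is the minimum across agents, and the acceptance threshold is relaxed step by step until a mutually acceptable deal appears. The membership functions, prices, and thresholds are invented for the sketch; the paper's framework is far richer (multi-issue, fuzzy similarity).

```python
# Toy fuzzy-constraint negotiation over a single issue (price).
def triangular(x, low, peak, high):
    # Standard triangular membership function on [low, high] peaking at peak.
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def satisfaction(price):
    # Buyer prefers ~100, seller prefers ~140; joint degree = min of both.
    buyer = triangular(price, 60, 100, 140)
    seller = triangular(price, 100, 140, 180)
    return min(buyer, seller)

offers = range(60, 181, 5)
threshold = 1.0
deal = None
while deal is None:
    acceptable = [p for p in offers if satisfaction(p) >= threshold]
    if acceptable:
        deal = max(acceptable, key=satisfaction)
    else:
        threshold -= 0.1   # systematically relax the fuzzy constraint
print("agreed price:", deal, "satisfaction:", round(satisfaction(deal), 2))
```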
Murata, Shingo; Yamashita, Yuichi; Arie, Hiroaki; Ogata, Tetsuya; Sugano, Shigeki; Tani, Jun
2017-04-01
We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named the stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotic learning experiments in which one robot controlled by the S-MTRNN was required to interact with another robot under conditions of uncertainty about the other's behavior. The experimental results show that self-organized sensory reflex behavior (based on probabilistic prediction) emerges when learning proceeds without a precise specification of initial conditions. In contrast, intentional proactive behavior with deterministic predictions emerges when precise initial conditions are available. The results also showed that, in situations where unanticipated behavior of the other robot was perceived, the behavioral context was adequately revised by adaptation of the internal neural dynamics to sensory inputs during sensory reflex behavior generation. On the other hand, during intentional proactive behavior generation, an error regression scheme, by which the internal neural activity was modified in the direction of minimizing prediction errors, was needed to adequately revise the behavioral context. These results indicate that two different ways of treating uncertainty about perceptual events in learning, namely probabilistic modeling and deterministic modeling, contribute to the development of different dynamic neuronal structures governing the two types of behavior generation schemes.
Marciano, Michael A; Adelman, Jonathan D
2017-03-01
The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016
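The core idea above, treating "number of contributors" as a machine-learning classification problem over features of a profile, can be sketched as follows. The features and synthetic data are invented stand-ins; the real PACE system uses much richer profile data and compares six algorithms.

```python
# Hedged sketch: classify the number of contributors (1-4) in a mixture from
# simple summary features. Data generation is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, rows = 1200, []
labels = rng.integers(1, 5, n_samples)       # 1-4 contributors
for k in labels:
    # More contributors -> more distinct alleles tend to be observed.
    max_alleles = rng.binomial(2 * k, 0.8)   # alleles seen at the busiest locus
    mean_alleles = max_alleles - rng.uniform(0, 1)
    rows.append([max_alleles, mean_alleles])
X = np.array(rows)

Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", round(clf.score(Xte, yte), 2))
```

Once trained, classification of a new profile is a single model evaluation, which is why the approach runs in under a second on commodity hardware.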
Bonawitz, Elizabeth; Denison, Stephanie; Griffiths, Thomas L; Gopnik, Alison
2014-10-01
Although probabilistic models of cognitive development have become increasingly prevalent, one challenge is to account for how children might cope with a potentially vast number of possible hypotheses. We propose that children might address this problem by 'sampling' hypotheses from a probability distribution. We discuss empirical results demonstrating signatures of sampling, which offer an explanation for the variability of children's responses. The sampling hypothesis provides an algorithmic account of how children might address computationally intractable problems and suggests a way to make sense of their 'noisy' behavior.
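The behavioral signature of the sampling hypothesis is easy to simulate: a population of learners who each sample one hypothesis from the posterior produces response frequencies that track the posterior, unlike maximizers, who would all return the single best hypothesis. The posterior values below are invented for illustration.

```python
# 'Sampling hypothesis' sketch: response variability mirrors the posterior.
import numpy as np

rng = np.random.default_rng(0)
posterior = {"A": 0.6, "B": 0.3, "C": 0.1}   # hypothetical posterior
names = list(posterior)
probs = list(posterior.values())

# Simulate 10,000 independent "children", each sampling one hypothesis.
responses = rng.choice(names, size=10000, p=probs)
freqs = {h: float(np.mean(responses == h)) for h in names}
print(freqs)  # frequencies track the posterior, not the argmax
```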
Wakker, P. P.; Thaler, R.H.; Tversky, A.
1997-01-01
Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these preferences are intuitively appealing they are difficult to reconcile with expected utility theory. Under highly plausible assumptions about the utility function, willingness to pay for probabilistic i...
Zendehrouh, Sareh
2015-11-01
Recent work in the field of decision-making offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, in contrast, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive-control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost conflict. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
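The habitual, model-free controller discussed here is typically implemented as Q-learning with a softmax choice rule; the reward prediction error in the update is the quantity that theories of the FRN center on. A minimal sketch on a two-armed probabilistic task, with illustrative parameter values:

```python
# Model-free (habitual) controller: Q-learning with softmax choice on a
# two-armed bandit. 'delta' is the reward prediction error (FRN-related).
import numpy as np

rng = np.random.default_rng(0)
p_reward = [0.8, 0.2]        # arm 0 pays off more often
Q = np.zeros(2)
alpha, beta = 0.1, 3.0       # learning rate, inverse temperature

choices = []
for t in range(1000):
    p0 = 1 / (1 + np.exp(-beta * (Q[0] - Q[1])))  # softmax over two arms
    a = 0 if rng.random() < p0 else 1
    r = 1.0 if rng.random() < p_reward[a] else 0.0
    delta = r - Q[a]          # reward prediction error
    Q[a] += alpha * delta
    choices.append(a)

print("P(choose better arm, last 200 trials):",
      float(np.mean(np.array(choices[-200:]) == 0)))
```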
Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md
2015-12-01
Students' understanding of probability concepts has been investigated from various perspectives. Competency, on the other hand, is often measured separately in the form of a structured test. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW, volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle, where students provided their responses through an online quiz. The study shows that the majority of students perceived little understanding of conditional and independent events prior to learning about them, but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is an indication of some increase in learning and knowledge about probability concepts at the end of the two-week block of lessons on probability.
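The probabilistic model underlying Rasch calibration is a logistic function of the gap between person ability and item difficulty. A short sketch with illustrative values:

```python
# Rasch model: P(correct) for a person of ability theta on an item of
# difficulty b is a logistic function of (theta - b). Values illustrative.
import math

def rasch_p(theta, b):
    return 1 / (1 + math.exp(-(theta - b)))

# A student of average ability (theta = 0) on items of increasing difficulty:
for b in (-1.0, 0.0, 1.0):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_p(0.0, b):.2f}")
```

Calibration places persons and items on this common logit scale, which is what makes the joint "Rasch map" of perceived understanding and competency possible.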
Directory of Open Access Journals (Sweden)
Ahmed A. Moustafa
2015-07-01
Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category-learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials, (2) negative feedback for incorrect responses in punishment trials, and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. However, when the Q-learning model was applied to these data, there were differences between
Wakker, P.P.; Thaler, R.H.; Tversky, A.
1997-01-01
Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in premium to compensate for a 1% default risk. These observations cannot be
DEFF Research Database (Denmark)
Jensen, Finn Verner; Lauritzen, Steffen Lilholt
2001-01-01
This article describes the basic ideas and algorithms behind specification and inference in probabilistic networks based on directed acyclic graphs, undirected graphs, and chain graphs.
P.P. Wakker (Peter); R.H. Thaler (Richard); A. Tversky (Amos)
1997-01-01
Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these pref
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-01-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
A Constraint-Based Multi-task Learning Approach and its Application
Institute of Scientific and Technical Information of China (English)
何振峰; 余春艳; 陆昌华
2011-01-01
The partial order constraint between tasks is defined; based on such constraints, a group of originally independent tasks can be associated. The application of the partial order constraint is explored, and a co-evolutionary multi-task learning framework is presented, in which the learning process repeatedly alternates between independent evolution of each task and joint adjustment of the tasks to respect the constraints. The application of the framework to constructing the loss curves of the pork pre-cooling (chilling) process is analyzed: by taking into account the partial order relation between the low-humidity loss curve and the mid-humidity loss curve, the co-evolution process can construct two reasonable curves even when only a few samples are available. Test results on four benchmark functions suggest that the approach is effective under more general conditions.
Suciu, Dan; Koch, Christoph
2011-01-01
Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
National Research Council Canada - National Science Library
Samanez-Larkin, Gregory R; Levens, Sara M; Perry, Lee M; Dougherty, Robert F; Knutson, Brian
2012-01-01
.... This cross-sectional study examined whether age differences in frontostriatal white matter integrity could account for age differences in reward learning in a community life span sample of human adults...
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-03-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front-ranging from issues of generativity to the replication of human experimental findings-by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach. Copyright © 2014 Cognitive Science Society, Inc.
Button, Katherine S; Kounali, Daphne; Stapinski, Lexine; Rapee, Ronald M; Lewis, Glyn; Munafò, Marcus R
2015-01-01
Fear of negative evaluation (FNE) defines social anxiety yet the process of inferring social evaluation, and its potential role in maintaining social anxiety, is poorly understood. We developed an instrumental learning task to model social evaluation learning, predicting that FNE would specifically bias learning about the self but not others. During six test blocks (3 self-referential, 3 other-referential), participants (n = 100) met six personas and selected a word from a positive/negative pair to finish their social evaluation sentences "I think [you are / George is]…". Feedback contingencies corresponded to 3 rules, liked, neutral and disliked, with P[positive word correct] = 0.8, 0.5 and 0.2, respectively. As FNE increased participants selected fewer positive words (β = -0.4, 95% CI -0.7, -0.2, p = 0.001), which was strongest in the self-referential condition (FNE × condition 0.28, 95% CI 0.01, 0.54, p = 0.04), and the neutral and dislike rules (FNE × condition × rule, p = 0.07). At low FNE the proportion of positive words selected for self-neutral and self-disliked greatly exceeded the feedback contingency, indicating poor learning, which improved as FNE increased. FNE is associated with differences in processing social-evaluative information specifically about the self. At low FNE this manifests as insensitivity to learning negative self-referential evaluation. High FNE individuals are equally sensitive to learning positive or negative evaluation, which although objectively more accurate, may have detrimental effects on mental health.
Moustafa, Ahmed A; Gluck, Mark A; Herzallah, Mohammad M; Myers, Catherine E
2015-01-01
Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. However, when the Q-learning model was applied to these data, there were differences between subjects in the reward
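The Q-learning model used to fit individual data can be illustrated with a minimal sketch. This is not the authors' exact model: the reduction to a single two-choice reward block and the parameter values are assumptions for illustration.

```python
import math
import random

def softmax_choice(q, beta, rng):
    """Pick action 0 or 1 with probability proportional to exp(beta * Q)."""
    p0 = 1.0 / (1.0 + math.exp(beta * (q[1] - q[0])))
    return 0 if rng.random() < p0 else 1

def run_block(n_trials=500, alpha=0.2, beta=5.0, p_correct=0.8, seed=1):
    """Simulate one two-choice probabilistic reward block.

    Action 0 is the 'correct' response: it pays +1 with probability
    p_correct; action 1 pays +1 with probability 1 - p_correct.
    Q-values are updated with the standard prediction-error (delta) rule.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]
    n_correct = 0
    for _ in range(n_trials):
        a = softmax_choice(q, beta, rng)
        p = p_correct if a == 0 else 1.0 - p_correct
        r = 1.0 if rng.random() < p else 0.0
        q[a] += alpha * (r - q[a])  # Q(a) <- Q(a) + alpha * (r - Q(a))
        n_correct += (a == 0)
    return q, n_correct / n_trials

q, frac = run_block()
```

Fitting proceeds by searching for the learning rate `alpha` and inverse temperature `beta` that maximize the likelihood of a subject's observed choices; the ambiguous no-feedback outcome is modeled by choosing what reinforcement value `r` it carries.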
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
Perspectives of Probabilistic Inferences: Reinforcement Learning and an Adaptive Network Compared
Rieskamp, Jorg
2006-01-01
The assumption that people possess a strategy repertoire for inferences has been raised repeatedly. The strategy selection learning theory specifies how people select strategies from this repertoire. The theory assumes that individuals select strategies proportional to their subjective expectations of how well the strategies solve particular…
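The selection rule the theory assumes, choosing strategies with probability proportional to their subjective expectations, can be sketched as follows (the expectation values are hypothetical):

```python
import random

def select_strategy(expectations, rng):
    """Return a strategy index drawn with probability proportional to its
    (nonnegative) subjective expectation."""
    total = sum(expectations)
    r = rng.random() * total
    acc = 0.0
    for i, e in enumerate(expectations):
        acc += e
        if r < acc:
            return i
    return len(expectations) - 1  # guard against floating-point edge cases

rng = random.Random(42)
counts = [0, 0, 0]
for _ in range(10_000):
    # Three strategies with subjective expectations in ratio 6:3:1.
    counts[select_strategy([6.0, 3.0, 1.0], rng)] += 1
```

Over many trials the choice frequencies approximate the 6:3:1 ratio of the expectations; learning then consists of updating the expectations from feedback.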
A History of Probabilistic Inductive Logic Programming
Directory of Open Access Journals (Sweden)
Fabrizio eRiguzzi
2014-09-01
The field of Probabilistic Logic Programming (PLP) has seen significant advances in the last 20 years, with many proposals for languages that combine probability with logic programming. Since the start, the problem of learning probabilistic logic programs has been the focus of much attention. Learning these programs represents a whole subfield of Inductive Logic Programming (ILP). In Probabilistic ILP (PILP), two problems are considered: learning the parameters of a program given the structure (the rules), and learning both the structure and the parameters. Usually, structure learning systems use parameter learning as a subroutine. In this article we present an overview of PILP and discuss the main results.
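As an illustrative aside (not from the article), the distribution semantics underlying most PLP languages can be computed by brute-force enumeration over the probabilistic facts; the `burglary`/`earthquake` program below is a standard textbook example, not one taken from the paper:

```python
from itertools import product

# Hypothetical probabilistic facts: name -> probability of being true.
facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    """Deterministic rule: alarm holds if burglary or earthquake holds."""
    return world["burglary"] or world["earthquake"]

def query_prob(rule, facts):
    """Probability of a query under the distribution semantics: sum the
    weight of every total choice (possible world) in which the rule holds."""
    names = list(facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for n in names:
            weight *= facts[n] if world[n] else 1.0 - facts[n]
        if rule(world):
            total += weight
    return total

p_alarm = query_prob(alarm, facts)  # 1 - 0.9 * 0.8 = 0.28
```

Parameter learning in PILP amounts to choosing the fact probabilities that maximize the likelihood of observed queries; structure learning additionally searches over the rules themselves.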
Large-Scale Constraint-Based Pattern Mining
Zhu, Feida
2009-01-01
We studied the problem of constraint-based pattern mining for three different data formats, item-set, sequence and graph, and focused on mining patterns of large sizes. Colossal patterns in each data formats are studied to discover pruning properties that are useful for direct mining of these patterns. For item-set data, we observed robustness of…
Teaching Database Design with Constraint-Based Tutors
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
Probabilistic approaches to recommendations
Barbieri, Nicola; Ritacco, Ettore
2014-01-01
The importance of accurate recommender systems has been widely recognized by academia and industry, and recommendation is rapidly becoming one of the most successful applications of data mining and machine learning. Understanding and predicting the choices and preferences of users is a challenging task: real-world scenarios involve users behaving in complex situations, where prior beliefs, specific tendencies, and reciprocal influences jointly contribute to determining the preferences of users toward huge amounts of information, services, and products. Probabilistic modeling represents a robus
Douven, Igor; Horsten, Leon; Romeijn, Jan-Willem
2010-01-01
Until now, antirealists have offered sketches of a theory of truth, at best. In this paper, we present a probabilist account of antirealist truth in some formal detail, and we assess its ability to deal with the problems that are standardly taken to beset antirealism.
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Burcharth, H. F.
This chapter describes how partial safety factors can be used in the design of vertical wall breakwaters, and an example of a code format is presented. The partial safety factors are calibrated on a probabilistic basis. The code calibration process used to calibrate some of the partial safety factors
Bod, R.; Heine, B.; Narrog, H.
2010-01-01
Probabilistic linguistics takes all linguistic evidence as positive evidence and lets statistics decide. It allows for accurate modelling of gradient phenomena in production and perception, and suggests that rule-like behaviour is no more than a side effect of maximizing probability. This chapter
Constraint-based scheduling applying constraint programming to scheduling problems
Baptiste, Philippe; Nuijten, Wim
2001-01-01
Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...
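The propagation of constraints described above can be illustrated with the simplest scheduling case, precedence constraints; the task data below are made up for illustration:

```python
def propagate_precedences(est, dur, prec):
    """Tighten earliest start times over precedence arcs until a fixed
    point is reached -- the most basic form of the constraint propagation
    used in constraint-based scheduling.

    est:  dict task -> earliest start time
    dur:  dict task -> duration
    prec: list of (a, b) pairs meaning a must finish before b starts
    """
    changed = True
    while changed:
        changed = False
        for a, b in prec:
            if est[a] + dur[a] > est[b]:
                est[b] = est[a] + dur[a]  # b cannot start before a ends
                changed = True
    return est

est = propagate_precedences(
    {"A": 0, "B": 0, "C": 0},
    {"A": 3, "B": 2, "C": 4},
    [("A", "B"), ("B", "C")],
)
```

Resource constraints (e.g., two tasks sharing one machine) add disjunctive reasoning on top of this fixed-point loop, but the tighten-until-stable pattern is the same.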
Constraint-Based Partial Evaluation for Imperative Languages
Institute of Scientific and Technical Information of China (English)
JIN Ying(金英); JIN Chengzhi(金成植)
2002-01-01
Constraint-based partial evaluation (CBPE) is a program optimization technique based on partial evaluation (PE) and constraint solving. Conventional PE only utilizes given parameter values to specialize programs. However, CBPE makes use of not only given values but also the following information: (a) the relationship between input parameters and program variables; (b) the logical structure of the program to be evaluated. In this paper, a formal description of the CBPE method for imperative languages is presented, and some related problems are discussed.
Using Multiple Sources of Information for Constraint-Based Morphological Disambiguation
Tur, G
1999-01-01
This thesis presents a constraint-based morphological disambiguation approach that is applicable to languages with complex morphology--specifically agglutinative languages with productive inflectional and derivational morphological phenomena. For morphologically complex languages like Turkish, automatic morphological disambiguation involves selecting, for each token, the morphological parse(s) with the right set of inflectional and derivational markers. Our system combines corpus independent hand-crafted constraint rules, constraint rules that are learned via unsupervised learning from a training corpus, and additional statistical information obtained from the corpus to be morphologically disambiguated. The hand-crafted rules are linguistically motivated and tuned to improve precision without sacrificing recall. In certain respects, our approach has been motivated by Brill's recent work, but with the observation that his transformational approach is not directly applicable to languages like Turkish. Our approach a...
Probabilistic models of language processing and acquisition.
Chater, Nick; Manning, Christopher D
2006-07-01
Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.
Improved transformer protection using probabilistic neural network ...
African Journals Online (AJOL)
This article presents a novel technique to distinguish between magnetizing inrush ... Protective relaying, Probabilistic neural network, Active power relays, Power ... Forward Neural Network (MFFNN) with back-propagation learning technique.
Directory of Open Access Journals (Sweden)
Mikaël Cozic
2016-11-01
The modeling of awareness and unawareness is a significant topic in the doxastic logic literature, where it is usually tackled in terms of full belief operators. The present paper aims at a treatment in terms of partial belief operators. It draws upon the modal probabilistic logic that was introduced by Aumann (1999) at the semantic level, and then axiomatized by Heifetz and Mongin (2001). The paper embodies in this framework those properties of unawareness that have been highlighted in the seminal paper by Modica and Rustichini (1999). Their paper deals with full belief, but we argue that the properties in question also apply to partial belief. Our main result is a (soundness and completeness) theorem that reunites the two strands—modal and probabilistic—of doxastic logic.
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
A Constraint-Based Understanding of Design Spaces
DEFF Research Database (Denmark)
Biskjaer, Michael Mose; Dalsgaard, Peter; Halskov, Kim
2014-01-01
This paper suggests a framework for understanding and manoeuvring design spaces based on insights from research into creativity constraints. We define the design space as a conceptual space, which in addition to being co-constituted, explored and developed by the designer encompasses the creativity constraints governing the design process. While design spaces can be highly complex, our constraint-based understanding enables us to argue for the benefits of a systematic approach to mapping and manipulating aspects of the design space. We discuss how designers by means of a simple representation, a design space schema, can identify the properties of the prospective product that s/he can form. Through a case study, we show how design space schemas can support designers in various ways, including gaining an overview of the design process, documenting it, reflecting on it, and developing design concepts.
A Constraint-based Case Frame Lexicon Architecture
Oflazer, K; Oflazer, Kemal; Yilmaz, Okan
1995-01-01
In Turkish, (and possibly in many other languages) verbs often convey several meanings (some totally unrelated) when they are used with subjects, objects, oblique objects, adverbial adjuncts, with certain lexical, morphological, and semantic features, and co-occurrence restrictions. In addition to the usual sense variations due to selectional restrictions on verbal arguments, in most cases, the meaning conveyed by a case frame is idiomatic and not compositional, with subtle constraints. In this paper, we present an approach to building a constraint-based case frame lexicon for use in natural language processing in Turkish, whose prototype we have implemented under the TFS system developed at Univ. of Stuttgart. A number of observations that we have made on Turkish have indicated that we need something beyond the traditional transitive and intransitive distinction, and utilize a framework where verb valence is considered as the obligatory co-existence of an arbitrary subset of possible arguments along with the...
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna
2015-01-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.
2015-12-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
Research on constraint-based virtual assembly technologies
Institute of Scientific and Technical Information of China (English)
YANG Rundang; WU Dianliang; FAN Xiumin; YAN Juanqi
2007-01-01
To realize a constraint-based virtual assembly operation, the unified representations of the assembly constraint, the equivalent relation between the constraint and the degree of freedom (DOF), and the movement DOF reduction in a virtual environment are proposed. Several algorithms for constraint treatment are presented. First, the automatic recognition algorithm based on the assembly relation is used to determine whether the position and orientation relation between two geometric elements of a constraint meets the given errors. Second, to satisfy a new constraint, according to the position-solving algorithm, the position and orientation of an active part are modified with minimal adjustment after the part has satisfied the confirmed constraints. Finally, the algorithm of movement navigation based on the generalized coordinate system is put forward, and the part movement is guided. These algorithms have been applied to the integrated virtual assembly environment (IVAE) system. Experiments have indicated that these algorithms support constraint treatment well in the IVAE and realize a closer combination of the virtual and real assembly processes.
Constraint-based soft tissue simulation for virtual surgical training.
Tang, Wen; Wan, Tao Ruan
2014-11-01
Most surgical simulators employ a linear elastic model to simulate soft tissue material properties due to its computational efficiency and simplicity. However, soft tissues often have elaborate nonlinear material characteristics. Most prominently, soft tissues are soft and compliant to small strains, but after initial deformations they are very resistant to further deformations even under large forces. Such a material characteristic is referred to as nonlinear incompliance, and it is computationally expensive and numerically difficult to simulate. This paper presents a constraint-based finite-element algorithm to simulate the nonlinear incompliant tissue materials efficiently for interactive simulation applications such as virtual surgery. Firstly, the proposed algorithm models the material stiffness behavior of soft tissues with a set of 3-D strain limit constraints on deformation strain tensors. By enforcing a large number of geometric constraints to achieve the material stiffness, the algorithm reduces the task of solving stiff equations of motion with a general numerical solver to iteratively resolving a set of constraints with a nonlinear Gauss-Seidel iterative process. Secondly, as a Gauss-Seidel method processes constraints individually, in order to speed up the global convergence of the large constrained system, a multiresolution hierarchy structure is also used to accelerate the computation significantly, making interactive simulations possible at a high level of detail. Finally, this paper also presents a simple-to-build data acquisition system to validate simulation results with ex vivo tissue measurements. An interactive virtual reality-based simulation system is also demonstrated.
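The Gauss-Seidel constraint loop at the core of such strain limiting can be sketched in miniature. This is a 2-D toy with made-up geometry, not the paper's finite-element formulation:

```python
import math

def enforce_strain_limits(points, edges, rest, max_stretch=1.1, iters=50):
    """Gauss-Seidel pass over distance constraints: any edge stretched
    beyond max_stretch times its rest length pulls both endpoints back
    halfway toward the limit, one constraint at a time.

    points: list of mutable [x, y] positions
    edges:  list of (i, j) index pairs
    rest:   dict (i, j) -> rest length
    """
    for _ in range(iters):
        for (i, j) in edges:
            d = math.dist(points[i], points[j])
            limit = max_stretch * rest[(i, j)]
            if d > limit:
                corr = 0.5 * (d - limit) / d
                dx = points[j][0] - points[i][0]
                dy = points[j][1] - points[i][1]
                points[i][0] += corr * dx
                points[i][1] += corr * dy
                points[j][0] -= corr * dx
                points[j][1] -= corr * dy
    return points

pts = enforce_strain_limits(
    [[0.0, 0.0], [2.0, 0.0]],  # configuration stretched to twice rest length
    [(0, 1)],
    {(0, 1): 1.0},             # rest length 1, so the limit is 1.1
)
```

Because each constraint is resolved individually, convergence slows on large meshes, which is what motivates the paper's multiresolution hierarchy.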
Processing scalar implicature: a constraint-based approach.
Degen, Judith; Tanenhaus, Michael K
2015-05-01
Three experiments investigated the processing of the implicature associated with some using a "gumball paradigm." On each trial, participants saw an image of a gumball machine with an upper chamber with 13 gumballs and an empty lower chamber. Gumballs then dropped to the lower chamber and participants evaluated statements, such as "You got some of the gumballs." Experiment 1 established that some is less natural for reference to small sets (1, 2, and 3 of the 13 gumballs) and unpartitioned sets (all 13 gumballs) compared to intermediate sets (6-8). Partitive some of was less natural than simple some when used with the unpartitioned set. In Experiment 2, including exact number descriptions lowered naturalness ratings for some with small sets but not for intermediate size sets and the unpartitioned set. In Experiment 3, the naturalness ratings from Experiment 2 predicted response times. The results are interpreted as evidence for a Constraint-Based account of scalar implicature processing and against both two-stage, Literal-First models and pragmatic Default models.
Milienne-Petiot, Morgane; Kesby, James P; Graves, Mary; van Enkhuizen, Jordy; Semenova, Svetlana; Minassian, Arpi; Markou, Athina; Geyer, Mark A; Young, Jared W
2017-02-01
Bipolar disorder (BD) mania patients exhibit poor cognition and reward-seeking/hypermotivation, negatively impacting a patient's quality of life. Current treatments (e.g., lithium) do not treat such deficits. Treatment development has been limited due to a poor understanding of the neural mechanisms underlying these behaviors. Here, we investigated putative mechanisms underlying cognition and reward-seeking/motivational changes relevant to BD mania patients using two validated mouse models and neurochemical analyses. The effects of reducing dopamine transporter (DAT) functioning via genetic (knockdown vs. wild-type littermates), or pharmacological (GBR12909- vs. vehicle-treated C57BL/6J mice) means were assessed in the probabilistic reversal learning task (PRLT), and progressive ratio breakpoint (PRB) test, during either water or chronic lithium treatment. These tasks quantify reward learning and effortful motivation, respectively. Neurochemistry was performed on brain samples of DAT mutants ± chronic lithium using high performance liquid chromatography. Reduced DAT functioning increased reversals in the PRLT, an effect partially attenuated by chronic lithium. Chronic lithium alone slowed PRLT acquisition. Reduced DAT functioning increased motivation (PRB), an effect attenuated by lithium in GBR12909-treated mice. Neurochemical analyses revealed that DAT knockdown mice exhibited elevated homovanillic acid levels, but that lithium had no effect on these elevated levels. Reducing DAT functioning recreates many aspects of BD mania including hypermotivation and improved reversal learning (switching), as well as elevated homovanillic acid levels. Chronic lithium only exerted main effects, impairing learning and elevating norepinephrine and serotonin levels of mice, not specifically treating the underlying mechanisms identified in these models. Copyright © 2016 Elsevier Ltd. All rights reserved.
Woldegebriel, Michael; Zomer, Paul; Mol, Hans G J; Vivó-Truyols, Gabriel
2016-08-02
In this work, we introduce an automated, efficient, and elegant model to combine all pieces of evidence (e.g., expected retention times, peak shapes, isotope distributions, fragment-to-parent ratio) obtained from liquid chromatography-tandem mass spectrometry (LC-MS/MS) data for screening purposes. Combining all these pieces of evidence requires a careful assessment of the uncertainties in the analytical system as well as all possible outcomes. To date, the majority of existing algorithms are highly dependent on user input parameters, and the screening process is tackled as a deterministic problem. In this work we present a Bayesian framework to deal with the combination of all these pieces of evidence. Contrary to conventional algorithms, the information is treated in a probabilistic way, and a final probability assessment of the presence/absence of a compound feature is computed. Additionally, all the necessary parameters for the method, except the chromatographic band broadening, are learned from the data in the training and learning phase of the algorithm, avoiding the introduction of a large number of user-defined parameters. The proposed method was validated with a large data set and has shown improved sensitivity and specificity in comparison to a threshold-based commercial software package.
Schweizer, B
2005-01-01
Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.
Probabilistic Concurrent Kleene Algebra
Directory of Open Access Journals (Sweden)
Annabelle McIver
2013-06-01
We provide an extension of concurrent Kleene algebras to account for probabilistic properties. The algebra yields a unified framework containing nondeterminism, concurrency and probability and is sound with respect to the set of probabilistic automata modulo probabilistic simulation. We use the resulting algebra to generalise the algebraic formulation of a variant of Jones' rely/guarantee calculus.
Metabolic constraint-based refinement of transcriptional regulatory networks.
Chandrasekaran, Sriram; Price, Nathan D
2013-01-01
There is a strong need for computational frameworks that integrate different biological processes and data-types to unravel cellular regulation. Current efforts to reconstruct transcriptional regulatory networks (TRNs) focus primarily on proximal data such as gene co-expression and transcription factor (TF) binding. While such approaches enable rapid reconstruction of TRNs, the overwhelming combinatorics of possible networks limits identification of mechanistic regulatory interactions. Utilizing growth phenotypes and systems-level constraints to inform regulatory network reconstruction is an unmet challenge. We present our approach Gene Expression and Metabolism Integrated for Network Inference (GEMINI) that links a compendium of candidate regulatory interactions with the metabolic network to predict their systems-level effect on growth phenotypes. We then compare predictions with experimental phenotype data to select phenotype-consistent regulatory interactions. GEMINI makes use of the observation that only a small fraction of regulatory network states are compatible with a viable metabolic network, and outputs a regulatory network that is simultaneously consistent with the input genome-scale metabolic network model, gene expression data, and TF knockout phenotypes. GEMINI preferentially recalls gold-standard interactions (p-value = 10^-172), significantly better than using gene expression alone. We applied GEMINI to create an integrated metabolic-regulatory network model for Saccharomyces cerevisiae involving 25,000 regulatory interactions controlling 1597 metabolic reactions. The model quantitatively predicts TF knockout phenotypes in new conditions (p-value = 10^-14) and revealed potential condition-specific regulatory mechanisms. Our results suggest that a metabolic constraint-based approach can be successfully used to help reconstruct TRNs from high-throughput data, and highlights the potential of using a biochemically-detailed mechanistic framework to
Javed, Kamran; Gouriveau, Rafael; Zerhouni, Noureddine; Hissel, Daniel
2016-08-01
The Proton Exchange Membrane Fuel Cell (PEMFC) is considered the most versatile among available fuel cell technologies, qualifying it for diverse applications. However, large-scale industrial deployment of PEMFCs is limited by their short life span and high exploitation costs. Ensuring fuel cell service for a long duration is therefore of vital importance, which has led to Prognostics and Health Management of fuel cells. More precisely, prognostics of PEMFCs is a major area of focus nowadays, aiming at identifying degradation of a PEMFC stack at early stages and estimating its Remaining Useful Life (RUL) for life cycle management. This paper presents a data-driven approach for prognostics of a PEMFC stack using an ensemble of constraint-based Summation Wavelet-Extreme Learning Machine (SW-ELM) models. This development aims at improving the robustness and applicability of PEMFC prognostics for an online application with limited learning data. The proposed approach is applied to real data from two different PEMFC stacks and compared with ensembles of well-known connectionist algorithms. The comparison of results on long-term prognostics of both PEMFC stacks validates our proposition.
Mastering probabilistic graphical models using Python
Ankan, Ankur
2015-01-01
If you are a researcher or a machine learning enthusiast, or are working in the data science field and have a basic idea of Bayesian learning or probabilistic graphical models, this book will help you to understand the details of graphical models and use them in your data science problems.
Genome-scale constraint-based modeling of Geobacter metallireducens
Directory of Open Access Journals (Sweden)
Famili Iman
2009-01-01
Background Geobacter metallireducens was the first organism that can be grown in pure culture to completely oxidize organic compounds with Fe(III) oxide serving as electron acceptor. Geobacter species, including G. sulfurreducens and G. metallireducens, are used for bioremediation and electricity generation from waste organic matter and renewable biomass. The constraint-based modeling approach enables the development of genome-scale in silico models that can predict the behavior of complex biological systems and their responses to the environment. Such a modeling approach was applied to provide physiological and ecological insights into the metabolism of G. metallireducens. Results The genome-scale metabolic model of G. metallireducens was constructed to include 747 genes and 697 reactions. Compared to the G. sulfurreducens model, the G. metallireducens metabolic model contains 118 unique reactions that reflect many of G. metallireducens' specific metabolic capabilities. Detailed examination of the G. metallireducens model suggests that its central metabolism contains several energy-inefficient reactions that are not present in the G. sulfurreducens model. The experimental biomass yield of G. metallireducens growing on pyruvate was lower than the predicted optimal biomass yield. Microarray data of G. metallireducens growing with benzoate and acetate indicated that genes encoding these energy-inefficient reactions were up-regulated by benzoate. These results suggested that the energy-inefficient reactions were likely turned off during G. metallireducens growth with acetate for optimal biomass yield, but were up-regulated during growth with complex electron donors such as benzoate for rapid energy generation. Furthermore, several computational modeling approaches were applied to accelerate G. metallireducens research. For example, growth of G. metallireducens with different electron donors and electron acceptors was studied using the genome
Kasanova, Zuzana; Waltz, James A; Strauss, Gregory P; Frank, Michael J; Gold, James M
2011-04-01
Previous research indicates that behavioral performance in simple probability learning tasks can be organized into response strategy classifications that are thought to predict important personal characteristics and individual differences. Typically, only a relatively small proportion of subjects can be identified as optimizers, who effectively exploit the environment by choosing the more rewarding stimulus nearly all of the time. In contrast, the vast majority of subjects behave sub-optimally, adopting a matching or super-matching strategy and apportioning their responses in a way that matches or slightly exceeds the probabilities of reinforcement. In the present study, we administered a two-choice probability learning paradigm to 51 individuals with schizophrenia (SZ) and 29 healthy controls (NC) to examine whether there are differences in the proportion of subjects falling into these response strategy classifications, and to determine whether task performance is differentially associated with symptom severity and neuropsychological functioning. Although the sample of SZ patients did not differ from NC in overall rate of learning or end performance, significant clinical differences emerged when patients were divided into optimizing, super-matching and matching subgroups based upon task performance. Patients classified as optimizers, who adopted the most advantageous learning strategy, exhibited higher levels of positive and negative symptoms than their matching and super-matching counterparts. Importantly, when both positive and negative symptoms were considered together, only negative symptom severity was a significant predictor of whether a subject would behave optimally, with each one standard deviation increase in negative symptoms increasing the odds of a patient being an optimizer by as much as 80%. These data provide a rare example of a greater clinical impairment being associated with better behavioral performance.
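The strategy classifications above have a simple arithmetic core; the following minimal sketch (reward probabilities are illustrative, not taken from the study) shows why matching earns less reinforcement than optimizing:

```python
def expected_accuracy(p_reward, p_choose_rich):
    """Expected fraction of reinforced responses when the richer option
    pays off with probability p_reward and the subject chooses it with
    probability p_choose_rich (the remainder goes to the other option)."""
    return p_choose_rich * p_reward + (1 - p_choose_rich) * (1 - p_reward)

p = 0.7  # assumed reinforcement rate of the richer stimulus

# An optimizer always picks the richer option; a matcher picks it in
# proportion to its reinforcement rate; a super-matcher falls in between.
for name, choice_rate in [("optimizer", 1.0), ("super-matcher", 0.8), ("matcher", p)]:
    print(f"{name}: expected accuracy {expected_accuracy(p, choice_rate):.2f}")
```

Under these assumed numbers the optimizer is reinforced on 70% of trials, the matcher on only 58%, which is why matching counts as a sub-optimal strategy.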
Bakic, Jasmina; De Raedt, Rudi; Jepma, Marieke; Pourtois, Gilles
2015-01-01
According to dominant neuropsychological theories of affect, emotions signal salience of events and in turn facilitate a wide spectrum of response options or action tendencies. Valence of an emotional experience is pivotal here, as it alters reward and punishment processing, as well as the balance between safety and risk taking, which can be translated into changes in the exploration-exploitation trade-off during reinforcement learning (RL). To test this idea, we compared the behavioral perfo...
Probabilistic Resilience in Hidden Markov Models
Panerati, Jacopo; Beltrame, Giovanni; Schwind, Nicolas; Zeltner, Stefan; Inoue, Katsumi
2016-05-01
Originally defined in the context of ecological systems and environmental sciences, resilience has grown to be a property of major interest for the design and analysis of many other complex systems: resilient networks and robotics systems offer the desirable capability of absorbing disruption and transforming in response to external shocks, while still providing the services they were designed for. Starting from an existing formalization of resilience for constraint-based systems, we develop a probabilistic framework based on hidden Markov models. In doing so, we introduce two new important features: stochastic evolution and partial observability. Using our framework, we formalize a methodology for the evaluation of probabilities associated with generic properties, describe an efficient algorithm for the computation of its essential inference step, and show that its complexity is comparable to other state-of-the-art inference algorithms.
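In standard hidden Markov models, the essential inference step referred to above is typically a forward-style likelihood computation. A toy sketch under that assumption (the two-state model and all probabilities are invented for illustration, not taken from the paper):

```python
def forward(init, trans, emit, observations):
    """Forward algorithm: likelihood of an observation sequence under a
    discrete hidden Markov model.
    init[i]     : P(state_0 = i)
    trans[i][j] : P(state_{t+1} = j | state_t = i)
    emit[i][o]  : P(obs = o | state = i)
    """
    alpha = [init[i] * emit[i][observations[0]] for i in range(len(init))]
    for obs in observations[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][obs]
                 for j in range(len(init))]
    return sum(alpha)

# Toy resilience-flavored model: state 0 = "nominal", state 1 = "disrupted"
init = [0.9, 0.1]
trans = [[0.95, 0.05], [0.30, 0.70]]  # disrupted systems tend to recover
emit = [[0.8, 0.2], [0.1, 0.9]]       # observation 1 = "service degraded"
print(forward(init, trans, emit, [0, 0, 1]))  # likelihood of the observed trace
```

The computation is linear in the trace length and quadratic in the number of hidden states, which is the complexity class the paper compares against.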
Probabilistic Algorithms in Robotics
Thrun, Sebastian
2000-01-01
This article describes a methodology for programming robots known as probabilistic robotics. The probabilistic paradigm pays tribute to the inherent uncertainty in robot perception, relying on explicit representations of uncertainty when determining what to do. This article surveys some of the progress in the field, using in-depth examples to illustrate some of the nuts and bolts of the basic approach. My central conjecture is that the probabilistic approach to robotics scales better to compl...
Probabilistic liver atlas construction
Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E.
2017-01-01
Background Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. Results A new method for probabilistic atlas con...
Probabilistic Logical Characterization
DEFF Research Database (Denmark)
Hermanns, Holger; Parma, Augusto; Segala, Roberto;
2011-01-01
Probabilistic automata exhibit both probabilistic and non-deterministic choice. They are therefore a powerful semantic foundation for modeling concurrent systems with random phenomena arising in many applications ranging from artificial intelligence, security, systems biology to performance model...... modeling. Several variations of bisimulation and simulation relations have proved to be useful as means to abstract and compare different automata. This paper develops a taxonomy of logical characterizations of these relations on image-finite and image-infinite probabilistic automata....
Holl, Anna K.; Wilkinson, Leonora; Tabrizi, Sarah J.; Painold, Annamaria; Jahanshahi, Marjan
2012-01-01
In general, declarative learning is associated with activation of the medial temporal lobes (MTL), while the basal ganglia (BG) are considered the substrate for procedural learning. More recently it has been demonstrated that the distinction between these systems may not be as absolute as previously thought and that not only the explicit or implicit…
A Constraint-Based Geospatial Data Integration System for Wildfire Management Project
National Aeronautics and Space Administration — We propose to develop a constraint-based system for automatically integrating online, heterogeneous data sources with geospatial data produced by NASA in order to...
A Constraint-Based Geospatial Data Integration System for Wildfire Management Project
National Aeronautics and Space Administration — We propose to implement a constraint-based data integration system for wildfire intelligence, for use during both the pre-planning and event response phases of...
Duplicate Detection in Probabilistic Data
Panse, Fabian; Keulen, van Maurice; Keijzer, de Ander; Ritter, Norbert
2009-01-01
Collected data often contains uncertainties. Probabilistic databases have been proposed to manage uncertain data. To combine data from multiple autonomous probabilistic databases, an integration of probabilistic data has to be performed. Until now, however, data integration approaches have focused o
Probabilistic Dynamic Epistemic Logic
Kooi, B.P.
2003-01-01
In this paper I combine the dynamic epistemic logic of Gerbrandy (1999) with the probabilistic logic of Fagin and Halpern (1999). The result is a new probabilistic dynamic epistemic logic, a logic for reasoning about probability, information, and information change that takes higher order informatio
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Mardare, Radu Iulian; Xue, Bingtian
2016-01-01
We introduce a version of the probabilistic µ-calculus (PMC) built on top of a probabilistic modal logic that allows encoding n-ary inequational conditions on transition probabilities. PMC extends previously studied calculi and we prove that, despite its expressiveness, it enjoys a series of good...
Jiang, P C; Chen, H
2006-01-01
VLSI implementation of probabilistic models is attractive for many biomedical applications. However, hardware non-idealities can prevent probabilistic VLSI models from modelling data optimally through on-chip learning. This paper investigates the maximum computational errors that a probabilistic VLSI model can tolerate when modelling real biomedical data. VLSI circuits capable of achieving the required precision are also proposed.
Interval probabilistic neural network.
Kowalski, Piotr A; Kulczycki, Piotr
2017-01-01
Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems reduce the need for human intervention in obtaining analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural-network approach for classifying interval information. The presented methodology is a generalization of the probabilistic neural network to interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal expected losses from classification errors. In this article, the topological structure of the network and the learning process are described in detail. Of note, the correctness of the proposed procedure has been verified by numerical tests, including examples with both synthetic data and benchmark instances. The results of numerical verification, carried out for data sets of different shapes, together with a comparative analysis against methods of similar conditioning, have validated both the concept presented here and its positive features.
Farahmand, Touraj; Hamilton, Stuart
2016-04-01
uncertainty used for real-time derivation. Model uncertainty is often ignored, but it is in fact an important source of uncertainty, caused by building imperfect regression models due to lack of measurements and/or overfitting/under-fitting driven by the complexity of the model (number of model parameters). In this presentation we demonstrate a solution to these problems, using novel machine learning techniques to combine index velocity and field measurement observations, with their measurement uncertainty, to build a non-parametric/non-linear self-adaptive Bayesian.
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
Formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, among them probabilistic thinking. Probabilistic thinking is needed by students to make decisions, and elementary school students are required to develop it as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected by a test of mathematical ability for high mathematical ability. The subjects were given probability tasks consisting of sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion; credibility of the data was established through time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level, indicating that the boy's level of probabilistic thinking was higher than the girl's. The results could help curriculum developers formulate probability learning goals for elementary school students, and teachers could teach probability with regard to gender differences.
Ignorability in Statistical and Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed...
Probabilistic Structural Analysis Program
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
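As a rough illustration of the quantity NESSUS computes, not of its actual algorithms (the advanced mean value and adaptive importance sampling methods named above are far more efficient), here is a brute-force Monte Carlo probability-of-failure estimate for a hypothetical stress-strength problem; all distributions and numbers are invented:

```python
import random

def probability_of_failure(limit_state, sample_inputs, n=100_000, seed=1):
    """Plain Monte Carlo estimate of P(g(X) < 0), where g is the
    limit-state function (failure occurs when g is negative)."""
    rng = random.Random(seed)
    failures = sum(limit_state(sample_inputs(rng)) < 0 for _ in range(n))
    return failures / n

# Illustrative problem: strength R ~ N(10, 1), load S ~ N(7, 1); g = R - S.
pf = probability_of_failure(
    lambda x: x[0] - x[1],
    lambda rng: (rng.gauss(10, 1), rng.gauss(7, 1)),
)
print(f"estimated probability of failure: {pf:.4f}")
```

For this toy case the true value is about 0.017; methods like adaptive importance sampling exist precisely because plain sampling becomes impractical when the failure probability is very small.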
Directory of Open Access Journals (Sweden)
Jiao-Hong Yi
2016-01-01
The probabilistic neural network has successfully solved all kinds of engineering problems in various fields since it was proposed. In a probabilistic neural network, the Spread parameter has a great influence on performance, and the network will generate poor predictions if Spread is improperly selected; selecting the optimal value manually is difficult. In this article, a variant of the probabilistic neural network with a self-adaptive strategy, called the self-adaptive probabilistic neural network, is proposed, in which Spread is self-adaptively adjusted and selected, and the best selected Spread then guides training and testing. In addition, two simplified strategies are incorporated with the aim of further improving performance, yielding two simplified versions (simplified self-adaptive probabilistic neural networks 1 and 2). The variants of self-adaptive probabilistic neural networks are further applied to solve the transformer fault diagnosis problem. Comparisons with the basic probabilistic neural network, and with traditional back propagation, extreme learning machine, general regression neural network, and self-adaptive extreme learning machine, experimentally show that self-adaptive probabilistic neural networks give more accurate predictions and better generalization performance on the transformer fault diagnosis problem.
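One plausible reading of "self-adaptive" Spread selection is a search over candidate values scored by cross-validation; the sketch below assumes that interpretation (the data, candidate Spreads, and leave-one-out scoring are all invented for illustration, not taken from the article):

```python
import math

def pnn_classify(train, labels, x, spread):
    """Probabilistic neural network: each class score is the summed
    Gaussian kernel (Parzen window) between x and that class's training
    points; the highest-scoring class wins."""
    scores = {}
    for point, label in zip(train, labels):
        d2 = sum((a - b) ** 2 for a, b in zip(point, x))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * spread ** 2))
    return max(scores, key=scores.get)

def select_spread(train, labels, candidates):
    """Pick the candidate Spread with the best leave-one-out accuracy."""
    def loo_accuracy(s):
        hits = 0
        for i in range(len(train)):
            rest = train[:i] + train[i + 1:]
            rest_labels = labels[:i] + labels[i + 1:]
            hits += pnn_classify(rest, rest_labels, train[i], s) == labels[i]
        return hits / len(train)
    return max(candidates, key=loo_accuracy)

# Toy two-class data (illustrative only)
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
y = [0, 0, 1, 1]
best = select_spread(X, y, [0.05, 0.1, 0.5, 1.0])
print("selected Spread:", best, "-> class:", pnn_classify(X, y, (0.1, 0.0), best))
```

The point of the sketch is only that Spread is data-driven rather than hand-picked, which is the problem the self-adaptive variant addresses.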
Ineichen, Christian; Sigrist, Hannes; Spinelli, Simona; Lesch, Klaus-Peter; Sautter, Eva; Seifritz, Erich; Pryce, Christopher R
2012-11-01
Valid animal models of psychopathology need to include behavioural readouts informed by human findings. In the probabilistic reversal learning (PRL) task, human subjects are confronted with serial reversal of the contingency between two operant stimuli and reward/punishment and, superimposed on this, a low probability (0.2) of punished correct responses/rewarded incorrect responses. In depression, reward-stay and reversals completed are unaffected but response-shift following punished correct response trials, referred to as negative feedback sensitivity (NFS), is increased. The aims of this study were to: establish an operant spatial PRL test appropriate for mice; obtain evidence for the processes mediating reward-stay and punishment-shift responding; and assess effects thereon of genetically- and pharmacologically-altered serotonin (5-HT) function. The study was conducted with wildtype (WT) and heterozygous mutant (HET) mice from a 5-HT transporter (5-HTT) null mutant strain. Mice were mildly food deprived; the reward was a sugar pellet and the punishment a 5-s time-out. Mice exhibited high motivation and adaptive reversal performance. Increased probability of punished correct response (PCR) trials per session (p(PCR) = 0.1, 0.2 or 0.3) led to a monotonic decrease in reward-stay and reversals completed, suggesting accurate reward prediction. NFS differed from chance level at p(PCR) = 0.1, suggesting accurate punishment prediction, whereas NFS was at chance level at p(PCR) = 0.2-0.3. At p(PCR) = 0.1, HET mice exhibited lower NFS than WT mice. The 5-HTT blocker escitalopram was studied acutely at p(PCR) = 0.2: a low dose (0.5-1.5 mg/kg) resulted in decreased NFS, increased reward-stay and increased reversals completed, and similarly in WT and HET mice. This study demonstrates that testing PRL in mice can provide evidence on the regulation of reward and punishment processing that is, albeit within certain limits, of relevance to human emotional-cognitive processing, its dysfunction and
Probabilistic transmission system planning
Li, Wenyuan
2011-01-01
"The book is composed of 12 chapters and three appendices, and can be divided into four parts. The first part includes Chapters 2 to 7, which discuss the concepts, models, methods and data in probabilistic transmission planning. The second part, Chapters 8 to 11, addresses four essential issues in probabilistic transmission planning applications using actual utility systems as examples. Chapter 12, as the third part, focuses on a special issue, i.e. how to deal with uncertainty of data in probabilistic transmission planning. The fourth part consists of three appendices, which provide the basic knowledge in mathematics for probabilistic planning. Please refer to the attached table of contents which is given in a very detailed manner"--
Conditioning Probabilistic Databases
Koch, Christoph
2008-01-01
Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
Tagging French comparing a statistical and a constraint-based method
Chanod, J P; Chanod, Jean-Pierre; Tapanainen, Pasi
1995-01-01
In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.
Experience Report: Constraint-Based Modelling and Simulation of Railway Emergency Response Plans
DEFF Research Database (Denmark)
Debois, Søren; Hildebrandt, Thomas; Sandberg, Lene
2016-01-01
We report on experiences from a case study applying a constraint-based process-modelling and -simulation tool, dcrgraphs.net, to the modelling and rehearsal of railway emergency response plans with domain experts. The case study confirmed the approach as a viable means for domain experts to analyse...... and security processes in the Danish public transport sector and their dependency on ICT....
ASPIRE: An Authoring System and Deployment Environment for Constraint-Based Tutors
Mitrovic, Antonija; Martin, Brent; Suraweera, Pramuditha; Zakharov, Konstantin; Milik, Nancy; Holland, Jay; McGuigan, Nicholas
2009-01-01
Over the last decade, the Intelligent Computer Tutoring Group (ICTG) has implemented many successful constraint-based Intelligent Tutoring Systems (ITSs) in a variety of instructional domains. Our tutors have proven their effectiveness not only in controlled lab studies but also in real classrooms, and some of them have been commercialized.…
An Improved Constraint-based system for the verification of security protocols
Corin, R.J.; Etalle, Sandro; Hermenegildo, Manuel V.; Puebla, German
2002-01-01
We propose a constraint-based system for the verification of security protocols that improves upon the one developed by Millen and Shmatikov. Our system features (1) a significantly more efficient implementation, (2) a monotonic behavior, which also allows detection of flaws associated with partial runs
Constraint-based solver for the Military unit path finding problem
CSIR Research Space (South Africa)
Leenen, L
2010-04-01
…-based approach because it requires flexibility in modelling. The authors formulate the MUPFP as a constraint satisfaction problem and a constraint-based extension of the search algorithm. The concept demonstrator uses a provided map, for example taken from Google...
Probabilistic Belief Logic and Its Probabilistic Aumann Semantics
Institute of Scientific and Technical Information of China (English)
CAO ZiNing(曹子宁); SHI ChunYi(石纯一)
2003-01-01
In this paper, we present a logic system for probabilistic belief named PBL, which expands the language of belief logic by introducing probabilistic belief. Furthermore, we give the probabilistic Aumann semantics of PBL. We also list some valid properties of belief and probabilistic belief, which form the deduction system of PBL. Finally, we prove the soundness and completeness of these properties with respect to the probabilistic Aumann semantics.
Formalizing Probabilistic Safety Claims
Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.
2011-01-01
A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.
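The framework's probabilistic composition of hazardous conditions can be illustrated numerically by the law of total probability over hazard combinations. A toy sketch (hazard names, rates, and conditional violation probabilities are all invented, not taken from the paper):

```python
from itertools import product

# Hedged sketch: total probability that a system violates its safety
# property, composed over independent hazardous conditions via the law
# of total probability. All numbers are illustrative.

hazards = {"sensor_fault": 0.01, "comm_loss": 0.02}   # P(hazard present)
# P(property violated | set of active hazards)
p_violation = {
    frozenset(): 1e-6,
    frozenset({"sensor_fault"}): 1e-3,
    frozenset({"comm_loss"}): 5e-4,
    frozenset({"sensor_fault", "comm_loss"}): 1e-2,
}

def p_unsafe():
    total = 0.0
    names = sorted(hazards)
    for active in product([False, True], repeat=len(names)):
        combo = frozenset(n for n, a in zip(names, active) if a)
        p_combo = 1.0
        for n, a in zip(names, active):
            p_combo *= hazards[n] if a else 1 - hazards[n]
        total += p_combo * p_violation[combo]
    return total

print(f"P(unsafe) = {p_unsafe():.4e}")
```

The point of formalizing this in a theorem prover, as the paper proposes, is that such compositions and their independence assumptions can be machine-checked rather than spreadsheet-checked.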
Probabilistic approach to mechanisms
Sandler, BZ
1984-01-01
This book discusses the application of probabilistics to the investigation of mechanical systems. The book shows, for example, how random function theory can be applied directly to the investigation of random processes in the deflection of cam profiles, pitch or gear teeth, pressure in pipes, etc. The author also deals with some other technical applications of probabilistic theory, including, amongst others, those relating to pneumatic and hydraulic mechanisms and roller bearings. Many of the aspects are illustrated by examples of applications of the techniques under discussion.
Probabilistic conditional independence structures
Studeny, Milan
2005-01-01
Probabilistic Conditional Independence Structures provides the mathematical description of probabilistic conditional independence structures; the author uses non-graphical methods of their description and takes an algebraic approach. The monograph presents the methods of structural imsets and supermodular functions, and deals with independence implication and equivalence of structural imsets. Motivation, mathematical foundations and areas of application are included, and a rough overview of graphical methods is also given. In particular, the author has been careful to use suitable terminology, and presents the work so that it will be understood by both statisticians and researchers in artificial intelligence. The necessary elementary mathematical notions are recalled in an appendix.
Knowledge-based diagnostic system with probabilistic approach
Directory of Open Access Journals (Sweden)
Adina COCU
2007-12-01
Full Text Available This paper presents a knowledge learning diagnostic approach implemented in an educational system. Probabilistic inference is used here to diagnose the knowledge understanding level and to reason about the probable cause of a learner's misconceptions. When a learner takes an assessment, the system uses probabilistic reasoning to advise the learner about the most probable error cause and to provide the part of the theory that addresses the errors related to the learner's misconceptions.
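The diagnostic reasoning described above amounts to ranking candidate error causes by their posterior probability given the observed mistake. A hedged sketch with invented priors and likelihoods (the actual system's causes and numbers are not given in the abstract):

```python
# Hedged sketch (priors and likelihoods invented): rank probable causes
# of a learner's error with Bayes' rule,
#   P(cause | error) ∝ P(error | cause) · P(cause).

priors = {"misread_question": 0.2, "concept_gap": 0.5, "slip": 0.3}
likelihood = {"misread_question": 0.4, "concept_gap": 0.7, "slip": 0.2}

def posterior(priors, likelihood):
    joint = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(joint.values())          # normalizing constant
    return {c: p / z for c, p in joint.items()}

post = posterior(priors, likelihood)
best = max(post, key=post.get)
print(best, round(post[best], 3))    # most probable error cause
```

The system would then surface remediation material keyed to the top-ranked cause, exactly the "most appropriate error cause" advice the abstract mentions.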
How robust are probabilistic models of higher-level cognition?
Marcus, Gary F; Davis, Ernest
2013-12-01
An increasingly popular theory holds that the mind should be viewed as a near-optimal or rational engine of probabilistic inference, in domains as diverse as word learning, pragmatics, naive physics, and predictions of the future. We argue that this view, often identified with Bayesian models of inference, is markedly less promising than widely believed, and is undermined by post hoc practices that merit wholesale reevaluation. We also show that the common equation between probabilistic and rational or optimal is not justified.
A Probabilistic Approach to Knowledge Translation
Jiang, Shangpu; Lowd, Daniel; Dou, Dejing
2015-01-01
In this paper, we focus on a novel knowledge reuse scenario where the knowledge in the source schema needs to be translated to a semantically heterogeneous target schema. We refer to this task as "knowledge translation" (KT). Unlike data translation and transfer learning, KT does not require any data from the source or target schema. We adopt a probabilistic approach to KT by representing the knowledge in the source schema, the mapping between the source and target schemas, and the resulting ...
Probabilistic Causation without Probability.
Holland, Paul W.
The failure of Hume's "constant conjunction" to describe apparently causal relations in science and everyday life has led to various "probabilistic" theories of causation of which the study by P. C. Suppes (1970) is an important example. A formal model that was developed for the analysis of comparative agricultural experiments…
Probabilistic simple sticker systems
Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod
2017-04-01
A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper "DNA computing, sticker systems and universality" (Acta Informatica 35:401-420, 1998). A sticker system uses the Watson-Crick complementarity of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings for the language are selected according to some probabilistic requirements. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.
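The probability bookkeeping described in the abstract is simple to sketch: each axiom carries a probability, and a derived string's probability is the product over all axiom occurrences used in its computation. A toy illustration that abstracts away the actual sticking operations (the axioms and their probabilities are invented):

```python
from math import prod  # Python ≥ 3.8

# Toy sketch of the probability computation in a probabilistic sticker
# system: the probability of a generated string is the product of the
# probabilities of all occurrences of the initial strings (axioms) used
# in its derivation. Sticking mechanics are abstracted away.

axiom_prob = {"AT": 0.5, "CG": 0.3, "TA": 0.2}

def string_probability(derivation):
    """derivation: the sequence of axiom occurrences used in computing
    the string."""
    return prod(axiom_prob[a] for a in derivation)

# A string assembled from occurrences AT, CG, AT: 0.5 · 0.3 · 0.5
print(string_probability(["AT", "CG", "AT"]))
```

A threshold or normalization over such products is then what "selected according to some probabilistic requirements" refers to.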
Probabilistic parsing strategies
Nederhof, Mark-Jan; Satta, Giorgio
We present new results on the relation between purely symbolic context-free parsing strategies and their probabilistic counterparts. Such parsing strategies are seen as constructions of push-down devices from grammars. We show that preservation of probability distribution is possible under two
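The probabilistic counterpart of a symbolic parsing strategy assigns each derivation the product of the probabilities of the rules it applies; "preservation of probability distribution" means the push-down device built from the grammar induces the same distribution over derivations. A standard PCFG sketch (the toy grammar is invented, not from the paper):

```python
from math import prod

# Toy probabilistic context-free grammar (rules and probabilities are
# illustrative). A derivation's probability is the product of the
# probabilities of the rules it applies; rule probabilities for each
# left-hand side sum to 1.

rules = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "N")): 0.6,
    ("NP", ("N",)): 0.4,
    ("VP", ("V", "NP")): 1.0,
}

def derivation_probability(derivation):
    return prod(rules[r] for r in derivation)

# Leftmost derivation of "Det N V N": 1.0 · 0.6 · 1.0 · 0.4 = 0.24
d = [("S", ("NP", "VP")), ("NP", ("Det", "N")),
     ("VP", ("V", "NP")), ("NP", ("N",))]
print(derivation_probability(d))
```

A parsing strategy preserves this distribution when the pushdown transitions it produces can be weighted so every derivation keeps exactly this product, which is the property the paper characterizes.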
Bergstra, J.A.; Middelburg, C.A.
2015-01-01
We add probabilistic features to basic thread algebra and its extensions with thread-service interaction and strategic interleaving. Here, threads represent the behaviours produced by instruction sequences under execution and services represent the behaviours exhibited by the components of execution
DEFF Research Database (Denmark)
Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte
2008-01-01
This paper reviews the development of the probabilistic load flow (PLF) techniques. Applications of the PLF techniques in different areas of power system steady-state analysis are also discussed. The purpose of the review is to identify different available PLF techniques and their corresponding...
Probabilistic dynamic belief revision
Baltag, A.; Smets, S.
2008-01-01
We investigate the discrete (finite) case of the Popper-Renyi theory of conditional probability, introducing discrete conditional probabilistic models for knowledge and conditional belief, and comparing them with the more standard plausibility models. We also consider a related notion, that of safe
Directory of Open Access Journals (Sweden)
Olga Urek
2016-07-01
Full Text Available In this article I provide a representational and a constraint-based analysis of four interacting palatalization processes operative in Modern Standard Latvian: velar affrication, velar palatalization, yod-palatalization and front vowel raising. The main advantage of the representational account developed here is that it treats all of the mentioned Latvian processes as strictly assimilatory, and at the same time avoids purely stipulative mechanisms characteristic of many feature-geometric approaches to cross-category interactions. The article also contributes to the debate on the role of geometric subsegmental representations in constraint-based computational models, by demonstrating that a principled account of locality, transparency and blocking effects in Latvian palatalization requires the reference to hierarchical autosegmental structures.
In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories.
Maia, Paulo; Rocha, Miguel; Rocha, Isabel
2016-03-01
Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed.
DEFF Research Database (Denmark)
Machado, Daniel; Herrgard, Markus
2014-01-01
Constraint-based models of metabolism are a widely used framework for predicting flux distributions in genome-scale biochemical networks. The number of published methods for integration of transcriptomic data into constraint-based models has been rapidly increasing. So far the predictive capability...... of these methods has not been critically evaluated and compared. This work presents a survey of recently published methods that use transcript levels to try to improve metabolic flux predictions either by generating flux distributions or by creating context-specific models. A subset of these methods...... of the results to method-specific parameters is also evaluated, as well as their robustness to noise in the data. The results show that none of the methods outperforms the others for all cases. Also, it is observed that for many conditions, the predictions obtained by simple flux balance analysis using growth...
ParallelPC: an R package for efficient constraint based causal exploration
Le, Thuc Duy; Hoang, Tao; Li, Jiuyong; Liu, Lin; Hu, Shu
2015-01-01
Discovering causal relationships from data is the ultimate goal of many research areas. Constraint-based causal exploration algorithms, such as PC, FCI, RFCI, PC-simple, IDA and Joint-IDA, have achieved significant progress and have many applications. A common problem with these methods is the high computational complexity, which hinders their applications in real-world high-dimensional datasets, e.g. gene expression datasets. In this paper, we present an R package, ParallelPC, that includes th...
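The constraint-based algorithms that ParallelPC parallelizes all share the same core step: delete the edge X–Z whenever X and Z test as independent given some conditioning set. A Python sketch of that skeleton phase (ParallelPC itself is an R package) on synthetic chain data X → Y → Z, using Fisher's z test on (partial) correlations:

```python
import numpy as np
from math import sqrt, log, erf

# Sketch of the skeleton phase of the PC algorithm: remove edge X–Z
# when X ⊥ Z given some conditioning set, tested with Fisher's z on
# (partial) correlations. Data are synthetic, generated as X -> Y -> Z,
# so the expected skeleton is X–Y and Y–Z only.

def fisher_z_pvalue(r, n, k):
    """p-value for zero (partial) correlation r; n samples,
    conditioning-set size k."""
    r = max(min(r, 0.999999), -0.999999)
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - k - 3)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)
data = {"X": x, "Y": y, "Z": z}

def pcorr(a, b, c):
    """Correlation of a and b after regressing out c."""
    ra, rb = (v - np.polyval(np.polyfit(c, v, 1), c) for v in (a, b))
    return float(np.corrcoef(ra, rb)[0, 1])

alpha = 0.001
edges = {("X", "Y"), ("X", "Z"), ("Y", "Z")}
# order-0 tests (marginal correlation), then order-1 (one conditioner)
for u, v in sorted(edges):
    r = float(np.corrcoef(data[u], data[v])[0, 1])
    if fisher_z_pvalue(r, n, 0) > alpha:
        edges.discard((u, v))
for u, v in sorted(edges):
    for w in set(data) - {u, v}:
        if fisher_z_pvalue(pcorr(data[u], data[v], data[w]), n, 1) > alpha:
            edges.discard((u, v))
print(sorted(edges))  # expected skeleton: X–Y and Y–Z
```

The independence tests for different edges are mutually independent computations, which is exactly the structure ParallelPC exploits for parallel speed-up.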
Constraint-based modeling and kinetic analysis of the Smad dependent TGF-beta signaling pathway.
Directory of Open Access Journals (Sweden)
Zhike Zi
Full Text Available BACKGROUND: Investigation of the dynamics and regulation of the TGF-beta signaling pathway is central to the understanding of complex cellular processes such as growth, apoptosis, and differentiation. In this study, we aim to use a systems biology approach to provide a dynamic analysis of this pathway. METHODOLOGY/PRINCIPAL FINDINGS: We propose a constraint-based modeling method to build a comprehensive mathematical model for the Smad-dependent TGF-beta signaling pathway by fitting the experimental data and incorporating qualitative constraints from the experimental analysis. The performance of the model generated by the constraint-based modeling method is significantly improved compared to the model obtained by only fitting the quantitative data. The model agrees well with the experimental analysis of the TGF-beta pathway, such as the time course of nuclear phosphorylated Smad, the subcellular location of Smad, and the signal response of Smad phosphorylation to different doses of TGF-beta. CONCLUSIONS/SIGNIFICANCE: The simulation results indicate that the signal response to TGF-beta is regulated by the balance between clathrin-dependent endocytosis and non-clathrin-mediated endocytosis. This model provides a basis to build upon as new, more precise experimental data emerge. The constraint-based modeling method can also be applied to quantitative modeling of other signaling pathways.
CELL SCALE HOST-PATHOGEN MODELING: ANOTHER BRANCH IN THE EVOLUTION OF CONSTRAINT-BASED METHODS
Directory of Open Access Journals (Sweden)
Neema Jamshidi
2015-10-01
Full Text Available Constraint-based models have become popular methods for systems biology as they enable the integration of complex, disparate datasets in a biologically cohesive framework that also supports the description of biological processes in terms of basic physicochemical constraints and relationships. The scope, scale, and application of genome-scale models have grown from single-cell bacteria to multi-cellular interaction modeling; host-pathogen modeling represents one of these examples at the current horizon of constraint-based methods. There are now a small number of examples of host-pathogen constraint-based models in the literature; however, there has not yet been a definitive description of the methodology required for the functional integration of genome-scale models in order to generate simulation-capable host-pathogen models. Herein we outline a systematic procedure to produce functional host-pathogen models, highlighting steps which require debugging and iterative revisions in order to successfully build a functional model. The construction of such models will enable the exploration of host-pathogen interactions by leveraging the growing wealth of omic data in order to better understand mechanisms of infection and identify novel therapeutic strategies.
Probabilistic authenticated quantum dialogue
Hwang, Tzonelih; Luo, Yi-Ping
2015-12-01
This work proposes a probabilistic authenticated quantum dialogue (PAQD) based on Bell states with the following notable features. (1) The dialogue is encoded in a probabilistic way, i.e., the same messages can be encoded into different quantum states, whereas in the state-of-the-art authenticated quantum dialogue (AQD) the dialogue is encoded in a deterministic way. (2) The pre-shared secret key between the two communicants can be reused without any security loophole. (3) Each dialogue in the proposed PAQD can be exchanged within only one step of quantum communication and one step of classical communication, whereas in the state-of-the-art AQD protocols both communicants have to run a QKD protocol for each dialogue, and each dialogue requires multiple quantum as well as classical communication steps. (4) The proposed scheme can resist the man-in-the-middle attack, the modification attack, and other well-known attacks.
Probabilistic Event Categorization
Wiebe, J; Duan, L; Wiebe, Janyce; Bruce, Rebecca; Duan, Lei
1997-01-01
This paper describes the automation of a new text categorization task. The categories assigned in this task are more syntactically, semantically, and contextually complex than those typically assigned by fully automatic systems that process unseen test data. Our system for assigning these categories is a probabilistic classifier, developed with a recent method for formulating a probabilistic model from a predefined set of potential features. This paper focuses on feature selection. It presents a number of fully automatic features. It identifies and evaluates various approaches to organizing collocational properties into features, and presents the results of experiments covarying type of organization and type of property. We find that one organization is not best for all kinds of properties, so this is an experimental parameter worth investigating in NLP systems. In addition, the results suggest a way to take advantage of properties that are low frequency but strongly indicative of a class. The problems of rec...
Geothermal probabilistic cost study
Energy Technology Data Exchange (ETDEWEB)
Orren, L.H.; Ziman, G.M.; Jones, S.C.; Lee, T.K.; Noll, R.; Wilde, L.; Sadanand, V.
1981-08-01
A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM model is used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of the geothermal risk which can shift the risk among different agents are analyzed. The leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance are examined. (MHR)
On probabilistic Mandelbrot maps
Energy Technology Data Exchange (ETDEWEB)
Andreadis, Ioannis [International School of The Hague, Wijndaelerduin 1, 2554 BX The Hague (Netherlands)], E-mail: i.andreadis@ish-rijnlandslyceum.nl; Karakasidis, Theodoros E. [Department of Civil Engineering, University of Thessaly, GR-38334 Volos (Greece)], E-mail: thkarak@uth.gr
2009-11-15
In this work, we propose a definition for a probabilistic Mandelbrot map in order to extend and support the study initiated by Argyris et al. [Argyris J, Andreadis I, Karakasidis Th. On perturbations of the Mandelbrot map. Chaos, Solitons and Fractals 2000;11:1131-1136.] with regard to the numerical stability of the Mandelbrot and Julia set of the Mandelbrot map when subjected to noise.
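A noise-perturbed Mandelbrot iteration of the kind studied above can be explored numerically by Monte Carlo: iterate z ← z² + c + η with small random η and estimate the probability that the orbit escapes. This is a hedged sketch; the paper's precise definition of the probabilistic map may differ.

```python
import random

# Monte Carlo sketch of a noise-perturbed Mandelbrot iteration
# z <- z^2 + c + eta, with eta uniform in a small box around 0.
# We estimate the probability that the orbit of 0 escapes |z| > 2.
# (The paper's exact definition of the probabilistic map may differ.)

def escapes(c, noise, rng, steps=100, bound=2.0):
    z = 0j
    for _ in range(steps):
        eta = complex(rng.uniform(-noise, noise), rng.uniform(-noise, noise))
        z = z * z + c + eta
        if abs(z) > bound:
            return True
    return False

def escape_probability(c, noise, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(escapes(c, noise, rng) for _ in range(trials)) / trials

# Deep inside the set the noise is harmless; well outside, escape is sure.
print(escape_probability(0j, noise=0.02))          # interior point
print(escape_probability(0.5 + 0j, noise=0.02))    # exterior point
print(escape_probability(-0.75 + 0.05j, noise=0.02))  # near the boundary
```

Points near the boundary are where the noise matters, which is where the stability questions raised by Argyris et al. become interesting.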
Probabilistic reasoning for assembly-based 3D modeling
Chaudhuri, Siddhartha
2011-01-01
Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components. © 2011 ACM.
Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox v2.0.
Schellenberger, Jan; Que, Richard; Fleming, Ronan M T; Thiele, Ines; Orth, Jeffrey D; Feist, Adam M; Zielinski, Daniel C; Bordbar, Aarash; Lewis, Nathan E; Rahmanian, Sorena; Kang, Joseph; Hyduke, Daniel R; Palsson, Bernhard Ø
2011-08-04
Over the past decade, a growing community of researchers has emerged around the use of constraint-based reconstruction and analysis (COBRA) methods to simulate, analyze and predict a variety of metabolic phenotypes using genome-scale models. The COBRA Toolbox, a MATLAB package for implementing COBRA methods, was presented earlier. Here we present a substantial update of this in silico toolbox. Version 2.0 of the COBRA Toolbox expands the scope of computations by including in silico analysis methods developed since its original release. New functions include (i) network gap filling, (ii) (13)C analysis, (iii) metabolic engineering, (iv) omics-guided analysis and (v) visualization. As with the first version, the COBRA Toolbox reads and writes systems biology markup language-formatted models. In version 2.0, we improved performance, usability and the level of documentation. A suite of test scripts can now be used to learn the core functionality of the toolbox and validate results. This toolbox lowers the barrier of entry to use powerful COBRA methods.
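The core computation underlying COBRA methods is flux balance analysis: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The COBRA Toolbox itself is MATLAB; this is a minimal Python illustration on an invented 3-reaction toy network, using SciPy's linear programming solver:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal flux-balance-analysis sketch of the kind COBRA performs at
# genome scale: maximize a target flux subject to S·v = 0 and bounds.
# The toy network is invented:
#   R1: -> A (uptake),  R2: A -> B,  R3: B -> (objective)
S = np.array([
    [1, -1,  0],   # metabolite A balance
    [0,  1, -1],   # metabolite B balance
])
bounds = [(0, 10), (0, 8), (0, None)]   # uptake capped at 10, R2 at 8
c = [0, 0, -1]                          # maximize v3 == minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)   # optimal fluxes; the R2 capacity of 8 is binding
```

At steady state v1 = v2 = v3, so the optimum is pinned at the tightest bound along the pathway, the kind of bottleneck reasoning that strain-design and gap-filling methods in the toolbox automate at genome scale.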
Probabilistic Tsunami Hazard Analysis
Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.
2006-12-01
The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become a standard practice in the evaluation and mitigation of seismic hazard to populations, in particular with respect to structures, infrastructure and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates the design of effective seismic-resistant buildings as well as the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunami. There are great advantages to implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on the traditional PSHA and is therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use a full tsunami waveform computation in lieu of the attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip). This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes…
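The probabilistic bookkeeping that PTHA shares with PSHA can be sketched in a few lines: given scenario events with annual rates and precomputed wave heights at a site, the hazard curve gives the probability of exceeding each height over an exposure time under a Poisson occurrence model. The scenario rates and heights below are invented for illustration:

```python
from math import exp

# Hedged sketch of the core PTHA aggregation: scenario earthquakes with
# annual rates and precomputed tsunami wave heights at one coastal site
# yield a hazard curve P(height >= h within T years) under a Poisson
# occurrence model. Rates and heights are illustrative only.

scenarios = [  # (annual rate, wave height in metres at the site)
    (1 / 100, 0.5),
    (1 / 500, 2.0),
    (1 / 2000, 5.0),
]

def p_exceed(h, years):
    # total rate of events whose wave height reaches h, then
    # P(at least one such event in T years) = 1 - exp(-rate * T)
    rate = sum(r for r, height in scenarios if height >= h)
    return 1 - exp(-rate * years)

for h in (0.5, 2.0, 5.0):
    print(f"P(height >= {h} m in 50 yr) = {p_exceed(h, 50):.4f}")
```

In the method described above, the per-scenario heights would come from Green's function summation over subfault waveforms rather than from a lookup table, but the aggregation into a hazard curve is the same.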
Exact and Approximate Probabilistic Symbolic Execution
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
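The path-probability quantification mentioned above reduces, for uniform inputs over a finite domain, to the fraction of inputs satisfying each path condition. Real tools count solutions symbolically (e.g., with model counters); this toy sketch simply enumerates a small integer domain:

```python
# Tiny sketch of the probability computation behind probabilistic
# symbolic execution: for a uniformly random input over a finite
# domain, a path's probability is the fraction of inputs satisfying
# its path condition. Tools count solutions symbolically; we enumerate.

DOMAIN = range(0, 100)  # x uniform over 0..99

def path_probability(path_condition):
    sat = sum(1 for x in DOMAIN if path_condition(x))
    return sat / len(DOMAIN)

# Program under analysis (illustrative):
#   if x < 30:        -> path A
#   elif x % 2 == 0:  -> path B
#   else:             -> path C
pA = path_probability(lambda x: x < 30)
pB = path_probability(lambda x: x >= 30 and x % 2 == 0)
pC = path_probability(lambda x: x >= 30 and x % 2 == 1)
print(pA, pB, pC)  # → 0.3 0.35 0.35
```

Scheduler synthesis then amounts to choosing, at each nondeterministic point, the branch whose subtree carries the larger accumulated path probability toward the target event.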
Lipschitz Parametrization of Probabilistic Graphical Models
Honorio, Jean
2012-01-01
We show that the log-likelihood of several probabilistic graphical models is Lipschitz continuous with respect to the lp-norm of the parameters. We discuss several implications of Lipschitz parametrization. We present an upper bound of the Kullback-Leibler divergence that allows understanding methods that penalize the lp-norm of differences of parameters as the minimization of that upper bound. The expected log-likelihood is lower bounded by the negative lp-norm, which allows understanding the generalization ability of probabilistic models. The exponential of the negative lp-norm is involved in the lower bound of the Bayes error rate, which shows that it is reasonable to use parameters as features in algorithms that rely on metric spaces (e.g. classification, dimensionality reduction, clustering). Our results do not rely on specific algorithms for learning the structure or parameters. We show preliminary results for activity recognition and temporal segmentation.
Probabilistic Dynamic Logic of Phenomena and Cognition
Vityaev, Evgenii; Perlovsky, Leonid; Smerdov, Stanislav
2011-01-01
The purpose of this paper is to develop further the main concepts of Phenomena Dynamic Logic (P-DL) and Cognitive Dynamic Logic (C-DL), presented in the previous paper. The specific character of these logics is in matching vagueness or fuzziness of similarity measures to the uncertainty of models. These logics are based on the following fundamental notions: generality relation, uncertainty relation, simplicity relation, similarity maximization problem with empirical content and enhancement (learning) operator. We develop these notions in terms of logic and probability and developed a Probabilistic Dynamic Logic of Phenomena and Cognition (P-DL-PC) that relates to the scope of probabilistic models of brain. In our research the effectiveness of suggested formalization is demonstrated by approximation of the expert model of breast cancer diagnostic decisions. The P-DL-PC logic was previously successfully applied to solving many practical tasks and also for modelling of some cognitive processes.
Probabilistic Modeling of Timber Structures
DEFF Research Database (Denmark)
Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro
2005-01-01
The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present pro...... probabilistic model for these basic properties is presented and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties and the duration of load effects.
The localization and correction of errors in models: a constraint-based approach
Piechowiak, S.; Rodriguez, J
2005-01-01
Model-based diagnosis and constraint-based reasoning are well-known generic paradigms for which the most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting the errors in a model. We present a method to debug a model. To help the debugging task, we propose to use the model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.
Storing and Querying Probabilistic XML Using a Probabilistic Relational DBMS
Hollander, E.S.; van Keulen, M.
2010-01-01
This work explores the feasibility of storing and querying probabilistic XML in a probabilistic relational database. Our approach is to adapt known techniques for mapping XML to relational data such that the possible worlds are preserved. We show that this approach can work for any XML-to-relational
Quantum probability for probabilists
Meyer, Paul-André
1993-01-01
In recent years, the classical theory of stochastic integration and stochastic differential equations has been extended to a non-commutative set-up to develop models for quantum noises. The author, a specialist of classical stochastic calculus and martingale theory, tries to provide an introduction to this rapidly expanding field in a way which should be accessible to probabilists familiar with the Ito integral. It can also, on the other hand, provide a means of access to the methods of stochastic calculus for physicists familiar with Fock space analysis.
Constraint-Based Fuzzy Models for an Environment with Heterogeneous Information-Granules
Institute of Scientific and Technical Information of China (English)
K. Robert Lai; Yi-Yuan Chiang
2006-01-01
A novel framework for fuzzy modeling and model-based control design is described. Based on the theory of fuzzy constraint processing, the fuzzy model can be viewed as a generalized Takagi-Sugeno (TS) fuzzy model with fuzzy functional consequences. It uses multivariate antecedent membership functions obtained by granular-prototype fuzzy clustering methods and consequent fuzzy equations obtained by fuzzy regression techniques. Constrained optimization is used to estimate the consequent parameters, where the constraints are based on control-relevant a priori knowledge about the modeled process. The fuzzy-constraint-based approach provides the following features. 1) The knowledge base of a constraint-based fuzzy model can incorporate information with various types of fuzzy predicates. Consequently, it is easy to provide a fusion of different types of knowledge. The knowledge can be from data-driven approaches and/or from control-relevant physical models. 2) A corresponding inference mechanism for the proposed model can deal with heterogeneous information granules. 3) Both numerical and linguistic inputs can be accepted for predicting new outputs. The proposed techniques are demonstrated by means of two examples: a nonlinear function-fitting problem and the well-known Box-Jenkins gas furnace process. The first example shows that the proposed model uses fewer fuzzy predicates achieving similar results with the traditional rule-based approach, while the second shows the performance can be significantly improved when the control-relevant constraints are considered.
A General Framework for Probabilistic Characterizing Formulae
DEFF Research Database (Denmark)
Sack, Joshua; Zhang, Lijun
2012-01-01
Recently, a general framework on characteristic formulae was proposed by Aceto et al. It offers a simple theory that allows one to easily obtain characteristic formulae of many non-probabilistic behavioral relations. Our paper studies their techniques in a probabilistic setting. We provide a general method for determining characteristic formulae of behavioral relations for probabilistic automata using fixed-point probability logics. We consider such behavioral relations as simulations and bisimulations, probabilistic bisimulations, probabilistic weak simulations, and probabilistic forward......
Studies on a Novel Neuro-dynamic Model for Prediction Learning of Fluctuated Data Streams
2014-11-04
Final report for AOARD grant 134067, covering 06 Mar 2013 - 05 Sep 2014. The proposed study investigates a novel neuro-dynamic model which can learn to predict or regenerate fluctuated data streams.
Probabilistic population aging
2017-01-01
We merge two methodologies, prospective measures of population aging and probabilistic population forecasts. We compare the speed of change and variability in forecasts of the old age dependency ratio and the prospective old age dependency ratio as well as the same comparison for the median age and the prospective median age. While conventional measures of population aging are computed on the basis of the number of years people have already lived, prospective measures are computed also taking account of the expected number of years they have left to live. Those remaining life expectancies change over time and differ from place to place. We compare the probabilistic distributions of the conventional and prospective measures using examples from China, Germany, Iran, and the United States. The changes over time and the variability of the prospective indicators are smaller than those that are observed in the conventional ones. A wide variety of new results emerge from the combination of methodologies. For example, for Germany, Iran, and the United States the likelihood that the prospective median age of the population in 2098 will be lower than it is today is close to 100 percent. PMID:28636675
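The distinction between conventional and prospective measures can be sketched in a few lines; the population counts and remaining-life-expectancy thresholds below are invented for illustration, not taken from the paper:

```python
# Sketch: conventional vs. prospective old-age dependency ratio (OADR).
# Population counts (in thousands, by age-group midpoint) are invented.
# The prospective threshold is the age at which remaining life expectancy
# (RLE) falls to 15 years, a common convention for prospective measures.

def oadr(pop_by_age, old_age_threshold):
    """Old-age dependency ratio: old population / working-age population."""
    old = sum(n for age, n in pop_by_age.items() if age >= old_age_threshold)
    working = sum(n for age, n in pop_by_age.items() if 20 <= age < old_age_threshold)
    return old / working

pop = {10: 500, 22: 600, 32: 650, 42: 620, 52: 580, 62: 500, 72: 350, 82: 150}

rle_threshold_1990 = 65  # age with RLE == 15 under older mortality (assumed)
rle_threshold_2050 = 75  # later threshold under improved mortality (assumed)

conventional = oadr(pop, rle_threshold_1990)
prospective = oadr(pop, rle_threshold_2050)
print(round(conventional, 3), round(prospective, 3))
# The prospective ratio is smaller: fewer people count as "old" when the
# old-age threshold moves with remaining life expectancy.
```

Because the threshold itself shifts as longevity improves, the prospective indicator changes more slowly over time, which is the effect the abstract reports.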
Quantum probabilistic logic programming
Balu, Radhakrishnan
2015-05-01
We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite valued ones, to express probability distributions and statistical correlations, a powerful feature for capturing relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples to combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.
Passage Retrieval: A Probabilistic Technique.
Melucci, Massimo
1998-01-01
Presents a probabilistic technique to retrieve passages from texts having a large size or heterogeneous semantic content. Results of experiments comparing the probabilistic technique to one based on a text segmentation algorithm revealed that the passage size affects passage retrieval performance; text organization and query generality may have an…
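The general idea of probabilistic passage retrieval can be illustrated with a simple query-likelihood sketch; this is a generic stand-in, not necessarily Melucci's model, and the toy document and smoothing scheme are assumptions:

```python
import math
from collections import Counter

# Generic illustration: score fixed-size passages by query likelihood with
# Laplace smoothing, so a long or heterogeneous document can be retrieved
# via its best-matching passage rather than as a whole.

def passages(tokens, size=5):
    """Split a token list into consecutive fixed-size passages."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def score(passage, query, vocab_size):
    """Log-likelihood of the query under a smoothed passage language model."""
    counts = Counter(passage)
    return sum(math.log((counts[t] + 1) / (len(passage) + vocab_size)) for t in query)

doc = ("the fuzzy model uses clustering . unrelated filler text here . "
       "probabilistic retrieval of passages works well").split()
vocab = len(set(doc))
query = ["probabilistic", "passages"]

best = max(passages(doc), key=lambda p: score(p, query, vocab))
print(best)  # the passage containing both query terms scores highest
```

Note how passage size becomes a tunable parameter here, which matches the abstract's finding that passage size affects retrieval performance.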
Probabilistic modeling of timber structures
DEFF Research Database (Denmark)
Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro
2007-01-01
The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publ...] and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties is presented and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties and the duration of load effects.
Domain Knowledge Uncertainty and Probabilistic Parameter Constraints
Mao, Yi
2012-01-01
Incorporating domain knowledge into the modeling process is an effective way to improve learning accuracy. However, as it is provided by humans, domain knowledge can only be specified with some degree of uncertainty. We propose to explicitly model such uncertainty through probabilistic constraints over the parameter space. In contrast to hard parameter constraints, our approach is effective also when the domain knowledge is inaccurate and generally results in superior modeling accuracy. We focus on generative and conditional modeling where the parameters are assigned a Dirichlet or Gaussian prior and demonstrate the framework with experiments on both synthetic and real-world data.
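The contrast between hard and probabilistic parameter constraints can be sketched with a Dirichlet prior over a two-outcome parameter; the counts and trust level below are hypothetical:

```python
import numpy as np

# Domain knowledge (possibly inaccurate): "outcome A occurs ~70% of the time".
# A hard constraint would fix p_A = 0.7 exactly; a probabilistic constraint
# encodes the same belief as a Dirichlet prior whose concentration reflects
# how much the knowledge is trusted.

counts = np.array([40, 60])           # observed data actually favors outcome B
trust = 20                            # pseudo-count strength of the prior
prior = trust * np.array([0.7, 0.3])  # Dirichlet hyperparameters

# Posterior mean under the Dirichlet-multinomial model:
posterior_mean = (counts + prior) / (counts.sum() + prior.sum())

hard = np.array([0.7, 0.3])           # hard constraint ignores the data
mle = counts / counts.sum()           # no constraint at all

print(mle, posterior_mean, hard)
# The probabilistic constraint pulls the estimate toward the prior belief,
# but lets accumulating data override inaccurate knowledge.
```

This is exactly why soft constraints remain effective when the domain knowledge is wrong: the data eventually dominates, whereas a hard constraint cannot be revised.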
Morin, Fanny; Courtecuisse, Hadrien; Reinertsen, Ingerid; Le Lann, Florian; Palombi, Olivier; Payan, Yohan; Chabanas, Matthieu
2017-08-01
During brain tumor surgery, planning and guidance are based on preoperative images which do not account for brain-shift. However, this deformation is a major source of error in image-guided neurosurgery and affects the accuracy of the procedure. In this paper, we present a constraint-based biomechanical simulation method to compensate for craniotomy-induced brain-shift that integrates the deformations of the blood vessels and cortical surface, using a single intraoperative ultrasound acquisition. Prior to surgery, a patient-specific biomechanical model is built from preoperative images, accounting for the vascular tree in the tumor region and brain soft tissues. Intraoperatively, a navigated ultrasound acquisition is performed directly in contact with the organ. Doppler and B-mode images are recorded simultaneously, enabling the extraction of the blood vessels and probe footprint, respectively. A constraint-based simulation is then executed to register the pre- and intraoperative vascular trees as well as the cortical surface with the probe footprint. Finally, preoperative images are updated to provide the surgeon with images corresponding to the current brain shape for navigation. The robustness of our method is first assessed using sparse and noisy synthetic data. In addition, quantitative results for five clinical cases are provided, first using landmarks set on blood vessels, then based on anatomical structures delineated in medical images. The average distances between paired vessel landmarks ranged from 3.51 to 7.32 mm before compensation. With our method, on average 67% of the brain-shift is corrected (range [1.26; 2.33]) against 57% using one of the closest existing works (range [1.71; 2.84]). Finally, our method is proven to be fully compatible with a surgical workflow in terms of execution times and user interactions. In this paper, a new constraint-based biomechanical simulation method is proposed to compensate for craniotomy-induced brain-shift.
A Probabilistic Genome-Wide Gene Reading Frame Sequence Model
DEFF Research Database (Denmark)
Have, Christian Theil; Mørk, Søren
...using the probabilistic logic programming language and machine learning system PRISM - a fast and efficient model prototyping environment, using bacterial gene finding performance as a benchmark of signal strength. The model is used to prune a set of gene predictions from an underlying gene finder...
Dynamic probabilistic CCA for analysis of affective behaviour
Nicolaou, Mihalis A.; Pavlovic, Vladimir; Pantic, Maja
2012-01-01
Fusing multiple continuous expert annotations is a crucial problem in machine learning and computer vision, particularly when dealing with uncertain and subjective tasks related to affective behaviour. Inspired by the concept of inferring shared and individual latent spaces in probabilistic CCA (PCC
Probabilistic quantum multimeters
Fiurasek, J; Fiurasek, Jaromir; Dusek, Miloslav
2004-01-01
We propose quantum devices that can probabilistically realize different projective measurements on a qubit. The desired measurement basis is selected by the quantum state of a program register. First, we analyze the phase-covariant multimeters for a large class of program states, then the universal multimeters for a special choice of program. In both cases we start with deterministic but erroneous devices and then proceed to devices that never make a mistake but from time to time give an inconclusive result. These multimeters are optimized (for a given type of program) with respect to the minimum probability of an inconclusive result. This concept is further generalized to multimeters that minimize the error rate for a given probability of an inconclusive result (or vice versa). Finally, we propose a generalization for qudits.
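The error-free-but-sometimes-inconclusive behavior described here is characteristic of unambiguous state discrimination; as a simpler stand-in for the paper's multimeters, the sketch below computes the optimal inconclusive probability (the Ivanovic-Dieks-Peres limit) for two equiprobable pure states:

```python
import numpy as np

# Error-free discrimination of two equiprobable pure states |a>, |b> succeeds
# only probabilistically: the minimal probability of an inconclusive outcome
# equals the state overlap |<a|b>| (the Ivanovic-Dieks-Peres limit).

def idp_inconclusive_prob(a, b):
    """Minimal inconclusive-result probability for two equiprobable pure states."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return abs(np.vdot(a, b))

a = np.array([1.0, 0.0])
theta = np.pi / 6
b = np.array([np.cos(theta), np.sin(theta)])

p_inc = idp_inconclusive_prob(a, b)
print(round(p_inc, 3))
# Orthogonal states are distinguished deterministically (p_inc = 0); nearly
# identical states are almost always inconclusive (p_inc -> 1).
```

The paper's devices trade this inconclusive probability against the error rate; the sketch shows only the extreme error-free point of that trade-off.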
Probabilistic retinal vessel segmentation
Wu, Chang-Hua; Agam, Gady
2007-03-01
Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.
Ship hull plate processing surface fairing with constraints based on B-spline
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The problem of ship hull plate processing surface fairing with constraints based on B-splines is solved in this paper. The algorithm for B-spline curve fairing with constraints is one of the most common methods in plane curve fairing. It can be applied to global and local curve fairing, can constrain the perturbation range of the control points and the shape variation of the curve, and gives good fairing results for plane curves. In this paper, a new fairing algorithm with constraints for curves and surfaces in space is presented. The method is then applied in experiments on ship hull plate processing surfaces. Finally, numerical results are presented to show the efficiency of the method.
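A minimal sketch of constrained fairing (not the paper's exact algorithm) smooths a control polygon while bounding each control point's perturbation, the kind of constraint described above:

```python
import numpy as np

# Sketch: fair a B-spline's control polygon by pulling each interior control
# point toward its neighbors' midpoint, while clamping the total perturbation
# of every control point to a user-set bound (the constraint on perturbation
# range mentioned in the abstract). Step sizes and bounds are illustrative.

def fair_control_points(P, max_shift=0.05, iters=50, step=0.25):
    P = np.asarray(P, dtype=float)
    P0 = P.copy()
    for _ in range(iters):
        target = 0.5 * (P[:-2] + P[2:])           # neighbor midpoints
        P[1:-1] += step * (target - P[1:-1])      # local smoothing move
        shift = P - P0
        norm = np.linalg.norm(shift, axis=1, keepdims=True)
        too_far = norm > max_shift                # enforce perturbation bound
        P = np.where(too_far, P0 + shift * (max_shift / np.maximum(norm, 1e-12)), P)
    return P

pts = [[0, 0], [1, 0.4], [2, -0.3], [3, 0.5], [4, 0]]
faired = fair_control_points(pts, max_shift=0.2)
print(faired.round(3))
```

The endpoints are held fixed and no control point moves more than `max_shift`, yet the second differences of the polygon (a simple roughness measure) shrink, which is the essence of fairing under constraints.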
Coupling Phonology and Phonetics in a Constraint-Based Gestural Model
Walther, M; Walther, Markus; Kroeger, Bernd J.
1994-01-01
An implemented approach which couples a constraint-based phonology component with an articulatory speech synthesizer is proposed. Articulatory gestures ensure a tight connection between both components, as they comprise both physical-phonetic and phonological aspects. The phonological modelling of e.g. syllabification and phonological processes such as German final devoicing is expressed in the constraint logic programming language CUF. Extending CUF by arithmetic constraints allows the simultaneous description of both phonology and phonetics. Thus declarative lexicalist theories of grammar such as HPSG may be enriched up to the level of detailed phonetic realisation. Initial acoustic demonstrations show that our approach is in principle capable of synthesizing full utterances in a linguistically motivated fashion.
Chang, Su-Chao; Chou, Chi-Min
2012-11-01
The objective of this study was to determine empirically the role of constraint-based and dedication-based influences as drivers of the intention to continue using online shopping websites. Constraint-based influences consist of two variables: trust and perceived switching costs. Dedication-based influences consist of three variables: satisfaction, perceived usefulness, and trust. The current results indicate that both constraint-based and dedication-based influences are important drivers of the intention to continue using online shopping websites. The data also show that trust has the strongest total effect on online shoppers' intention to continue using online shopping websites. In addition, the results indicate that the antecedents of constraint-based influences, technical bonds (e.g., perceived operational competence and perceived website interactivity) and social bonds (e.g., perceived relationship investment, community building, and intimacy), have indirect positive effects on the intention to continue using online shopping websites. Based on these findings, this research suggests that online shopping websites should build both constraint-based and dedication-based influences to simultaneously strengthen users' continued online shopping behavior.
Probabilistic Decision Graphs - Combining Verification and AI Techniques for Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2004-01-01
We adopt probabilistic decision graphs developed in the field of automated verification as a tool for probabilistic model representation and inference. We show that probabilistic inference has linear time complexity in the size of the probabilistic decision graph, that the smallest probabilistic ...
Dynamic shaping of dopamine signals during probabilistic Pavlovian conditioning.
Hart, Andrew S; Clark, Jeremy J; Phillips, Paul E M
2015-01-01
Cue- and reward-evoked phasic dopamine activity during Pavlovian and operant conditioning paradigms is well correlated with reward-prediction errors from formal reinforcement learning models, which feature teaching signals in the form of discrepancies between actual and expected reward outcomes. Additionally, in learning tasks where conditioned cues probabilistically predict rewards, dopamine neurons show sustained cue-evoked responses that are correlated with the variance of reward and are maximal to cues predicting rewards with a probability of 0.5. Therefore, it has been suggested that sustained dopamine activity after cue presentation encodes the uncertainty of impending reward delivery. In the current study we examined the acquisition and maintenance of these neural correlates using fast-scan cyclic voltammetry in rats implanted with carbon fiber electrodes in the nucleus accumbens core during probabilistic Pavlovian conditioning. The advantage of this technique is that we can sample from the same animal and recording location throughout learning with single trial resolution. We report that dopamine release in the nucleus accumbens core contains correlates of both expected value and variance. A quantitative analysis of these signals throughout learning, and during the ongoing updating process after learning in probabilistic conditions, demonstrates that these correlates are dynamically encoded during these phases. Peak CS-evoked responses are correlated with expected value and predominate during early learning while a variance-correlated sustained CS signal develops during the post-asymptotic updating phase.
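The value and variance correlates can be sketched with a simple delta-rule learner; the learning rate and trial counts are arbitrary, and this illustrates the computational idea rather than the paper's voltammetry analysis:

```python
import random

# Sketch: a delta-rule learner tracking both the expected value and the
# variance of reward for probabilistic cues. Reward variance p(1 - p) is
# maximal at p = 0.5, which is why a variance correlate peaks for cues
# predicting reward with probability 0.5.

def learn(p_reward, trials=5000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    v, var = 0.0, 0.0
    for _ in range(trials):
        r = 1.0 if rng.random() < p_reward else 0.0
        delta = r - v                     # reward-prediction error
        v += alpha * delta                # expected value estimate
        var += alpha * (delta ** 2 - var) # running estimate of outcome variance
    return v, var

for p in (0.0, 0.5, 1.0):
    v, var = learn(p)
    print(p, round(v, 2), round(var, 2))
# Value tracks p; the variance estimate is near 0 for certain outcomes and
# largest for p = 0.5, mirroring the sustained cue-evoked signal described.
```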
14th International Probabilistic Workshop
Taerwe, Luc; Proske, Dirk
2017-01-01
This book presents the proceedings of the 14th International Probabilistic Workshop that was held in Ghent, Belgium in December 2016. Probabilistic methods are currently of crucial importance for research and developments in the field of engineering, which face challenges presented by new materials and technologies and rapidly changing societal needs and values. Contemporary needs related to, for example, performance-based design, service-life design, life-cycle analysis, product optimization, assessment of existing structures and structural robustness give rise to new developments as well as accurate and practically applicable probabilistic and statistical engineering methods to support these developments. These proceedings are a valuable resource for anyone interested in contemporary developments in the field of probabilistic engineering applications.
Common Difficulties with Probabilistic Reasoning.
Hope, Jack A.; Kelly, Ivan W.
1983-01-01
Several common errors reflecting difficulties in probabilistic reasoning are identified, relating to ambiguity, previous outcomes, sampling, unusual events, and estimating. Knowledge of these mistakes and interpretations may help mathematics teachers understand the thought processes of their students. (MNS)
CAD Parts-Based Assembly Modeling by Probabilistic Reasoning
Zhang, Kai-Ke
2016-04-11
Nowadays, an increasing number of parts and sub-assemblies are publicly available, and they can be used directly for product development instead of being created from scratch. In this paper, we propose an interactive design framework for efficient and smart assembly modeling, in order to improve design efficiency. Our approach is based on probabilistic reasoning. Given a collection of industrial assemblies, we learn a probabilistic graphical model from the relationships between the parts of the assemblies. In the modeling stage, this probabilistic model is then used to suggest the most likely parts compatible with the current assembly. Finally, the parts are assembled under certain geometric constraints. We demonstrate the effectiveness of our framework through a variety of assembly models produced by our prototype system. © 2015 IEEE.
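The suggestion step can be sketched with plain co-occurrence statistics; the parts corpus below is invented, and the real system learns a richer probabilistic graphical model:

```python
from collections import Counter
from itertools import combinations

# Sketch of the suggestion step on hypothetical data: estimate how likely a
# candidate part is to co-occur with the parts already placed, using pairwise
# co-occurrence counts from a corpus of example assemblies, then rank.

assemblies = [
    {"bolt", "nut", "washer"},
    {"bolt", "nut", "bracket"},
    {"bolt", "washer", "plate"},
    {"gear", "shaft", "bearing"},
    {"gear", "shaft", "key"},
]

pair_counts = Counter()
part_counts = Counter()
for asm in assemblies:
    part_counts.update(asm)
    pair_counts.update(frozenset(p) for p in combinations(sorted(asm), 2))

def suggest(current, k=2):
    """Rank candidate parts by average co-occurrence probability with the assembly."""
    scores = {}
    for cand in part_counts:
        if cand in current:
            continue
        score = sum(pair_counts[frozenset((cand, p))] / part_counts[p] for p in current)
        scores[cand] = score / len(current)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(suggest({"bolt", "nut"}))  # fastener parts rank above gear-train parts
```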
Sun, Daqiang; van Erp, Theo G M; Thompson, Paul M; Bearden, Carrie E; Daley, Melita; Kushan, Leila; Hardt, Molly E; Nuechterlein, Keith H; Toga, Arthur W; Cannon, Tyrone D
2009-12-01
No objective diagnostic biomarkers or laboratory tests have yet been developed for psychotic illness. Magnetic resonance imaging (MRI) studies consistently find significant abnormalities in multiple brain structures in psychotic patients relative to healthy control subjects, but these abnormalities show substantial overlap with anatomic variation that is in the normal range and therefore nondiagnostic. Recently, efforts have been made to discriminate psychotic patients from healthy individuals using machine-learning-based pattern classification methods on MRI data. Three-dimensional cortical gray matter density (GMD) maps were generated for 36 patients with recent-onset psychosis and 36 sex- and age-matched control subjects using a cortical pattern matching method. Between-group differences in GMD were evaluated. Second, the sparse multinomial logistic regression classifier included in the Multivariate Pattern Analysis in Python machine-learning package was applied to the cortical GMD maps to discriminate psychotic patients from control subjects. Patients showed significantly lower GMD, particularly in prefrontal, cingulate, and lateral temporal brain regions. Pattern classification analysis achieved 86.1% accuracy in discriminating patients from controls using leave-one-out cross-validation. These results suggest that even at the early stage of illness, psychotic patients present distinct patterns of regional cortical gray matter changes that can be discriminated from the normal pattern. These findings indicate that we can detect complex patterns of brain abnormality in early stages of psychotic illness, which has critical implications for early identification and intervention in individuals at ultra-high risk for developing psychosis/schizophrenia.
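The evaluation protocol (leave-one-out cross-validation) can be sketched on synthetic data; here a deliberately simple nearest-centroid classifier stands in for the sparse multinomial logistic regression used in the study, and the simulated "GMD" shift is an assumption:

```python
import numpy as np

# Sketch of leave-one-out cross-validation with a simple stand-in classifier
# (nearest class centroid) on synthetic "gray matter density" features. The
# actual study used sparse multinomial logistic regression on cortical maps.

rng = np.random.default_rng(1)
n, d = 40, 50
# Patients (label 1) simulated with slightly lower mean "GMD" than controls (0).
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(-0.6, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

correct = 0
for i in range(n):                      # leave-one-out loop
    mask = np.arange(n) != i
    Xtr, ytr = X[mask], y[mask]
    c0 = Xtr[ytr == 0].mean(axis=0)     # class centroids from training folds
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]

accuracy = correct / n
print(round(accuracy, 3))
# A modest per-feature shift spread across many features separates the
# groups well, which is the intuition behind multivariate pattern analysis.
```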
Energy Technology Data Exchange (ETDEWEB)
Feist, AM; Nagarajan, H; Rotaru, AE; Tremblay, PL; Zhang, T; Nevin, KP; Lovley, DR; Zengler, K
2014-04-24
Geobacter species are of great interest for environmental and biotechnology applications as they can carry out direct electron transfer to insoluble metals or other microorganisms and have the ability to assimilate inorganic carbon. Here, we report on the capability and key enabling metabolic machinery of Geobacter metallireducens GS-15 to carry out CO2 fixation and direct electron transfer to iron. An updated metabolic reconstruction was generated, growth screens on targeted conditions of interest were performed, and constraint-based analysis was utilized to characterize and evaluate critical pathways and reactions in G. metallireducens. The novel capability of G. metallireducens to grow autotrophically with formate and Fe(III) was predicted and subsequently validated in vivo. Additionally, the energetic cost of transferring electrons to an external electron acceptor was determined through analysis of growth experiments carried out using three different electron acceptors (Fe(III), nitrate, and fumarate) by systematically isolating and examining different parts of the electron transport chain. The updated reconstruction will serve as a knowledgebase for understanding and engineering Geobacter and similar species. Author Summary: The ability of microorganisms to exchange electrons directly with their environment has large implications for our knowledge of industrial and environmental processes. For decades, it has been known that microbes can use electrodes as electron acceptors in microbial fuel cell settings. Geobacter metallireducens has been one of the model organisms for characterizing microbe-electrode interactions as well as environmental processes such as bioremediation. Here, we significantly expand the knowledge of metabolism and energetics of this model organism by employing constraint-based metabolic modeling. Through this analysis, we build the metabolic pathways necessary for carbon fixation, a desirable property for industrial chemical production. We
Energy Technology Data Exchange (ETDEWEB)
Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L. [Univ. of Washington, Seattle, WA (United States). Dept. of Computer Science and Engineering
1997-04-01
This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
Feist, Adam M; Nagarajan, Harish; Rotaru, Amelia-Elena; Tremblay, Pier-Luc; Zhang, Tian; Nevin, Kelly P; Lovley, Derek R; Zengler, Karsten
2014-04-01
Geobacter species are of great interest for environmental and biotechnology applications as they can carry out direct electron transfer to insoluble metals or other microorganisms and have the ability to assimilate inorganic carbon. Here, we report on the capability and key enabling metabolic machinery of Geobacter metallireducens GS-15 to carry out CO2 fixation and direct electron transfer to iron. An updated metabolic reconstruction was generated, growth screens on targeted conditions of interest were performed, and constraint-based analysis was utilized to characterize and evaluate critical pathways and reactions in G. metallireducens. The novel capability of G. metallireducens to grow autotrophically with formate and Fe(III) was predicted and subsequently validated in vivo. Additionally, the energetic cost of transferring electrons to an external electron acceptor was determined through analysis of growth experiments carried out using three different electron acceptors (Fe(III), nitrate, and fumarate) by systematically isolating and examining different parts of the electron transport chain. The updated reconstruction will serve as a knowledgebase for understanding and engineering Geobacter and similar species.
Directory of Open Access Journals (Sweden)
Adam M Feist
2014-04-01
Geobacter species are of great interest for environmental and biotechnology applications as they can carry out direct electron transfer to insoluble metals or other microorganisms and have the ability to assimilate inorganic carbon. Here, we report on the capability and key enabling metabolic machinery of Geobacter metallireducens GS-15 to carry out CO2 fixation and direct electron transfer to iron. An updated metabolic reconstruction was generated, growth screens on targeted conditions of interest were performed, and constraint-based analysis was utilized to characterize and evaluate critical pathways and reactions in G. metallireducens. The novel capability of G. metallireducens to grow autotrophically with formate and Fe(III) was predicted and subsequently validated in vivo. Additionally, the energetic cost of transferring electrons to an external electron acceptor was determined through analysis of growth experiments carried out using three different electron acceptors (Fe(III), nitrate, and fumarate) by systematically isolating and examining different parts of the electron transport chain. The updated reconstruction will serve as a knowledgebase for understanding and engineering Geobacter and similar species.
Directory of Open Access Journals (Sweden)
Normadiah Mahiddin
2011-10-01
Healthcare service providers, including those involved in renal disease management, are concerned about the planning of their patients' treatments. With efforts to automate the planning process, shortcomings are apparent due to the following reasons: (1) current plan representations or ontologies are too fine grained, and (2) current planning systems are often static. To address these issues, we introduce a planning system called the Dynamic Personalized Planner (DP Planner) which consists of: (1) a suitably light-weight and generic plan representation, and (2) a constraint-based dynamic planning engine. The plan representation is based on existing plan ontologies, and developed in XML. With the available plans, the planning engine focuses on personalizing pre-existing (or generic) plans that can be dynamically changed as the condition of the patient changes over time. To illustrate our dynamic personalized planning approach, we present an example in renal disease management. In a comparative study, we observed that the resulting DP Planner possesses features that rival those of other planning systems, in particular those of Asgaard and O-Plan.
Combined Density-based and Constraint-based Algorithm for Clustering
Institute of Scientific and Technical Information of China (English)
CHEN Tung-shou; CHEN Rong-chang; LIN Chih-chiang; CHIU Yung-hsing
2006-01-01
We propose a new clustering algorithm that helps researchers analyze data quickly and accurately. We call this algorithm the Combined Density-based and Constraint-based Algorithm (CDC). CDC consists of two phases. In the first phase, CDC employs the idea of density-based clustering to split the original data into a number of fragmented clusters; at the same time, CDC cuts off noise and outliers. In the second phase, CDC employs the concept of the K-means clustering algorithm to select a greater cluster as the center. Then the greater cluster merges smaller clusters which satisfy certain constraint rules. Because the merged clusters lie around the center cluster, the clustering results show high accuracy. Moreover, CDC reduces the calculations and speeds up the clustering process. In this paper, the accuracy of CDC is evaluated and compared with those of K-means, hierarchical clustering, and the genetic clustering algorithm (GCA) proposed in 2004. Experimental results show that CDC has better performance.
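A toy two-phase implementation in the spirit of CDC (the exact density and merge rules here are assumptions, not the paper's) looks like this:

```python
import numpy as np

# Toy two-phase sketch: phase 1 splits the data into dense fragments and
# discards sparse noise points; phase 2 merges each small fragment into the
# nearest large fragment's center. Thresholds are illustrative.

def cdc_like(X, eps=1.0, min_pts=3, small=3):
    X = np.asarray(X, float)
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    dense = (d < eps).sum(axis=1) >= min_pts         # phase 1: density filter (counts self)
    labels = -np.ones(n, int)                        # -1 = noise
    cur = 0
    for i in range(n):                               # flood-fill dense fragments
        if dense[i] and labels[i] == -1:
            stack = [i]
            labels[i] = cur
            while stack:
                j = stack.pop()
                for k in np.where((d[j] < eps) & dense)[0]:
                    if labels[k] == -1:
                        labels[k] = cur
                        stack.append(k)
            cur += 1
    # phase 2: merge small fragments into the nearest big fragment's center
    centers = {c: X[labels == c].mean(axis=0) for c in range(cur)}
    big = [c for c in centers if (labels == c).sum() > small]
    for c in list(centers):
        if c not in big and big:
            tgt = min(big, key=lambda b: np.linalg.norm(centers[c] - centers[b]))
            labels[labels == c] = tgt
    return labels

pts = [[0, 0], [0.3, 0], [0, 0.3], [0.2, 0.2], [0.4, 0.4], [0.1, 0.4],
       [5, 5], [5.3, 5], [5, 5.3], [5.2, 5.2], [20, 20]]
labels = cdc_like(pts)
print(labels)  # two clusters; the isolated point at (20, 20) stays noise
```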
Muscettola, Nicola; Smith, Steven S.
1996-01-01
This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.
Characterizing the metabolism of Dehalococcoides with a constraint-based model.
Directory of Open Access Journals (Sweden)
M Ahsanul Islam
Dehalococcoides strains respire a wide variety of chloro-organic compounds and are important for the bioremediation of toxic, persistent, carcinogenic, and ubiquitous ground water pollutants. In order to better understand their metabolism and optimize their application, we have developed a pan-genome-scale metabolic network and constraint-based metabolic model of Dehalococcoides. The pan-genome was constructed from publicly available complete genome sequences of Dehalococcoides sp. strain CBDB1, strain 195, strain BAV1, and strain VS. We found that the Dehalococcoides pan-genome consisted of 1118 core genes (shared by all), 457 dispensable genes (shared by some), and 486 unique genes (found in only one genome). The model included 549 metabolic genes that encoded 356 proteins catalyzing 497 gene-associated model reactions. Of these 497 reactions, 477 were associated with core metabolic genes, 18 with dispensable genes, and 2 with unique genes. This study, in addition to analyzing the metabolism of an environmentally important phylogenetic group on a pan-genome scale, provides valuable insights into Dehalococcoides metabolic limitations, low growth yields, and energy conservation. The model also provides a framework to anchor and compare disparate experimental data, as well as to give insights on the physiological impact of "incomplete" pathways, such as the TCA cycle, CO2 fixation, and cobalamin biosynthesis pathways. The model, referred to as iAI549, highlights the specialized and highly conserved nature of Dehalococcoides metabolism, and suggests that evolution of Dehalococcoides species is driven by electron acceptor availability.
Directory of Open Access Journals (Sweden)
Payal Singhal
2013-01-01
Full Text Available Grid computing is a collection of distributed resources interconnected by networks that provides a unified virtual computing resource view to the user. An important responsibility in grid computing is resource management, with techniques that allow the user to minimize job completion time and achieve good throughput. Designing an efficient scheduler and implementing it is a significant challenge. In this paper, a constraint-based job and resource scheduling algorithm is proposed. Four constraints are taken into account for grouping the jobs: resource memory, job memory, job MI, and L2 cache. Our implementation reduces processing time by adding the fourth constraint, the L2 cache of the resource, before jobs are allocated to resources for parallel computing. The L2 cache is part of a computer's processor; it is a small, extremely fast memory that increases the performance of the computer. Using more constraints on the resource and the job can increase efficiency further. The work has been done in MATLAB using the Parallel Computing Toolbox. All the constraints are calculated using different functions in MATLAB, and jobs are allocated to resources accordingly. Resource memory, cache, job memory size, and job MI are the key factors for grouping jobs according to the available capability of the selected resource. Processing time is taken into account to analyze the feasibility of the algorithms.
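A greedy version of the grouping idea can be sketched as follows. The job and resource attributes (MI, memory, MIPS, granularity, cache) are hypothetical stand-ins chosen for illustration; this is not the paper's MATLAB implementation:

```python
# Sketch: group jobs so that each group's total MI and memory fit within a
# resource's capacity. All numbers are illustrative, not from the paper.
jobs = [  # (job_id, length_in_MI, memory_MB)
    ("j1", 400, 64), ("j2", 300, 32), ("j3", 500, 128),
    ("j4", 200, 16), ("j5", 600, 64),
]

resource = {"mips": 500, "granularity": 2, "memory_MB": 256, "l2_cache_MB": 4}

# Assume a resource can absorb granularity * MIPS worth of MI per job group.
mi_capacity = resource["mips"] * resource["granularity"]   # 1000 MI
mem_capacity = resource["memory_MB"]

groups, current, mi_used, mem_used = [], [], 0, 0
for job_id, mi, mem in jobs:
    if mi_used + mi > mi_capacity or mem_used + mem > mem_capacity:
        groups.append(current)          # close the current group
        current, mi_used, mem_used = [], 0, 0
    current.append(job_id)
    mi_used += mi
    mem_used += mem
if current:
    groups.append(current)

print(groups)
```

Each group is then dispatched to the resource as one batch, which is the mechanism by which grouping reduces scheduling overhead.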
Directory of Open Access Journals (Sweden)
Cotten Cameron
2013-01-01
Full Text Available Abstract Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the
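As a much-simplified illustration of kinetic parameter estimation (not the paper's multi-omic formulation), Michaelis-Menten parameters can be recovered from paired substrate/flux observations via the Lineweaver-Burk linearization. The data below are synthetic and noise-free, generated from known "true" parameters:

```python
# Estimate Michaelis-Menten Vmax and Km from (substrate, flux) pairs by
# ordinary least squares on the linearization 1/v = 1/Vmax + (Km/Vmax)/s.
true_vmax, true_km = 10.0, 2.0
substrate = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
flux = [true_vmax * s / (true_km + s) for s in substrate]  # noise-free toy data

x = [1.0 / s for s in substrate]   # 1/s
y = [1.0 / v for v in flux]        # 1/v

# Closed-form least squares for y = a + b*x.
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

vmax_est = 1.0 / a      # intercept gives 1/Vmax
km_est = b * vmax_est   # slope gives Km/Vmax
print(vmax_est, km_est)
```

With real, noisy measurements the estimates come with confidence intervals, and the paper's point is that model simplification is needed to narrow them.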
Lentz, Tomas O; Kager, René W J
2015-09-01
Probabilistic phonotactic knowledge facilitates perception, but categorical phonotactic illegality can cause misperceptions, especially of non-native phoneme combinations. If misperceptions induced by first language (L1) knowledge filter second language input, access to second language (L2) probabilistic phonotactics is potentially blocked for L2 acquisition. The facilitatory effects of L2 probabilistic phonotactics and categorical filtering effects of L1 phonotactics were compared and contrasted in a series of cross-modal priming experiments. Dutch native listeners and L1 Spanish and Japanese learners of Dutch had to perform a lexical decision task on Dutch words that started with /sC/ clusters that were of different degrees of probabilistic wellformedness in Dutch but illegal in Spanish and Japanese. Versions of target words with Spanish illegality resolving epenthesis in the clusters primed the Spanish group, showing an L1 filter; a similar effect was not found for the Japanese group. In addition, words with wellformed /sC/ clusters were recognised faster, showing a positive effect on processing of probabilistic wellformedness. However, Spanish learners with higher proficiency were facilitated to a greater extent by wellformed but epenthesised clusters, showing that although probabilistic learning occurs in spite of the L1 filter, the acquired probabilistic knowledge is still affected by L1 categorical knowledge. Categorical phonotactic and probabilistic knowledge are of a different nature and interact in acquisition.
Probabilistic aspects of Wigner function
Usenko, C V
2004-01-01
The Wigner function of quantum systems is an effective instrument for constructing an approximate classical description of systems for which a classical approximation is possible. More recently, the Wigner function formalism has also been applied to seek indications of specifically quantum properties that make the construction of a classical approximation impossible. The indication most often used is the existence of negative values of the Wigner function for specific states of the quantum system under study. The existence of such values itself calls the probabilistic interpretation of the Wigner function into question, even though for an arbitrary observable depending jointly on the coordinate and the momentum of the quantum system the Wigner function remains an effective instrument for calculating the average value and other statistical characteristics. In this paper a probabilistic interpretation of the Wigner function based on coordination of the theoretical-probabilistic definition of the ...
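The negativity in question is easy to verify numerically for the first excited harmonic-oscillator state, assuming the standard convention W(x,p) = (1/pi) * integral of psi*(x+y) psi(x-y) exp(2ipy) dy with hbar = 1 (an assumption about the paper's convention):

```python
# Numerical check that the Wigner function of the n = 1 oscillator state is
# negative at the phase-space origin, where W(0,0) = (1/pi) * int psi(y) psi(-y) dy.
import math

def psi1(x):
    # Normalized n = 1 eigenstate: sqrt(2/sqrt(pi)) * x * exp(-x^2/2).
    return math.sqrt(2.0 / math.sqrt(math.pi)) * x * math.exp(-x * x / 2.0)

def wigner_origin(n_steps=4000, y_max=8.0):
    # Trapezoid rule; the integrand decays like exp(-y^2), so [-8, 8] suffices.
    h = 2.0 * y_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        y = -y_max + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * psi1(y) * psi1(-y)
    return total * h / math.pi

w00 = wigner_origin()
print(w00)  # close to -1/pi, since psi1 is odd and normalized
```

Because psi1 is odd, psi1(y)psi1(-y) = -psi1(y)^2 and the integral is exactly -1, so W(0,0) = -1/pi: a legitimate "probability density" could never take this value.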
Implications of probabilistic risk assessment
Energy Technology Data Exchange (ETDEWEB)
Cullingford, M.C.; Shah, S.M.; Gittus, J.H. (eds.)
1987-01-01
Probabilistic risk assessment (PRA) is an analytical process that quantifies the likelihoods, consequences and associated uncertainties of the potential outcomes of postulated events. Starting with planned or normal operation, probabilistic risk assessment covers a wide range of potential accidents and considers the whole plant and the interactions of systems and human actions. Probabilistic risk assessment can be applied in safety decisions in design, licensing and operation of industrial facilities, particularly nuclear power plants. The proceedings include a review of PRA procedures, methods and technical issues in treating uncertainties, operating and licensing issues and future trends. Risk assessments for specific reactor types or components and specific risks (e.g., aircraft crashing onto a reactor) are used to illustrate the points raised. All 52 articles are indexed separately. (U.K.).
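The quantification step can be illustrated with a minimal event tree: an initiating-event frequency propagates through branch probabilities, and each end state gets the product along its path. Every number below is hypothetical, not from any real plant study:

```python
# Toy event-tree sketch of the PRA idea: end-state frequency =
# initiator frequency * product of branch probabilities along the path.
initiator_freq = 1e-2      # initiating events per year (hypothetical)
p_system_fails = 1e-3      # safety system failure on demand (hypothetical)
p_operator_fails = 1e-2    # operator recovery fails, given system failure

end_states = {
    "ok":          initiator_freq * (1 - p_system_fails),
    "recovered":   initiator_freq * p_system_fails * (1 - p_operator_fails),
    "core_damage": initiator_freq * p_system_fails * p_operator_fails,
}

total = sum(end_states.values())
print(end_states["core_damage"], total)
```

The end-state frequencies necessarily sum back to the initiator frequency, which is a useful consistency check in larger trees.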
Quantum probabilistically cloning and computation
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this article we review the usefulness of probabilistic cloning and present examples of quantum computation tasks for which quantum cloning offers an advantage that cannot be matched by any approach that does not resort to it. In these quantum computations, one needs to distribute quantum information contained in states about which we have some partial information. To perform quantum computations, one uses a state-dependent probabilistic quantum cloning procedure to distribute quantum information in the middle of a quantum computation. We also discuss the achievable efficiencies and the efficient quantum logic network for probabilistically cloning the quantum states used in implementing quantum computation tasks for which cloning provides an enhancement in performance.
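A concrete number behind "achievable efficiencies": by the known Duan-Guo result, two nonorthogonal states can each be cloned probabilistically with success probability at most 1/(1 + |<psi0|psi1>|). A quick check for a pair of real qubit states (the angle pi/8 is an arbitrary illustrative choice):

```python
# Duan-Guo success bound for probabilistic (exact) cloning of two states.
import math

def overlap(a, b):
    """|<a|b>| for real 2-component unit vectors."""
    return abs(a[0] * b[0] + a[1] * b[1])

theta = math.pi / 8
psi0 = (1.0, 0.0)
psi1 = (math.cos(theta), math.sin(theta))

s = overlap(psi0, psi1)        # cos(pi/8), about 0.924
p_max = 1.0 / (1.0 + s)        # maximal cloning success probability
print(s, p_max)
```

For orthogonal states the overlap is 0 and p_max = 1, recovering the fact that known-basis states can be copied deterministically.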
Casey, J.; Ji, B.; Shaoie, S.; Mardinoglu, A.; Sarathi Sen, P.; Jahn, O.; Reda, K.; Leigh, J.; Follows, M. J.; Nielsen, J.; Karl, D. M.
2016-02-01
Representatives of the oligotrophic marine cyanobacterium Prochlorococcus marinus are the smallest free-living photosynthetic organisms, in terms of both physical size and genome size, yet they are the most abundant photoautotrophic microbes in the oceans and profoundly influence global biogeochemical cycles. Physiological and regulatory control of nutrient and light stress has been observed in MED4 in culture and in its closely related 'ecotype' eMED4 in the field; however, its metabolism has not been investigated in detail. We present a genome-scale metabolic network reconstruction of the high-light-adapted axenic strain MED4ax ("iJCMED4") for the quantitative analysis of a range of its metabolic phenotypes. The resulting structure is a proving ground for the incorporation of enzyme kinetics, biochemical and elemental compositional data, and transcriptomic, proteomic, metabolomic, and fluxomic datasets, which can be implemented within a constraint-based metabolic modeling environment. The iJCMED4 stoichiometric model consists of 523 metabolic genes encoding 787 reactions with 673 unique metabolites distributed in 5 sub-cellular compartments, and is mass, charge, and thermodynamically balanced. Several variants of flux balance analysis were used to simulate growth and metabolic fluxes over the diel cycle, under various stress conditions (e.g., nitrogen, phosphorus, light), and within the framework of a global biogeochemical model (DARWIN). Model simulations accurately predicted growth rates in culture under a variety of defined medium compositions, and there was close agreement between iJCMED4 and culture experiments in photosynthetic performance, biomass and energy yields and efficiencies, and transporter fluxes. In addition to a nearly optimal photosynthetic quotient and central carbon metabolism efficiency, MED4 has made dramatic alterations to redox and phosphorus metabolism across biosynthetic and intermediate pathways. We propose that reductions in phosphate reaction
The Computational Development of Reinforcement Learning during Adolescence
National Research Council Canada - National Science Library
Palminteri, Stefano; Kilford, Emma J; Coricelli, Giorgio; Blakemore, Sarah-Jayne
2016-01-01
.... Adolescents and adults carried out a novel reinforcement learning paradigm in which participants learned the association between cues and probabilistic outcomes, where the outcomes differed in valence...
HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM
Directory of Open Access Journals (Sweden)
Narendran Rajagopalan
2012-01-01
Full Text Available Performance of Wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for the key functions like access control and flow control. During contention, Backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying different variations of back off algorithms that have been proposed, a new variant called History based Probabilistic Backoff Algorithm is proposed. Through mathematical analysis and simulation results using NS-2, it is seen that proposed History based Probabilistic Backoff algorithm performs better than Binary Exponential Backoff algorithm.
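The baseline against which the proposed algorithm is compared, binary exponential backoff, is easy to sketch. CW_MIN and CW_MAX below follow common 802.11 values but are illustrative; the paper's history-based probabilistic variant, which adapts the window using past outcomes, is not reproduced here:

```python
# Binary exponential backoff (BEB): the contention window doubles after each
# consecutive collision, up to a ceiling, and a uniform backoff slot count is
# drawn from the current window.
import random

CW_MIN, CW_MAX = 16, 1024

def beb_window(collisions):
    """Contention window after the given number of consecutive collisions."""
    return min(CW_MIN * (2 ** collisions), CW_MAX)

def backoff_slots(collisions, rng):
    """Draw a uniform backoff count in [0, window - 1]."""
    return rng.randrange(beb_window(collisions))

windows = [beb_window(k) for k in range(8)]
print(windows)  # [16, 32, 64, 128, 256, 512, 1024, 1024]
```

The history-based idea is to replace the blind doubling with a window update informed by the station's past collision/success record, reducing both collisions and wasted idle slots.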
Probabilistic Design of Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard
, new and more refined design methods must be developed. These methods can for instance be developed using probabilistic design where the uncertainties in all phases of the design life are taken into account. The main aim of the present thesis is to develop models for probabilistic design of wind....... The uncertainty related to the existing methods for estimating the loads during operation is assessed by applying these methods to a case where the load response is assumed to be Gaussian. In this case an approximate analytical solution exists for a statistical description of the extreme load response. In general...
Probabilistic methods in combinatorial analysis
Sachkov, Vladimir N
2014-01-01
This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
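The lecture's central Markovian fact, that Poisson counts and exponential waiting times are two views of the same process, can be checked in a few lines:

```python
# For a Poisson process with rate lam, the first-arrival waiting time T
# exceeds t exactly when no events fall in [0, t], so
# P(T > t) = P(N(t) = 0) = exp(-lam * t).
import math

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean ** k / math.factorial(k)

lam, t = 2.0, 1.5
p_no_event = poisson_pmf(0, lam * t)     # P(N(t) = 0)
p_wait_exceeds = math.exp(-lam * t)      # P(T > t), exponential tail
print(p_no_event, p_wait_exceeds)

# Sanity check: the Poisson pmf sums to 1 over a generous range of k.
total = sum(poisson_pmf(k, lam * t) for k in range(50))
print(total)
```

The same identity is what lets one move freely between "counts per interval" and "waiting between arrivals" in the biomedical examples.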
Probabilistic Approach to Rough Set Theory
Institute of Scientific and Technical Information of China (English)
Wojciech Ziarko
2006-01-01
The presentation introduces the basic ideas and investigates the probabilistic approach to rough set theory. The major aspects of the probabilistic approach to rough set theory to be explored during the presentation are: the probabilistic view of the approximation space, the probabilistic approximations of sets, as expressed via variable precision and Bayesian rough set models, and probabilistic dependencies between sets and multi-valued attributes, as expressed by the absolute certainty gain and expected certainty gain measures, respectively. The probabilistic dependency measures allow for representation of subtle stochastic associations between attributes. They also allow for more comprehensive evaluation of rules computed from data and for computation of attribute reduct, core and significance factors in probabilistic decision tables. It will be shown that the probabilistic dependency measure-based attribute reduction techniques are also extendible to hierarchies of decision tables. The presentation will include computational examples to illustrate presented concepts and to indicate possible practical applications.
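One common formulation of the variable-precision approximations can be sketched directly from the definitions: a class of the approximation space enters the beta-lower approximation of a target set X when P(X | class) >= beta, and the upper approximation when P(X | class) > 1 - beta. The universe, classes, target set, and beta = 0.75 below are all toy choices:

```python
# Variable-precision rough approximations over toy equivalence classes.
classes = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10}]
X = {1, 2, 3, 5, 6, 9, 10}
beta = 0.75

def conditional(X, E):
    """P(X | E) estimated as the fraction of E's elements that lie in X."""
    return len(X & E) / len(E)

lower = set().union(*(E for E in classes if conditional(X, E) >= beta))
upper = set().union(*(E for E in classes if conditional(X, E) > 1 - beta))

print(sorted(lower), sorted(upper))
```

Classes with conditional probability strictly between the two thresholds form the boundary region; here the middle class (P = 0.5) is exactly such a case.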
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs
DEFF Research Database (Denmark)
Mørk, Søren; Holmes, Ian
2012-01-01
, a probabilistic dialect of Prolog. Results: We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length......Motivation: Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM...... modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our...
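The kind of model structure being benchmarked can be illustrated with a toy two-state HMM (coding vs. noncoding) and Viterbi decoding; this Python sketch stands in for the PRISM programs, and all probabilities are made up rather than trained:

```python
# Toy gene-finding HMM: states C (coding) and N (noncoding), with emissions
# mildly favoring G/C in coding regions. Viterbi finds the best state path.
import math

states = ("C", "N")
start = {"C": 0.5, "N": 0.5}
trans = {"C": {"C": 0.9, "N": 0.1}, "N": {"C": 0.1, "N": 0.9}}
emit = {
    "C": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
    "N": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3},
}

def viterbi(seq):
    # Log-space dynamic programming with backpointers.
    v = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for x in seq[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            row[s] = v[-1][best] + math.log(trans[best][s]) + math.log(emit[s][x])
            ptr[s] = best
        v.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return "".join(reversed(path))

path = viterbi("GCGCGCATATAT")
print(path)
```

The sticky transitions (0.9 self-loops) make the decoder segment the sequence rather than flip state at every symbol, which is the essential behavior a gene-finder structure encodes.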
Probabilistic Logic Programming under Answer Sets Semantics
Institute of Scientific and Technical Information of China (English)
王洁; 鞠实儿
2003-01-01
Although traditional logic programming languages provide powerful tools for knowledge representation, they cannot deal with uncertain information (e.g., probabilistic information). In this paper, we propose a probabilistic logic programming language by introducing probability into a general logic programming language. The work combines 4-valued logic with probability. Conditional probability can be easily represented in a probabilistic logic program. The semantics of such a probabilistic logic program i...
Hierarchical probabilistic inference of cosmic shear
Schneider, Michael D; Marshall, Philip J; Dawson, William A; Meyers, Joshua; Bard, Deborah J; Lang, Dustin
2014-01-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the glo...
Probabilistic recognition of human faces from video
DEFF Research Database (Denmark)
Zhou, Saohua; Krüger, Volker; Chellappa, Rama
2003-01-01
Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal...... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video...... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy...
Ignorability in Statistical and Probabilistic Inference
Jaeger, M
2011-01-01
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed by maintaining this proper distinction are often prohibitive, one asks for conditions under which it can be safely ignored. Such conditions are given by the missing at random (mar) and coarsened at random (car) assumptions. In this paper we provide an in-depth analysis of several questions relating to mar/car assumptions. The main purpose of our study is to provide criteria by which one may evaluate whether a car assumption is reasonable for a particular data collecting or observational process. This question is complicated by the fact that several distinct versions of mar/car assumptions exist. We therefore first provide an overview of these different versions, in which we highlight the distinction between distributional an...
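The stakes of the ignorability question can be made concrete with a two-value toy example: when missingness depends on the unobserved value itself, the complete-case mean is biased; when it is independent of the value (an MCAR special case of mar), ignoring the mechanism is safe. All numbers below are illustrative:

```python
# Expected complete-case mean under two observation mechanisms for a value
# that is 0 or 1 with probability 1/2 each (true mean 0.5).
values = [0, 1]

# Mechanism A (ignorable): every value observed with the same probability.
obs_prob_a = {0: 0.5, 1: 0.5}
# Mechanism B (non-ignorable): ones are observed far less often.
obs_prob_b = {0: 0.8, 1: 0.2}

def complete_case_mean(obs_prob):
    # E[value | observed] = sum_v P(v) P(obs | v) v / sum_v P(v) P(obs | v)
    weight = sum(0.5 * obs_prob[v] for v in values)
    return sum(0.5 * obs_prob[v] * v for v in values) / weight

print(complete_case_mean(obs_prob_a), complete_case_mean(obs_prob_b))
```

Mechanism A returns the true mean 0.5, while mechanism B returns 0.2: inference that ignores mechanism B's missingness process is badly biased, which is exactly what mar/car criteria are meant to rule out.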
A PLUG AND PLAY ARCHITECTURE FOR PROBABILISTIC PROGRAMMING
2017-04-01
Language (PPL). Machine-learning solutions are often described in this way: a probabilistic story describes how data is generated (the model), and...first was the design and development of the theoretical foundations. We have published a paper [1] that describes our proposal in the International...instances are the ones mentioned here by name (we could name only some of the houses). */ sort House: 4, maryhouse, johnhouse, cathyhouse
Reasoning with probabilistic and deterministic graphical models exact algorithms
Dechter, Rina
2013-01-01
Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well
A Probabilistic Ontology Development Methodology
2014-06-01
Model-Based Systems Engineering (MBSE) Methodologies," Seattle, 2008. [17] Jeffrey O. Grady, System Requirements Analysis. New York: McGraw-Hill, Inc...software. [Online]. http://www.norsys.com/index.html [26] Lise Getoor, Nir Friedman, Daphne Koller, Avi Pfeffer, and Ben Taskar, "Probabilistic
Probabilistic aspects of ocean waves
Battjes, J.A.
1977-01-01
Background material for a special lecture on probabilistic aspects of ocean waves for a seminar in Trondheim. It describes long term statistics and short term statistics. Statistical distributions of waves, directional spectra and frequency spectra. Sea state parameters, response peaks, encounter
Sound Probabilistic #SAT with Projection
Directory of Open Access Journals (Sweden)
Vladimir Klebanov
2016-10-01
Full Text Available We present an improved method for a sound probabilistic estimation of the model count of a boolean formula under projection. The problem solved can be used to encode a variety of quantitative program analyses, such as those concerning security or resource consumption. We implement the technique and discuss its application to quantifying information flow in programs.
Probabilistic localisation in repetitive environments
Vroegindeweij, Bastiaan A.; IJsselmuiden, Joris; Henten, van Eldert J.
2016-01-01
One of the problems in loose housing systems for laying hens is the laying of eggs on the floor, which need to be collected manually. In previous work, PoultryBot was presented to assist in this and other tasks. Here, probabilistic localisation with a particle filter is evaluated for use inside p
Probabilistic Structured Predictors
Vembu, Shankar; Boley, Mario
2012-01-01
We consider MAP estimators for structured prediction with exponential family models. In particular, we concentrate on the case that efficient algorithms for uniform sampling from the output space exist. We show that under this assumption (i) exact computation of the partition function remains a hard problem, and (ii) the partition function and the gradient of the log partition function can be approximated efficiently. Our main result is an approximation scheme for the partition function based on Markov Chain Monte Carlo theory. We also show that the efficient uniform sampling assumption holds in several application settings that are of importance in machine learning.
2009-11-17
probability information for each activity. [Fritz and McIlraith, 2009] also considers probabilities analytically, in part, while calculating the robustness of...citeseer.ist.psu.edu/dearden03incremental.html. 2.1.2 Rina Dechter, Itay Meiri, and Judea Pearl. Temporal constraint networks. Artificial Intelligence, 49:61–95...on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. 3.4 Christian Fritz and Sheila McIlraith
Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox v2.0
2011-01-01
Over the past decade, a growing community of researchers has emerged around the use of COnstraint-Based Reconstruction and Analysis (COBRA) methods to simulate, analyze and predict a variety of metabolic phenotypes using genome-scale models. The COBRA Toolbox, a MATLAB package for implementing COBRA methods, was presented earlier. Here we present a significant update of this in silico toolbox. Version 2.0 of the COBRA Toolbox expands the scope of computations by including in silico analysis m...
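At its core, the flux balance analysis that COBRA methods build on is a linear program: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below is an illustrative Python re-creation (the Toolbox itself is MATLAB) on a made-up three-metabolite network, assuming SciPy is available:

```python
# Minimal FBA sketch: uptake v1 feeds metabolite A, which splits into two
# branches v2 (A -> B) and v3 (A -> C); biomass v4 consumes B and C 1:1.
# Maximize v4 subject to S v = 0, 0 <= v1 <= 10, all other fluxes >= 0.
from scipy.optimize import linprog

# Rows: internal metabolites A, B, C; columns: reactions v1..v4.
S = [
    [1, -1, -1,  0],   # A: produced by uptake, consumed by both branches
    [0,  1,  0, -1],   # B: produced by branch 1, consumed by biomass
    [0,  0,  1, -1],   # C: produced by branch 2, consumed by biomass
]
bounds = [(0, 10), (0, None), (0, None), (0, None)]
c = [0, 0, 0, -1]      # linprog minimizes, so negate the biomass flux v4

res = linprog(c, A_eq=S, b_eq=[0, 0, 0], bounds=bounds, method="highs")
print(res.x, -res.fun)  # optimal biomass flux is 5.0
```

Because biomass needs one unit of each branch product, v4 = v2 = v3 and v1 = 2 v4, so the uptake cap of 10 limits biomass to 5: the same capacity reasoning scales to genome-size stoichiometric matrices.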
A probabilistic Hu-Washizu variational principle
Liu, W. K.; Belytschko, T.; Besterfield, G. H.
1987-01-01
A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.
Hsu, Anne S.; Chater, Nick; Vitanyi, Paul M. B.
2011-01-01
There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact "generative model"…
Model Checking with Probabilistic Tabled Logic Programming
Gorlin, Andrey; Smolka, Scott A
2012-01-01
We present a formulation of the problem of probabilistic model checking as one of query evaluation over probabilistic logic programs. To the best of our knowledge, our formulation is the first of its kind, and it covers a rich class of probabilistic models and probabilistic temporal logics. The inference algorithms of existing probabilistic logic-programming systems are well defined only for queries with a finite number of explanations. This restriction prohibits the encoding of probabilistic model checkers, where explanations correspond to executions of the system being model checked. To overcome this restriction, we propose a more general inference algorithm that uses finite generative structures (similar to automata) to represent families of explanations. The inference algorithm computes the probability of a possibly infinite set of explanations directly from the finite generative structure. We have implemented our inference algorithm in XSB Prolog, and use this implementation to encode probabilistic model...
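The reachability probabilities at the heart of probabilistic model checking satisfy a simple fixed-point equation, p(s) = sum over successors t of P(s, t) p(t) with p(goal) = 1. A minimal value-iteration sketch on a toy discrete-time Markov chain (not the paper's tabled-logic machinery):

```python
# Probability of eventually reaching "goal" in a toy Markov chain, computed
# by iterating the fixed-point equation from p = 0 (p(goal) pinned to 1).
P = {
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"goal": 0.5, "s0": 0.5},
    "s2": {"s2": 1.0},            # a sink that never reaches the goal
    "goal": {"goal": 1.0},
}

p = {s: (1.0 if s == "goal" else 0.0) for s in P}
for _ in range(200):  # value iteration to (near) convergence
    p = {s: (1.0 if s == "goal" else
             sum(pr * p[t] for t, pr in P[s].items())) for s in P}

print(p["s0"])  # converges to 1/3
```

Solving the linear system by hand confirms it: p(s0) = 0.5 p(s1) and p(s1) = 0.5 + 0.5 p(s0) give p(s0) = 1/3. Infinite families of explanations (arbitrarily long s0/s1 loops) are exactly what the paper's finite generative structures summarize.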
Probabilistic Design of Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard
, new and more refined design methods must be developed. These methods can for instance be developed using probabilistic design where the uncertainties in all phases of the design life are taken into account. The main aim of the present thesis is to develop models for probabilistic design of wind......, the uncertainty is dependent on the method used for load extrapolation, the number of simulations and the distribution fitted to the extracted peaks. Another approach for estimating the uncertainty on the estimated load effects during operation is to use field measurements. A new method for load extrapolation......, which is based on average conditional exceedence rates, is applied to wind turbine response. The advantage of this method is that it can handle dependence in the response and use exceedence rates instead of extracted peaks which normally are more stable. The results show that the method estimates...
Probabilistic Design of Wind Turbines
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Toft, H.S.
2010-01-01
Probabilistic design of wind turbines requires definition of the structural elements to be included in the probabilistic basis: e.g., blades, tower, foundation; identification of important failure modes; careful stochastic modeling of the uncertain parameters; recommendations for target reliability....... It is described how uncertainties in wind turbine design related to computational models, statistical data from test specimens, results from a few full-scale tests and from prototype wind turbines can be accounted for using the Maximum Likelihood Method and a Bayesian approach. Assessment of the optimal...... reliability level by cost-benefit optimization is illustrated by an offshore wind turbine example. Uncertainty modeling is illustrated by an example where physical, statistical and model uncertainties are estimated....
Probabilistic Design of Wind Turbines
Directory of Open Access Journals (Sweden)
Henrik S. Toft
2010-02-01
Full Text Available Probabilistic design of wind turbines requires definition of the structural elements to be included in the probabilistic basis: e.g., blades, tower, foundation; identification of important failure modes; careful stochastic modeling of the uncertain parameters; recommendations for target reliability levels and recommendation for consideration of system aspects. The uncertainties are characterized as aleatoric (physical uncertainty) or epistemic (statistical, measurement and model uncertainties). Methods for uncertainty modeling consistent with methods for estimating the reliability are described. It is described how uncertainties in wind turbine design related to computational models, statistical data from test specimens, results from a few full-scale tests and from prototype wind turbines can be accounted for using the Maximum Likelihood Method and a Bayesian approach. Assessment of the optimal reliability level by cost-benefit optimization is illustrated by an offshore wind turbine example. Uncertainty modeling is illustrated by an example where physical, statistical and model uncertainties are estimated.
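The reliability computation underlying such probabilistic design can be sketched with a simple load-resistance limit state: failure occurs when load S exceeds resistance R, and for independent normals the failure probability has the closed form Phi(-beta) with beta = (mu_R - mu_S)/sqrt(sigma_R^2 + sigma_S^2), which a Monte Carlo estimate can be checked against. The distribution parameters are illustrative only:

```python
# Monte Carlo estimate of P(R - S < 0) versus the exact normal-theory answer.
import math
import random

mu_r, sd_r = 10.0, 1.0     # resistance (illustrative)
mu_s, sd_s = 7.0, 1.5      # load effect (illustrative)

beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)     # reliability index
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0)) # Phi(-beta)

rng = random.Random(42)
n = 200_000
failures = sum(
    rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0 for _ in range(n)
)
pf_mc = failures / n
print(beta, pf_exact, pf_mc)
```

In a real design study R and S would be replaced by full stochastic models of material strength and wind load, but the target-reliability logic (choose design so that beta exceeds a target) is the same.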
Modified Claus process probabilistic model
Energy Technology Data Exchange (ETDEWEB)
Larraz Mora, R. [Chemical Engineering Dept., Univ. of La Laguna (Spain)
2006-03-15
A model is proposed for the simulation of an industrial Claus unit with a straight-through configuration and two catalytic reactors. Process plant design evaluations based on deterministic calculations do not take into account the uncertainties that are associated with the different input variables. A probabilistic simulation method was applied in the Claus model to obtain an impression of how some of these inaccuracies influence plant performance. (orig.)
Probabilistic Cloning and Quantum Computation
Institute of Scientific and Technical Information of China (English)
GAO Ting; YAN Feng-Li; WANG Zhi-Xi
2004-01-01
We discuss the usefulness of quantum cloning and present examples of quantum computation tasks for which cloning offers an advantage which cannot be matched by any approach that does not resort to quantum cloning. In these quantum computations, we need to distribute quantum information contained in states about which we have some partial information. To perform quantum computations, we use a state-dependent probabilistic quantum cloning procedure to distribute quantum information in the middle of a quantum computation.
Probabilistic analysis and related topics
Bharucha-Reid, A T
1983-01-01
Probabilistic Analysis and Related Topics, Volume 3 focuses on the continuity, integrability, and differentiability of random functions, including operator theory, measure theory, and functional and numerical analysis. The selection first offers information on the qualitative theory of stochastic systems and Langevin equations with multiplicative noise. Discussions focus on phase-space evolution via direct integration, phase-space evolution, linear and nonlinear systems, linearization, and generalizations. The text then ponders on the stability theory of stochastic difference systems and Marko
Probabilistic methods for rotordynamics analysis
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
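The instability check described above comes down to inspecting eigenvalue real parts of the system in first-order form. A minimal Monte Carlo sketch of that idea (brute-force sampling stands in for the paper's fast probability integration and adaptive importance sampling; the function name, parameters and damping distribution are illustrative):

```python
import numpy as np

def instability_probability(m, k, c_mean, c_std, n_samples=20000, seed=0):
    """Monte Carlo estimate of P(instability) for m*x'' + c*x' + k*x = 0
    with uncertain (normally distributed) damping c.  The system is
    unstable when any eigenvalue of the companion matrix has a positive
    real part."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(c_mean, c_std, n_samples)
    unstable = 0
    for c in samples:
        A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # first-order form
        if np.real(np.linalg.eigvals(A)).max() > 0.0:
            unstable += 1
    return unstable / n_samples

# with c ~ N(2, 1), instability occurs exactly when c < 0: about 2.3%
p_fail = instability_probability(m=1.0, k=4.0, c_mean=2.0, c_std=1.0)
```

The methods in the paper target the same probability far more efficiently than this direct sampling; the sketch only shows what quantity is being estimated.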
Probabilistic analysis and related topics
Bharucha-Reid, A T
1979-01-01
Probabilistic Analysis and Related Topics, Volume 2 focuses on the integrability, continuity, and differentiability of random functions, as well as functional analysis, measure theory, operator theory, and numerical analysis. The selection first offers information on the optimal control of stochastic systems and Gleason measures. Discussions focus on convergence of Gleason measures, random Gleason measures, orthogonally scattered Gleason measures, existence of optimal controls without feedback, random necessary conditions, and Gleason measures in tensor products. The text then elaborates on an
Probabilistic interpretation of resonant states
Indian Academy of Sciences (India)
Naomichi Hatano; Tatsuro Kawamoto; Joshua Feinberg
2009-09-01
We provide probabilistic interpretation of resonant states. We do this by showing that the integral of the modulus square of resonance wave functions (i.e., the conventional norm) over a properly expanding spatial domain is independent of time, and therefore leads to probability conservation. This is in contrast with the conventional employment of a bi-orthogonal basis that precludes probabilistic interpretation, since wave functions of resonant states diverge exponentially in space. On the other hand, resonant states decay exponentially in time, because momentum leaks out of the central scattering area. This momentum leakage is also the reason for the spatial exponential divergence of resonant states. It is by combining the opposite temporal and spatial behaviours of resonant states that we arrive at our probabilistic interpretation of these states. The physical need to normalize resonant wave functions over an expanding spatial domain arises because particles leak out of the region which contains the potential range and escape to infinity, and one has to include them in the total count of particles.
Probabilistic hypergraph based hash codes for social image search
Institute of Scientific and Technical Information of China (English)
Yi XIE; Hui-min YU; Roland HU
2014-01-01
With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs, which ignore relationships of higher order. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher-order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.
Inherently stochastic spiking neurons for probabilistic neural computation
Al-Shedivat, Maruan
2015-04-01
Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
Fault Localization for Java Programs using Probabilistic Program Dependence Graph
Askarunisa, A; Babu, B Giri
2012-01-01
Fault localization is a process to find the location of faults: it determines the root cause of a failure, identifies the causes of abnormal behaviour of a faulty program, and pinpoints exactly where the bugs are. Existing fault localization techniques include slice-based, program-spectrum-based, statistics-based, program-state-based, machine-learning-based and similarity-based techniques. The proposed method uses a model-based fault localization technique called the Probabilistic Program Dependence Graph (PPDG), an innovative model that captures the internal behaviour of a program. PPDG construction builds on the Program Dependence Graph (PDG), which is in turn obtained from the Control Flow Graph (CFG). The PPDG construction augments the structural dependences represented by a program dependence graph with estimates of statistical dependences between node states, which are computed from the test set. The PPDG is base...
DEFF Research Database (Denmark)
Machado, Daniel; Herrgard, Markus; Rocha, Isabel
2016-01-01
Genome-scale metabolic reconstructions are currently available for hundreds of organisms. Constraint-based modeling enables the analysis of the phenotypic landscape of these organisms, predicting the response to genetic and environmental perturbations. However, since constraint-based models can...... level by explicitly accounting for the individual fluxes of enzymes (and subunits) encoded by each gene. We show how this can be applied to different kinds of constraint-based analysis: flux distribution prediction, gene essentiality analysis, random flux sampling, elementary mode analysis...... only describe the metabolic phenotype at the reaction level, understanding the mechanistic link between genotype and phenotype is still hampered by the complexity of gene-protein-reaction associations. We implement a model transformation that enables constraint-based methods to be applied at the gene...
Partial Planning Reinforcement Learning
2012-08-31
This project explored several problems in the areas of reinforcement learning, probabilistic planning, and transfer learning. In particular, it studied Bayesian Optimization for model-based and model-free reinforcement learning, and transfer in the context of model-free reinforcement learning based on
Novel Intrusion Detection using Probabilistic Neural Network and Adaptive Boosting
Tran, Tich Phuoc; Tran, Dat; Nguyen, Cuong Duc
2009-01-01
This article applies Machine Learning techniques to solve Intrusion Detection problems within computer networks. Due to the complex and dynamic nature of computer networks and hacking techniques, detecting malicious activities remains a challenging task for security experts; that is, currently available defense systems suffer from low detection capability and a high number of false alarms. To overcome such performance limitations, we propose a novel Machine Learning algorithm, namely Boosted Subspace Probabilistic Neural Network (BSPNN), which integrates an adaptive boosting technique and a semi-parametric neural network to obtain a good tradeoff between accuracy and generality. As a result, learning bias and generalization variance can be significantly minimized. Substantial experiments on the KDD 99 intrusion benchmark indicate that our model outperforms other state-of-the-art learning algorithms, with significantly improved detection accuracy, minimal false alarms and relatively small computational complexity.
Probabilistic Planning with Imperfect Sensing Actions Using Hybrid Probabilistic Logic Programs
Saad, Emad
Effective planning in uncertain environments is important to agents and multi-agent systems. In this paper, we introduce a new logic-based approach to probabilistic contingent planning (probabilistic planning with imperfect sensing actions), by relating probabilistic contingent planning to normal hybrid probabilistic logic programs with probabilistic answer set semantics [24]. We show that any probabilistic contingent planning problem can be encoded as a normal hybrid probabilistic logic program. We formally prove the correctness of our approach. Moreover, we show that the complexity of finding a probabilistic contingent plan in our approach is NP-complete. In addition, we show that any probabilistic contingent planning problem, PP, can be encoded as a classical normal logic program with answer set semantics, whose answer sets correspond to valid trajectories in PP. We show that probabilistic contingent planning problems can be encoded as SAT problems. We present a new high-level probabilistic action description language that allows the representation of sensing actions with probabilistic outcomes.
Probabilistic Flood Defence Assessment Tools
Directory of Open Access Journals (Sweden)
Slomp Robert
2016-01-01
institutions managing the flood defences, and not by just a small number of experts in probabilistic assessment. Therefore, data management and use of software are main issues that have been covered in courses and training in 2016 and 2017. All in all, this is the largest change in the assessment of Dutch flood defences since 1996. In 1996 probabilistic techniques were first introduced to determine hydraulic boundary conditions (water levels and waves (wave height, wave period and direction)) for different return periods. To simplify the process, the assessment continues to consist of a three-step approach, moving from simple decision rules, to the methods for semi-probabilistic assessment, and finally to a fully probabilistic analysis to compare the strength of flood defences with the hydraulic loads. The formal assessment results are thus mainly based on the fully probabilistic analysis and the ultimate limit state of the strength of a flood defence. For complex flood defences, additional models and software were developed. The current Hydra software suite (for policy analysis, formal flood defence assessment and design) will be replaced by the model Ringtoets. New stand-alone software has been developed for revetments, geotechnical analysis and slope stability of the foreshore. Design software and policy analysis software, including the Delta model, will be updated in 2018. A fully probabilistic method results in more precise assessments and more transparency in the process of assessment and reconstruction of flood defences. This is of increasing importance, as large-scale infrastructural projects in a highly urbanized environment are increasingly subject to political and societal pressure to add additional features. For this reason, it is of increasing importance to be able to determine which new feature really adds to flood protection, to quantify how much it adds to the level of flood protection and to evaluate whether it is really worthwhile.
Please note: The Netherlands
-Boundedness and -Compactness in Finite Dimensional Probabilistic Normed Spaces
Indian Academy of Sciences (India)
Reza Saadati; Massoud Amini
2005-11-01
In this paper, we prove that in a finite dimensional probabilistic normed space, every two probabilistic norms are equivalent and we study the notion of -compactness and -boundedness in probabilistic normed spaces.
Benaloh's Dense Probabilistic Encryption Revisited
Fousse, Laurent; Alnuaimi, Mohamed
2010-01-01
In 1994, Josh Benaloh proposed a probabilistic homomorphic encryption scheme, enhancing the poor expansion factor provided by Goldwasser and Micali's scheme. Since then, numerous papers have taken advantage of Benaloh's homomorphic encryption function, including voting schemes, non-interactive verifiable secret sharing, online poker... In this paper we show that the original description of the scheme is incorrect, possibly resulting in ambiguous decryption of ciphertexts. We give a corrected description of the scheme, provide a complete proof of correctness and an analysis of the probability of failure in the initial description.
Probabilistic Analysis of Crack Width
Directory of Open Access Journals (Sweden)
J. Marková
2000-01-01
Full Text Available Probabilistic analysis of the crack width of a reinforced concrete element is based on the formulas accepted in Eurocode 2 and European Model Code 90. The obtained values of the reliability index β seem to be satisfactory for the reinforced concrete slab that fulfils the requirements for crack width specified in Eurocode 2. However, the reliability of the slab seems to be insufficient when the European Model Code 90 is considered; the reliability index is less than the recommended value of 1.5 for serviceability limit states indicated in Eurocode 1. Analysis of the sensitivity factors of basic variables makes it possible to identify the variables that significantly affect the total crack width.
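The reliability index used in analyses like the one above can be illustrated with a toy linear limit state g = R - S (independent normal resistance R and load effect S); the crack-width numbers below are illustrative, not taken from the paper:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Reliability index beta for a linear limit state g = R - S with
    independent normal resistance R and load effect S."""
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    """P_f = Phi(-beta), via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# illustrative: crack-width limit 0.3 mm (deterministic), computed crack
# width normally distributed with mean 0.22 mm and std 0.04 mm
beta = reliability_index(0.3, 0.0, 0.22, 0.04)   # -> 2.0
```

A beta of 2.0 would exceed the 1.5 recommended for serviceability limit states; beta = 1.5 corresponds to a failure probability of about 6.7%.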
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
Why do probabilistic finite element analysis ?
Thacker, B H
2008-01-01
The intention of this book is to provide an introduction to performing probabilistic finite element analysis. As a short guideline, the objective is to inform the reader of the use, benefits and issues associated with performing probabilistic finite element analysis without excessive theory or mathematical detail.
Function Approximation Using Probabilistic Fuzzy Systems
J.H. van den Berg (Jan); U. Kaymak (Uzay); R.J. Almeida e Santos Nogueira (Rui Jorge)
2011-01-01
We consider function approximation by fuzzy systems. Fuzzy systems are typically used for approximating deterministic functions, in which the stochastic uncertainty is ignored. We propose probabilistic fuzzy systems in which the probabilistic nature of uncertainty is taken into account.
Probabilistic Remaining Useful Life Prediction of Composite Aircraft Components Project
National Aeronautics and Space Administration — A Probabilistic Fatigue Damage Assessment Network (PFDAN) toolkit for Abaqus will be developed for probabilistic life management of a laminated composite structure...
Semantics of sub-probabilistic programs
Institute of Scientific and Technical Information of China (English)
Yixing CHEN; Hengyang WU
2008-01-01
The aim of this paper is to extend the probabilistic choice in probabilistic programs to sub-probabilistic choice, i.e., of the form P p⊕q Q where p + q ≤ 1. It means that program P is executed with probability p and program Q is executed with probability q. Then, starting from an initial state, the execution of a sub-probabilistic program results in a sub-probability distribution. This paper presents two equivalent semantics for a sub-probabilistic while-programming language. One of these interprets programs as sub-probability distributions on state spaces via denotational semantics. The other interprets programs as bounded expectation transformers via wp-semantics. This paper proposes an axiomatic system for total logic, and proves its soundness and completeness in a classical pattern on the structure of programs.
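The denotational reading of sub-probabilistic choice can be sketched as a combinator on programs viewed as state-to-sub-distribution maps; the representation and names below are my own illustration, not the paper's formalism:

```python
def choice(p, prog_p, q, prog_q):
    """Sub-probabilistic choice 'P p(+)q Q' with p + q <= 1: run P with
    probability p and Q with probability q; the missing mass 1 - p - q
    models non-termination, so outputs are sub-distributions."""
    assert p >= 0 and q >= 0 and p + q <= 1
    def run(state):
        dist = {}
        for s, w in prog_p(state).items():
            dist[s] = dist.get(s, 0.0) + p * w
        for s, w in prog_q(state).items():
            dist[s] = dist.get(s, 0.0) + q * w
        return dist
    return run

skip = lambda s: {s: 1.0}        # deterministic programs as point masses
incr = lambda s: {s + 1: 1.0}
prog = choice(0.5, skip, 0.3, incr)
d = prog(0)                      # {0: 0.5, 1: 0.3}, total mass 0.8
```

The total mass of the result (here 0.8) is at most 1, which is exactly the "sub-probability distribution" property the abstract describes.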
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
Energy Technology Data Exchange (ETDEWEB)
Schneider, Michael D.; Dawson, William A. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Hogg, David W. [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States); Marshall, Philip J.; Bard, Deborah J. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Meyers, Joshua [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94035 (United States); Lang, Dustin, E-mail: schneider42@llnl.gov [Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States)
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge.
Lau, Jey Han; Clark, Alexander; Lappin, Shalom
2016-10-12
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic.
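One acceptability measure of the normalized kind described here is the syntactic log-odds ratio (SLOR), which divides out sentence length and subtracts unigram (lexical-frequency) probability; a toy sketch with illustrative numbers:

```python
def slor(sentence_logprob, unigram_logprob, length):
    """Syntactic log-odds ratio: subtracting the unigram log probability
    removes the lexical-frequency confound; dividing by length removes
    the sentence-length confound."""
    return (sentence_logprob - unigram_logprob) / length

# toy: a 5-word sentence with LM log prob -20 and unigram log prob -35
score = slor(-20.0, -35.0, 5)    # -> 3.0
```

After this normalization, two sentences of different lengths or word frequencies can be compared on a common acceptability scale.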
Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.
Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M
2016-06-24
Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
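Behavioral recognition with PFAs amounts to scoring an observed trace under each candidate model and picking the most likely one. A minimal sketch (the deterministic transition structure, symbols and model names are illustrative, not from the paper):

```python
import math

def pfa_log_likelihood(trans, start, seq):
    """Log-likelihood of a symbol sequence under a PFA given as
    trans[state][symbol] = (next_state, prob); a deterministic
    transition structure keeps the sketch short."""
    state, ll = start, 0.0
    for sym in seq:
        if sym not in trans[state]:
            return float('-inf')   # trace impossible under this model
        state, p = trans[state][sym]
        ll += math.log(p)
    return ll

# behavioral recognition: score a trace under each candidate behaviour
models = {'clean': {0: {'a': (0, 0.9), 'b': (0, 0.1)}},
          'messy': {0: {'a': (0, 0.2), 'b': (0, 0.8)}}}
trace = ['a', 'a', 'b', 'a']
best = max(models, key=lambda m: pfa_log_likelihood(models[m], 0, trace))
```

Cloning, the more ambitious task in the abstract, would instead estimate the transition probabilities themselves from observed traces.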
Dynamic probabilistic CCA for analysis of affective behaviour and fusion of continuous annotations
Nicolaou, Mihalis A.; Pavlovic, Vladimir; Pantic, Maja
2014-01-01
Fusing multiple continuous expert annotations is a crucial problem in machine learning and computer vision, particularly when dealing with uncertain and subjective tasks related to affective behavior. Inspired by the concept of inferring shared and individual latent spaces in Probabilistic Canonical
Ip, Kuhn; Donoghue, Neil; Kim, Min Kyung; Lun, Desmond S
2014-10-01
Constraint-based modeling has been shown, in many instances, to be useful for metabolic engineering by allowing the prediction of the metabolic phenotype resulting from genetic manipulations. But the basic premise of constraint-based modeling, that of applying constraints to preclude certain behaviors, only makes sense for certain genetic manipulations (such as knockouts and knockdowns). In particular, when genes (such as those associated with a heterologous pathway) are introduced under artificial control, it is unclear how to predict the correct behavior. In this paper, we introduce a modeling method that we call proportional flux forcing (PFF) to model artificially induced enzymatic genes. The model modifications introduced by PFF can be transformed into a set of simple mass balance constraints, which allows computational methods for strain optimization based on flux balance analysis (FBA) to be utilized. We applied PFF to the metabolic engineering of Escherichia coli (E. coli) for free fatty acid (FFA) production, a metabolic engineering problem that has attracted significant attention because FFAs are a precursor to liquid transportation fuels such as biodiesel and biogasoline. We show that PFF used in conjunction with FBA-based computational strain optimization methods can yield non-obvious genetic manipulation strategies that significantly increase FFA production in E. coli. The two mutant strains constructed and successfully tested in this work had peak fatty acid (FA) yields of 0.050 g FA/g carbon source (17.4% of theoretical yield) and 0.035 g FA/g carbon source (12.3% of theoretical yield) when they were grown using a mixed carbon source of glucose and casamino acids in a ratio of 2-to-1. These yields represent increases of 5.4- and 3.8-fold, respectively, over the baseline strain.
Probabilistic Fatigue Damage Program (FATIG)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) the traditional method using Miner's rule with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) the classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
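The classical closed form behind method (b), Miner's rule integrated over Rayleigh-distributed stress amplitudes, reduces to a Gamma-function expression. A sketch of that textbook narrow-band result (assuming an S-N curve of the form N(S) = K * S^(-b); this is not the FATIG source):

```python
import math

def rayleigh_fatigue_damage(n_cycles, sigma_rms, K, b):
    """Closed-form Miner damage for a narrow-band Gaussian stress
    process: amplitudes are Rayleigh distributed and the S-N curve is
    N(S) = K * S**(-b); integrating Miner's rule over all amplitudes
    gives D = (n / K) * (sqrt(2) * sigma)**b * Gamma(1 + b / 2)."""
    return (n_cycles / K) * (math.sqrt(2.0) * sigma_rms) ** b \
        * math.gamma(1.0 + b / 2.0)

# sanity check: for b = 2 the expression collapses to 2 * n * sigma**2 / K
damage = rayleigh_fatigue_damage(1000, 1.0, 1e6, 2.0)
```

Fatigue life then follows by solving D = 1 for the number of cycles, which is how a damage formula like this turns into a life estimate.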
The Complexity of Probabilistic Lobbying
Erdélyi, Gábor; Goldsmith, Judy; Mattei, Nicholas; Raible, Daniel; Rothe, Jörg
2009-01-01
We propose various models for lobbying in a probabilistic environment, in which an actor (called "The Lobby") seeks to influence the voters' preferences of voting for or against multiple issues when the voters' preferences are represented in terms of probabilities. In particular, we provide two evaluation criteria and three bribery methods to formally describe these models, and we consider the resulting forms of lobbying with and without issue weighting. We provide a formal analysis for these problems of lobbying in a stochastic environment, and determine their classical and parameterized complexity depending on the given bribery/evaluation criteria. Specifically, we show that some of these problems can be solved in polynomial time, some are NP-complete but fixed-parameter tractable, and some are W[2]-complete. Finally, we provide (in)approximability results.
Probabilistic simulation of fire scenarios
Energy Technology Data Exchange (ETDEWEB)
Hostikka, Simo E-mail: simo.hostikka@vtt.fi; Keski-Rahkonen, Olavi
2003-10-01
A risk analysis tool is developed for computation of the distributions of fire model output variables. The tool, called Probabilistic Fire Simulator (PFS), combines Monte Carlo simulation and CFAST, a two-zone fire model. In this work, the tool is used to estimate the failure probability of redundant cables in a cable tunnel fire, and the failure and smoke filling probabilities in an electronics room during an electronics cabinet fire. Sensitivity of the output variables to the input variables is calculated in terms of the rank order correlations. The use of the rank order correlations allows the user to identify both modelling parameters and actual facility properties that have the most influence on the results. Various steps of the simulation process, i.e. data collection, generation of the input distributions, modelling assumptions, definition of the output variables and the actual simulation, are described.
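The rank-order (Spearman) correlation used by PFS for sensitivity ranking can be sketched in a few lines; the toy "fire model" below is a stand-in with illustrative inputs, not CFAST:

```python
import random

def ranks(xs):
    """1-based ranks (ties ignored for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2.0
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = (sum((a - mean) ** 2 for a in rx)
           * sum((b - mean) ** 2 for b in ry)) ** 0.5
    return num / den

# toy stand-in for a fire model: the output depends strongly on heat
# release rate (hrr) and only weakly on ambient temperature (amb)
random.seed(1)
hrr = [random.uniform(0.5, 2.0) for _ in range(500)]
amb = [random.uniform(15.0, 25.0) for _ in range(500)]
out = [h ** 2 + 0.001 * t for h, t in zip(hrr, amb)]
```

The input with the larger |spearman(input, out)| is the one that most influences the result, which is how a tool like PFS flags the dominant modelling parameters and facility properties.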
Probabilistic direct counterfactual quantum communication
Zhang, Sheng
2017-02-01
It is striking that the quantum Zeno effect can be used to launch a direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. Firstly, the transmission takes much longer than a classical transmission. Secondly, the chained-cycle structure makes these protocols more sensitive to channel noises. Here, we extend the idea of counterfactual communication and present a probabilistic-counterfactual quantum communication protocol, which is proved to have advantages over the deterministic ones. Moreover, the presented protocol could evolve to a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).
Probabilistic cloning with supplementary information
Azuma, K; Koashi, M; Imoto, N; Azuma, Koji; Shimamura, Junichi; Koashi, Masato; Imoto, Nobuyuki
2005-01-01
We consider probabilistic cloning of a state chosen from a mutually nonorthogonal set of pure states, with the help of a party holding supplementary information in the form of pure states. When the number of states is two, we show that the best efficiency of producing m copies is always achieved by a two-step protocol in which the helping party first attempts to produce m-1 copies from the supplementary state, and if it fails, then the original state is used to produce m copies. On the other hand, when the number of states exceeds two, the best efficiency is not always achieved by such a protocol. We give examples in which the best efficiency is not achieved even if we allow any amount of one-way classical communication from the helping party.
Probabilistic Modeling of Graded Timber Material Properties
DEFF Research Database (Denmark)
Faber, M. H.; Köhler, J.; Sørensen, John Dalsgaard
2004-01-01
The probabilistic modeling of timber material characteristics is considered with special emphasis on the modeling of the effect of different quality control and selection procedures used as means for quality grading in the production line. It is shown how statistical models may be established...... an important role in the overall probabilistic modeling. Therefore a scheme for estimating probability distribution parameters focusing on the tail behavior has been established, using a censored maximum-likelihood estimation technique. The proposed probabilistic models have been formulated...
Probabilistic UML statecharts for specification and verification: a case study
Jansen, D.N.; Jürjens, J.; Cengarle, M.V.; Fernandez, E.B.; Rumpe, B.; Sander, R.
2002-01-01
This paper introduces a probabilistic extension of UML statecharts. A requirements-level semantics of statecharts is extended to include probabilistic elements. Desired properties of probabilistic statecharts are expressed in the probabilistic logic PCTL and verified using the model checker PRISM.
Energy Technology Data Exchange (ETDEWEB)
Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George
2017-04-21
The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine-learning-based, data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian networks and hidden Markov models are introduced as examples of widely used data-driven classification/modeling strategies.
Probabilistic analysis of linear elastic cracked structures
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper presents a probabilistic methodology for linear fracture mechanics analysis of cracked structures. The main focus is on the probabilistic aspects related to the nature of cracks in the material. The methodology involves finite element analysis; statistical models for uncertainty in material properties, crack size, fracture toughness and loads; and standard reliability methods for evaluating the probabilistic characteristics of the linear elastic fracture parameters. The uncertainty in the crack size can have a significant effect on the probability of failure, particularly when the crack size has a large coefficient of variation. A numerical example is presented to show that the probabilistic methodology based on Monte Carlo simulation provides accurate estimates of the failure probability for use in linear elastic fracture mechanics.
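A minimal Monte Carlo version of such an analysis can be sketched as follows, assuming the standard LEFM criterion K_I = Y·σ·√(πa) > K_Ic; the input distributions and all numerical values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Assumed (illustrative) input distributions -- not the paper's data:
a = rng.lognormal(np.log(10e-3), 0.3, n)     # crack depth [m]
stress = rng.normal(300e6, 30e6, n)          # applied stress [Pa]
kic = rng.normal(60e6, 6e6, n)               # fracture toughness [Pa*sqrt(m)]
Y = 1.12                                     # geometry factor (surface crack)

# LEFM failure criterion: stress intensity K_I exceeds toughness K_Ic.
k1 = Y * stress * np.sqrt(np.pi * a)
pf = float(np.mean(k1 > kic))
print(f"estimated failure probability: {pf:.3f}")
```

Because `pf` is a sample mean of an indicator variable, its Monte Carlo standard error shrinks as 1/√n, which is why a large sample is used for small failure probabilities.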
Structural reliability codes for probabilistic design
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
1997-01-01
difficulties of ambiguity and definition show up when attempting to make the transition from a given authorized partial safety factor code to a superior probabilistic code. For any chosen probabilistic code format there is a considerable variation of the reliability level over the set of structures defined...... considerable variation of the reliability measure as defined by a specific probabilistic code format. Decision theoretical principles are applied to get guidance about which of these different reliability levels of existing practice to choose as the target reliability level. Moreover, it is shown that the chosen...... probabilistic code format has not only a strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can differ by several orders of magnitude for two...
Revising incompletely specified convex probabilistic belief bases
CSIR Research Space (South Africa)
Rens, G
2016-04-01
International Workshop on Non-Monotonic Reasoning (NMR), 22-24 April 2016, Cape Town, South Africa. Revising Incompletely Specified Convex Probabilistic Belief Bases. Gavin Rens, CAIR, University of KwaZulu-Natal, School of Mathematics, Statistics...
A logic for inductive probabilistic reasoning
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from "70% of As are Bs" and "a is an A" infer...... that a is a B with probability 0.7. Direct inference is generalized by Jeffrey's rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have...... to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e., by inductive probabilistic reasoning. In this paper a formal framework...
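The two inference patterns named above are simple enough to state in a few lines of Python; the numbers are the abstract's own 70% example plus an invented conditional for the complement class.

```python
# Direct inference: statistical background "70% of As are Bs" plus
# "a is an A" yields the subjective probability P(B(a)) = 0.7.
p_B_given_A = 0.7
p_Ba = p_B_given_A  # the direct-inference step

# Jeffrey's rule: revise belief over the partition {A, not-A} to new
# weights while keeping the conditionals P(B|A), P(B|not-A) fixed.
def jeffrey_update(p_b_given_a, p_b_given_not_a, new_p_a):
    return p_b_given_a * new_p_a + p_b_given_not_a * (1 - new_p_a)

# Example: an observation shifts belief in A(a) from certainty to 0.8;
# P(B|not-A) = 0.1 is an invented illustrative value.
print(jeffrey_update(0.7, 0.1, 0.8))  # 0.7*0.8 + 0.1*0.2 = 0.58
```

When the new weight on A is 1, Jeffrey's rule collapses back to direct inference, which is the sense in which it generalizes it.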
Non-unitary probabilistic quantum computing
Gingrich, Robert M.; Williams, Colin P.
2004-01-01
We present a method for designing quantum circuits that perform non-unitary quantum computations on n-qubit states probabilistically, and give analytic expressions for the success probability and fidelity.
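The abstract does not spell out the circuit construction, but a standard way to realize a non-unitary operator probabilistically is to rescale it so its largest singular value is 1 and embed it as one branch of a two-outcome measurement; the sketch below, with an invented 2x2 operator, computes the resulting success probability.

```python
import numpy as np

def probabilistic_apply(M, psi):
    """Apply a non-unitary operator M probabilistically.

    Rescale M so its largest singular value is 1; then {M, sqrt(I - M^dag M)}
    is a valid two-outcome measurement, and the 'success' branch applies M
    with probability ||M psi||^2. This is a generic construction, not the
    paper's specific circuit-design method.
    """
    s = np.linalg.svd(M, compute_uv=False).max()
    Mn = M / s
    out = Mn @ psi
    p_success = float(np.vdot(out, out).real)
    return out / np.sqrt(p_success), p_success

# Example: a non-unitary 'filter' that favours |0> (invented operator).
M = np.array([[1.0, 0.0], [0.0, 0.5]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
phi, p = probabilistic_apply(M, psi)
print(p)  # ||M psi||^2 = 0.5*(1 + 0.25) = 0.625
```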
Do probabilistic forecasts lead to better decisions?
Ramos, M. H.; van Andel, S. J.; Pappenberger, F.
2013-06-01
The last decade has seen growing research in producing probabilistic hydro-meteorological forecasts and increasing their reliability. This followed the promise that, supplied with information about uncertainty, people would take better risk-based decisions. In recent years, therefore, research and operational developments have also started focusing attention on ways of communicating the probabilistic forecasts to decision-makers. Communicating probabilistic forecasts includes preparing tools and products for visualisation, but also requires understanding how decision-makers perceive and use uncertainty information in real time. At the EGU General Assembly 2012, we conducted a laboratory-style experiment in which several cases of flood forecasts and a choice of actions to take were presented as part of a game to participants, who acted as decision-makers. Answers were collected and analysed. In this paper, we present the results of this exercise and discuss if we indeed make better decisions on the basis of probabilistic forecasts.
Safety Verification for Probabilistic Hybrid Systems
DEFF Research Database (Denmark)
Zhang, Lijun; She, Zhikun; Ratschan, Stefan;
2010-01-01
The interplay of random phenomena and continuous real-time control deserves increased attention for instance in wireless sensing and control applications. Safety verification for such systems thus needs to consider probabilistic variations of systems with hybrid dynamics. In safety verification...... hybrid systems and develop a general abstraction technique for verifying probabilistic safety problems. This gives rise to the first mechanisable technique that can, in practice, formally verify safety properties of non-trivial continuous-time stochastic hybrid systems-without resorting to point...... of classical hybrid systems we are interested in whether a certain set of unsafe system states can be reached from a set of initial states. In the probabilistic setting, we may ask instead whether the probability of reaching unsafe states is below some given threshold. In this paper, we consider probabilistic...
Safety Verification for Probabilistic Hybrid Systems
DEFF Research Database (Denmark)
Zhang, Lijun; She, Zhikun; Ratschan, Stefan;
2012-01-01
The interplay of random phenomena and continuous dynamics deserves increased attention, especially in the context of wireless sensing and control applications. Safety verification for such systems thus needs to consider probabilistic variants of systems with hybrid dynamics. In safety verification...... hybrid systems and develop a general abstraction technique for verifying probabilistic safety problems. This gives rise to the first mechanisable technique that can, in practice, formally verify safety properties of non-trivial continuous-time stochastic hybrid systems. Moreover, being based...... of classical hybrid systems, we are interested in whether a certain set of unsafe system states can be reached from a set of initial states. In the probabilistic setting, we may ask instead whether the probability of reaching unsafe states is below some given threshold. In this paper, we consider probabilistic...
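The threshold question posed in both abstracts can be illustrated on a finite abstraction: for a discrete-time Markov chain with an absorbing unsafe state, the reachability probability is the least fixed point of p = T·p with p(unsafe) = 1. The chain below is invented for illustration; the papers treat continuous-time stochastic hybrid systems.

```python
import numpy as np

# Discrete-time Markov chain abstraction of a probabilistic hybrid system.
# States: 0 = init, 1 = degraded, 2 = unsafe (absorbing), 3 = safe sink.
T = np.array([
    [0.90, 0.08, 0.01, 0.01],
    [0.10, 0.70, 0.15, 0.05],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

p = np.zeros(4)
p[2] = 1.0                          # unsafe state reaches itself surely
for _ in range(100_000):            # fixed-point iteration for reachability
    p_new = T @ p
    p_new[2], p_new[3] = 1.0, 0.0   # keep the absorbing states pinned
    done = np.max(np.abs(p_new - p)) < 1e-13
    p = p_new
    if done:
        break

print(f"P(reach unsafe from init) = {p[0]:.4f}")
print("within threshold 0.5:", bool(p[0] <= 0.5))
```

For this toy chain the exact answer is 7.5/11 ≈ 0.68, so the safety property with threshold 0.5 fails; a model checker such as PRISM performs essentially this computation on much larger state spaces.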
Probabilistic composition of preferences, theory and applications
Parracho Sant'Anna, Annibal
2015-01-01
Putting forward a unified presentation of the features and possible applications of probabilistic preference composition, and serving as a methodology for decisions employing multiple criteria, this book maximizes reader insight into evaluation in probabilistic terms and into the development of composition approaches that do not depend on assigning weights to the criteria. With key applications in important areas of management such as failure modes and effects analysis and productivity analysis – together with explanations about the application of the concepts involved – this book provides numerical examples of probabilistic transformation development and probabilistic composition. Useful not only as a reference source for researchers, but also for teaching graduate courses in Production Engineering and Management Science, the key themes of the book will be of special interest to researchers in the field of Operational Research.
Strategic Team AI Path Plans: Probabilistic Pathfinding
Directory of Open Access Journals (Sweden)
Tng C. H. John
2008-01-01
This paper proposes a novel method to generate strategic team AI pathfinding plans for computer games and simulations using probabilistic pathfinding. The method is inspired by genetic algorithms (Russell and Norvig, 2002), in that a fitness function is used to test the quality of the path plans. The method generates high-quality path plans by eliminating the low-quality ones. The path plans are generated by probabilistic pathfinding, and the elimination is done by a fitness test of the path plans. This path plan generation method has the ability to generate varied high-quality paths, which is desirable for games to increase replay value. This work is an extension of our earlier work on team AI: probabilistic pathfinding (John et al., 2006). We explore ways to combine probabilistic pathfinding and genetic algorithms to create a new method to generate strategic team AI pathfinding plans.
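A toy version of the generate-and-eliminate idea can be sketched as follows: candidate paths come from a goal-biased random walk (a stand-in for the paper's probabilistic pathfinding), and a fitness function keeps the best plan. The grid, weighting scheme and fitness are all invented for illustration.

```python
import random

random.seed(7)
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def random_walk_path(start, goal, max_steps=200):
    """Probabilistic pathfinding stand-in: goal-biased random walk."""
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        # Prefer moves that reduce Manhattan distance to the goal, but keep
        # randomness so repeated runs yield different (varied) paths.
        weights = []
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            d = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
            weights.append(1.0 / (1.0 + d) ** 2)
        dx, dy = random.choices(MOVES, weights=weights)[0]
        pos = (pos[0] + dx, pos[1] + dy)
        path.append(pos)
    return path if pos == goal else None

def fitness(path):
    return -len(path)  # shorter paths score higher

# Generate candidate plans, then eliminate low-quality ones by fitness.
candidates = [p for p in (random_walk_path((0, 0), (5, 5)) for _ in range(50)) if p]
best = max(candidates, key=fitness)
print("best path length:", len(best))
```

Re-running without the fixed seed yields different high-quality paths, which is exactly the variation the paper values for replayability.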
Application of probabilistic precipitation forecasts from a ...
African Journals Online (AJOL)
Application of probabilistic precipitation forecasts from a deterministic model towards increasing the lead-time of flash flood forecasts in South Africa. ... An ensemble set of 30 adjacent basins is then identified as ensemble members for each ...
Probabilistic Analysis Methods for Hybrid Ventilation
DEFF Research Database (Denmark)
Brohus, Henrik; Frier, Christian; Heiselberg, Per
This paper discusses a general approach for the application of probabilistic analysis methods in the design of ventilation systems. The aims and scope of probabilistic versus deterministic methods are addressed with special emphasis on hybrid ventilation systems. A preliminary application of stochastic differential equations is presented, comprising a general heat balance for an arbitrary number of loads and zones in a building to determine the thermal behaviour under random conditions....
PROBABILISTIC METHODOLOGY OF LOW CYCLE FATIGUE ANALYSIS
Institute of Scientific and Technical Information of China (English)
Jin Hui; Wang Jinnuo; Wang Libin
2003-01-01
The cyclic stress-strain response (CSSR), Neuber's rule (NR) and the cyclic strain-life relation (CSLR) are treated as probabilistic curves in the local stress-strain method of low cycle fatigue analysis. The randomness of loading and the theory of fatigue damage accumulation (TOFDA) are considered. The probabilistic analyses of local stress, local strain and fatigue life are constructed based on first-order Taylor series expansions. Through the proposed method, fatigue reliability analysis can be accomplished.
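The first-order Taylor propagation step can be sketched generically: approximate the mean of the fatigue life by evaluating the strain-life relation at the input means, and its variance by summing squared sensitivities times input variances. The Coffin-Manson form and all numbers below are illustrative, not the paper's.

```python
import math

# Coffin-Manson strain-life relation: eps_a = ef * (2N)^c, solved for N.
def life(eps_a, ef, c):
    return 0.5 * (eps_a / ef) ** (1.0 / c)

# Illustrative means and standard deviations of the random inputs:
mu = {"eps_a": 0.004, "ef": 0.26, "c": -0.6}
sd = {"eps_a": 0.0004, "ef": 0.03, "c": 0.02}

# First-order Taylor series: mean ~ f(mu), var ~ sum_i (df/dx_i)^2 var_i,
# with the partial derivatives estimated by central finite differences.
mean_N = life(**mu)
var_N = 0.0
for k in mu:
    h = 1e-6 * abs(mu[k])
    hi = dict(mu); hi[k] += h
    lo = dict(mu); lo[k] -= h
    dfdx = (life(**hi) - life(**lo)) / (2 * h)
    var_N += (dfdx * sd[k]) ** 2

print(f"mean life ~ {mean_N:.0f} cycles, std ~ {math.sqrt(var_N):.0f}")
```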
DEMPSTER-SHAFER THEORY BY PROBABILISTIC REASONING
Directory of Open Access Journals (Sweden)
Chiranjib Mukherjee
2015-10-01
Probabilistic reasoning is used when outcomes are unpredictable. We examine methods that use probabilistic representations for all knowledge and that reason by propagating the uncertainties arising from evidence and assertions to conclusions. The uncertainties can arise from an inability to predict outcomes due to unreliable, vague, incomplete or inconsistent knowledge. We also survey some approaches taken in artificial intelligence systems to deal with reasoning under such uncertain conditions.
Probabilistic nature in L/H transition
Energy Technology Data Exchange (ETDEWEB)
Toda, Shinichiro; Itoh, Sanae-I.; Yagi, Masatoshi [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka; Fukuyama, Atsushi
1999-11-01
A statistical picture of the excitation of a plasma transition, which occurs in a strongly turbulent state, is examined. The physical picture of transition phenomena is extended to include statistical variances. The dynamics of the plasma density and the turbulence-driven flux is studied, with hysteresis in the flux-density relation. Probabilistic excitation is predicted and the critical conditions are described by the probabilistic distribution function. The stability of the model equations is also discussed. (author)
Semantics of probabilistic processes an operational approach
Deng, Yuxin
2015-01-01
This book discusses the semantic foundations of concurrent systems with nondeterministic and probabilistic behaviour. Particular attention is given to clarifying the relationship between testing and simulation semantics and characterising bisimulations from metric, logical, and algorithmic perspectives. Besides presenting recent research outcomes in probabilistic concurrency theory, the book exemplifies the use of many mathematical techniques to solve problems in computer science, which is intended to be accessible to postgraduate students in Computer Science and Mathematics. It can also be us
A probabilistic graphical model approach to stochastic multiscale partial differential equations
Energy Technology Data Exchange (ETDEWEB)
Wan, Jiang [Materials Process Design and Control Laboratory, Sibley School of Mechanical and Aerospace Engineering, Cornell University, 101 Frank H.T. Rhodes Hall, Ithaca, NY 14853-3801 (United States); Zabaras, Nicholas, E-mail: nzabaras@gmail.com [Materials Process Design and Control Laboratory, Sibley School of Mechanical and Aerospace Engineering, Cornell University, 101 Frank H.T. Rhodes Hall, Ithaca, NY 14853-3801 (United States); Center for Applied Mathematics, Cornell University, 657 Frank H.T. Rhodes Hall, Ithaca, NY 14853 (United States)
2013-10-01
We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.
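Belief propagation, the prediction step mentioned above, is easy to state on a chain-structured model; the sketch below runs sum-product message passing on an invented three-variable binary chain, not the paper's multiscale PDE model.

```python
import numpy as np

# Sum-product belief propagation on a chain-structured graphical model:
# x1 -- x2 -- x3, binary variables, unary potentials phi, pairwise psi.
phi = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
psi = np.array([[0.9, 0.1], [0.1, 0.9]])   # favours agreement of neighbours

# Forward and backward messages along the chain (boundary messages = 1).
fwd = [np.ones(2) for _ in range(3)]
bwd = [np.ones(2) for _ in range(3)]
for i in range(1, 3):
    fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])
for i in range(1, -1, -1):
    bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])

# Node marginals: unary potential times both incoming messages, normalised.
for i in range(3):
    b = phi[i] * fwd[i] * bwd[i]
    print(i, np.round(b / b.sum(), 3))
```

On a tree-structured graph one forward and one backward sweep give exact marginals; on loopy graphs the same updates are iterated as an approximation.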
Directory of Open Access Journals (Sweden)
Bailey Timothy L
2006-02-01
Background: The structure of proteins may change as a result of the inherent flexibility of some protein regions. We develop and explore probabilistic machine learning methods for predicting a continuum secondary structure, i.e. assigning probabilities to the conformational states of a residue. We train our methods using data derived from high-quality NMR models. Results: Several probabilistic models not only successfully estimate the continuum secondary structure, but also provide a categorical output on par with models directly trained on categorical data. Importantly, models trained on the continuum secondary structure are also better than their categorical counterparts at identifying the conformational state for structurally ambivalent residues. Conclusion: Cascaded probabilistic neural networks trained on the continuum secondary structure exhibit better accuracy in structurally ambivalent regions of proteins, while sustaining an overall classification accuracy on par with standard, categorical prediction methods.
Directory of Open Access Journals (Sweden)
Resendis-Antonio Osbaldo
2011-07-01
Background: Bacterial nitrogen fixation is the biological process by which atmospheric nitrogen is taken up by bacteroids located in plant root nodules and converted into ammonium through the enzymatic activity of nitrogenase. In practice, this biological process serves as a natural form of fertilization and its optimization has significant implications in sustainable agricultural programs. Currently, the advent of high-throughput technology supplies valuable data that contribute to understanding the metabolic activity during bacterial nitrogen fixation. This undertaking is not trivial, and the development of computational methods useful in accomplishing an integrative, descriptive and predictive framework is a crucial issue in decoding the principles that regulate the metabolic activity of this biological process. Results: In this work we present a systems biology description of the metabolic activity in bacterial nitrogen fixation. This was accomplished by an integrative analysis involving high-throughput data and constraint-based modeling to characterize the metabolic activity in Rhizobium etli bacteroids located at the root nodules of Phaseolus vulgaris (bean plant). Proteome and transcriptome technologies led us to identify 415 proteins and 689 up-regulated genes that orchestrate this biological process. Taking into account these data, we: 1) extended the metabolic reconstruction reported for R. etli; 2) simulated the metabolic activity during symbiotic nitrogen fixation; and 3) evaluated the in silico results in terms of bacterial phenotype. Notably, constraint-based modeling simulated nitrogen fixation activity in such a way that 76.83% of the enzymes and 69.48% of the genes were experimentally justified. Finally, to further assess the predictive scope of the computational model, gene deletion analysis was carried out on nine metabolic enzymes. Our model concluded that an altered metabolic activity on these enzymes induced
Probabilistic Prediction of Lifetimes of Ceramic Parts
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
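The stochastic-strength ingredient of CARES/Life is commonly based on Weibull statistics; as an illustration only (CARES/Life actually integrates the strength statistics over the component's transient stress field), a two-parameter Weibull failure probability looks like this:

```python
import math

def weibull_pf(sigma, sigma0, m):
    """Two-parameter Weibull probability of failure for a ceramic under a
    uniform uniaxial stress sigma (unit effective volume assumed).
    sigma0 is the characteristic strength and m the Weibull modulus.
    Illustrative sketch only, not the CARES/Life implementation."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

# A higher Weibull modulus means less strength scatter, hence a sharper
# transition from survival to failure as stress approaches sigma0.
for m in (5, 10, 20):
    print(m, round(weibull_pf(sigma=300.0, sigma0=400.0, m=m), 4))
```

With m = 10, moving the applied stress from 300 to 400 MPa takes the failure probability from roughly 5% to 1 - 1/e ≈ 63%, the defining property of the characteristic strength.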
Probabilistic Choice, Reversibility, Loops, and Miracles
Stoddart, Bill; Bell, Pete
We consider an addition of probabilistic choice to Abrial's Generalised Substitution Language (GSL) in a form that accommodates the backtracking interpretation of non-deterministic choice. Our formulation is introduced as an extension of the Prospective Values formalism we have developed to describe the results from a backtracking search. Significant features are that probabilistic choice is governed by feasibility, and non-termination is strict. The former property allows us to use probabilistic choice to generate search heuristics. In this paper we are particularly interested in iteration. By demonstrating sub-conjunctivity and monotonicity properties of expectations we give the basis for a fixed point semantics of iterative constructs, and we consider the practical proof treatment of probabilistic loops. We discuss loop invariants, loops with probabilistic behaviour, and probabilistic termination in the context of a formalism in which a small probability of non-termination can dominate our calculations, proposing a method of limits to avoid this problem. The formal programming constructs described have been implemented in a reversible virtual machine (RVM).
Refinement for Probabilistic Systems with Nondeterminism
Directory of Open Access Journals (Sweden)
David Streader
2011-06-01
Before we combine actions and probabilities, two very obvious questions should be asked. Firstly, what does "the probability of an action" mean? Secondly, how does probability interact with nondeterminism? Neither question has a single universally agreed-upon answer, but by considering these questions at the outset we build a novel and hopefully intuitive probabilistic event-based formalism. In previous work we have characterised refinement via the notion of testing. Basically, if one system passes all the tests that another system passes (and maybe more), we say the first system is a refinement of the second. This is, in our view, an important way of characterising refinement, answering the question "what sort of refinement should I be using?" We use testing in this paper as the basis for our refinement. We develop tests for probabilistic systems by analogy with the tests developed for non-probabilistic systems. We make sure that our probabilistic tests, when performed on non-probabilistic automata, give us refinement relations which agree with those for non-probabilistic automata. We formalise this property as a vertical refinement.
Automating 3D reconstruction using a probabilistic grammar
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2015-10-01
3D reconstruction of objects from point clouds acquired with a laser scanner is still a laborious task in many applications. Automating the 3D reconstruction process is an ongoing research topic, complicated by the complex structure of the data. The main difficulty is the lack of knowledge about the structure of real-world objects. In this paper, we accumulate such structural knowledge in a probabilistic grammar learned from examples in the same category. The rules of the grammar capture compositional structure at different levels, and a feature-dependent probability function is attached to every rule. The learned grammar can be used to parse new 3D point clouds, organize segmented patches in a hierarchical way, and assign them meaningful labels. The parsed semantics can then be used to guide the reconstruction algorithms automatically. Examples are given to illustrate the method.
A probabilistic model for component-based shape synthesis
Kalogerakis, Evangelos
2012-07-01
We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.
Computing Distances between Probabilistic Automata
Directory of Open Access Journals (Sweden)
Mathieu Tracol
2011-07-01
We present relaxed notions of simulation and bisimulation on Probabilistic Automata (PA) that allow some error epsilon. When epsilon is zero we retrieve the usual notions of bisimulation and simulation on PAs. We give logical characterisations of these notions by choosing suitable logics which differ from the elementary ones, L with negation and L without negation, by the modal operator. Using flow networks, we show how to compute the relations in PTIME. This allows the definition of an efficiently computable non-discounted distance between the states of a PA. A natural modification of this distance is introduced to obtain a discounted distance, which weakens the influence of long-term transitions. We compare our notions of distance to others previously defined and illustrate our approach on various examples. We also show that our distance is not expansive with respect to process algebra operators. Although L without negation is a suitable logic to characterise epsilon-(bi)simulation on deterministic PAs, it is not for general PAs; interestingly, we prove that it does characterise weaker notions, called a priori epsilon-(bi)simulation, which we prove to be NP-hard to decide.
Pearl A Probabilistic Chart Parser
Magerman, D M; Magerman, David M.; Marcus, Mitchell P.
1994-01-01
This paper describes a natural language parsing algorithm for unrestricted text which uses a probability-based scoring function to select the "best" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction which pursues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpretation. This parser differs from previous attempts at stochastic parsers in that it uses a richer form of conditional probabilities based on context to predict likelihood. Pearl also provides a framework for incorporating the results of previous work in part-of-speech assignment, unknown word models, and other probabilistic models of linguistic features into one parsing tool, interleaving these techniques instead of using the traditional pipeline architecture. In preliminary tests, Pearl has been successful at resolving part-of-speech and word (in speech processing) ambiguity, dete...
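Pearl's scoring uses richer, context-conditioned probabilities; as a simpler stand-in, the following sketch scores parses with a plain PCFG using the Viterbi variant of probabilistic CYK (keep the best-scoring derivation per span). The toy grammar is invented.

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form: (lhs, rhs-tuple) -> rule probability.
rules = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("she",)): 0.6, ("NP", ("fish",)): 0.4,
    ("VP", ("V", "NP")): 0.7, ("VP", ("eats",)): 0.3,
    ("V", ("eats",)): 1.0,
}

def cyk_best(words):
    """Return the probability of the best parse rooted in S."""
    n = len(words)
    best = defaultdict(float)            # (i, j, symbol) -> best probability
    for i, w in enumerate(words):        # fill in the lexical (width-1) spans
        for (lhs, rhs), p in rules.items():
            if rhs == (w,):
                best[i, i + 1, lhs] = max(best[i, i + 1, lhs], p)
    for span in range(2, n + 1):         # combine smaller spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, rhs), p in rules.items():
                    if len(rhs) == 2:
                        score = p * best[i, k, rhs[0]] * best[k, j, rhs[1]]
                        best[i, j, lhs] = max(best[i, j, lhs], score)
    return best[0, n, "S"]

print(cyk_best(["she", "eats", "fish"]))  # 1.0 * 0.6 * (0.7 * 1.0 * 0.4) = 0.168
```

Pearl's chart parser differs in working bottom-up with top-down prediction and in conditioning rule scores on surrounding context rather than using a single probability per rule.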
Optimal probabilistic dense coding schemes
Kögler, Roger A.; Neves, Leonardo
2017-04-01
Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) The message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize on these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only for qudits (d-level quantum systems with d > 2) and consists of further attempts in the state identification process in case of failure in the first one. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.
Probabilistic description of traffic flow
Mahnke, R.; Kaupužs, J.; Lubashevsky, I.
2005-03-01
A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
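The one-step cluster process described above can be simulated directly with the Gillespie algorithm; the attachment/detachment rate ansatz below is a deliberately crude placeholder for the physically motivated rates discussed in the paper.

```python
import random

random.seed(3)

# One-step (birth-death) process for the size n of a car cluster (jam):
# attachment rate w_plus, detachment rate w_minus. Toy ansatz only; the
# paper derives the rates from mean headway distances in flow and jam.
def w_plus(n, flux_in=0.5):
    return flux_in                        # cars arrive at the cluster tail

def w_minus(n, tau=1.0):
    return (1.0 / tau) if n > 0 else 0.0  # the head car escapes the jam

def gillespie(n0, t_end):
    """Exact stochastic simulation of the one-step master equation."""
    n, t = n0, 0.0
    while t < t_end:
        rates = [w_plus(n), w_minus(n)]
        total = sum(rates)
        t += random.expovariate(total)    # exponential waiting time
        n += 1 if random.random() < rates[0] / total else -1
        n = max(n, 0)
    return n

sizes = [gillespie(n0=10, t_end=200.0) for _ in range(200)]
print("mean cluster size after relaxation:", sum(sizes) / len(sizes))
```

With detachment faster than attachment, as here, initial clusters dissolve; reversing the imbalance (or making `w_plus` grow with density) produces the jam growth regime, and the mean first passage time over the nucleation barrier can be estimated from the same trajectories.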
Probabilistic description of traffic breakdowns.
Kühne, Reinhart; Mahnke, Reinhard; Lubashevsky, Ihor; Kaupuzs, Jevgenijs
2002-06-01
We analyze the characteristic features of traffic breakdown. To describe this phenomenon we apply the probabilistic model regarding the jam emergence as the formation of a large car cluster on a highway. In these terms, the breakdown occurs through the formation of a certain critical nucleus in the metastable vehicle flow, which enables us to confine ourselves to a one-cluster model. We assume that, first, the growth of the car cluster is governed by the attachment of cars to the cluster, at a rate mainly determined by the mean headway distance between the cars in the vehicle flow and, maybe, also by the headway distance in the cluster. Second, the cluster dissolution is determined by the escape of cars from the cluster, at a rate that depends directly on the cluster size. The latter is justified using the available experimental data for the correlation properties of the synchronized mode. We write the appropriate master equation, converted then into the Fokker-Planck equation for the cluster distribution function, and analyze the formation of the critical car cluster due to the climb over a certain potential barrier. Further cluster growth irreversibly causes jam formation. Numerical estimates of the obtained characteristics are compared with the experimental data on traffic breakdown. In particular, we conclude that the characteristic intrinsic time scale of the breakdown phenomenon should be about 1 min, and we explain why the traffic volume interval inside which traffic breakdown is observed is fairly wide.
Dynamical systems probabilistic risk assessment.
Energy Technology Data Exchange (ETDEWEB)
Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Ames, Arlo Leroy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]
2014-03-01
Probabilistic Risk Assessment (PRA) is the primary tool used to risk-inform nuclear power regulatory and licensing activities. Risk-informed regulations are intended to reduce inherent conservatism in regulatory metrics (e.g., allowable operating conditions and technical specifications) which are built into the regulatory framework by quantifying both the total risk profile as well as the change in the risk profile caused by an event or action (e.g., in-service inspection procedures or power uprates). Dynamical Systems (DS) analysis has been used to understand unintended time-dependent feedbacks in both industrial and organizational settings. In dynamical systems analysis, feedback loops can be characterized and studied as a function of time to describe the changes to the reliability of plant Structures, Systems and Components (SSCs). While DS has been used in many subject areas, some even within the PRA community, it has not been applied toward creating long-time horizon, dynamic PRAs (with time scales ranging between days and decades depending upon the analysis). Understanding slowly developing dynamic effects, such as wear-out, on SSC reliabilities may be instrumental in ensuring a safely and reliably operating nuclear fleet. Improving the estimation of a plant's continuously changing risk profile will allow for more meaningful risk insights, greater stakeholder confidence in risk insights, and increased operational flexibility.
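A slowly developing wear-out effect on SSC reliability can be illustrated in miniature with a Weibull hazard whose unreliability accumulates over a decades-long horizon. The parameters below are hypothetical, not drawn from any plant PRA:

```python
import math

def weibull_reliability(t_years, beta=2.5, eta=40.0):
    """Survival probability of a component with a wear-out (increasing) hazard."""
    return math.exp(-((t_years / eta) ** beta))

for t in (1, 10, 20, 40):
    # unreliability 1 - R(t) grows as wear accumulates over the decades
    print(t, round(1.0 - weibull_reliability(t), 4))
```

A long-time-horizon dynamic PRA would feed such time-dependent reliabilities, and the feedback loops that drive them, back into the plant risk model rather than treating failure rates as constants.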
Energy Technology Data Exchange (ETDEWEB)
Mata, Jonatas F.C. da; Vasconcelos, Vanderley de; Mesquita, Amir Z., E-mail: jonatasfmata@yahoo.com.br, E-mail: vasconv@cdtn.br, E-mail: amir@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2015-07-01
The nuclear accident at Fukushima Daiichi, which occurred in Japan in 2011, prompted reflection worldwide on the management of nuclear and environmental licensing processes for existing nuclear reactors. One of the key lessons learned is that studies of Probabilistic Safety Assessment and Severe Accidents are becoming essential, even in the early stages of a nuclear development project. In Brazil, the Brazilian Nuclear Energy Commission (CNEN) conducts nuclear licensing, while the body responsible for environmental licensing is the Brazilian Institute of Environment and Renewable Natural Resources (IBAMA). In the scope of the licensing processes of these two institutions, the safety analysis is essentially deterministic, complemented by probabilistic studies. The Probabilistic Safety Assessment (PSA) is the study performed to evaluate the behavior of the nuclear reactor in a sequence of events that may lead to the melting of its core. It includes both probability and consequence estimation of these events, which are called Severe Accidents, allowing a risk assessment of the plant to be obtained. Thus, possible shortcomings in the design of systems are identified, providing a basis for safety assessment and improving safety. During environmental licensing, a Quantitative Risk Analysis (QRA), including probabilistic evaluations, is required to support the development of the Risk Analysis Study, the Risk Management Program, and the Emergency Plan. This article aims to provide an overview of probabilistic risk assessment methodologies and their applications in the nuclear and environmental licensing processes of nuclear reactors in Brazil. (author)
Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.
2016-11-01
Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.
Wallenius, Janne; Maaheimo, Hannu; Eerikäinen, Tero
2016-11-01
The metabolism of the butanol-producing bacterium Clostridium acetobutylicum was studied in a chemostat under glucose-limited conditions, under butanol stimulus, and in a reference cultivation. COnstraint-Based Reconstruction and Analysis (COBRA) was applied using additional constraints from (13)C Metabolic Flux Analysis ((13)C-MFA) and experimental measurements. A model consisting of 451 metabolites and 604 reactions was utilized in flux balance analysis (FBA). The stringency of the flux spaces under different optimization objectives, i.e. growth rate maximization, ATP maintenance, and NADH/NADPH formation, was studied with flux variability analysis (FVA) in the different modelled conditions. A previously uncharacterized exopolysaccharide (EPS) produced by C. acetobutylicum was also characterized at the monosaccharide level. The major monosaccharide components of the EPS were 40n-% rhamnose, 34n-% glucose, 13n-% mannose, 10n-% galactose, and 2n-% arabinose. The EPS was found to adsorb butanol, at 70 mg butanol per g EPS at 37°C. Copyright © 2016 Elsevier Ltd. All rights reserved.
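The FBA step mentioned above solves a linear program: maximize an objective flux subject to steady state (S v = 0) and flux bounds. A toy sketch with an invented 3-metabolite network, not the 451-metabolite model of the abstract, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 uptake -> A; R2 A -> B; R3 A -> C; R4 B + C -> biomass.
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  0, -1],    # metabolite B balance
              [0,  0,  1, -1]])   # metabolite C balance
bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10
res = linprog([0, 0, 0, -1],                          # maximize v4 == minimize -v4
              A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print(res.x[3])   # maximal biomass flux: 5.0 (uptake of 10 split between B and C)
```

Genome-scale FBA has exactly this shape, only with hundreds of metabolites and reactions and a biomass equation fitted from experimental data.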
Cotten, Cameron; Reed, Jennifer L
2013-05-01
In recent years, a growing number of metabolic engineering strain design techniques have employed constraint-based modeling to determine metabolic and regulatory network changes which are needed to improve chemical production. These methods use systems-level analysis of metabolism to help guide experimental efforts by identifying deletions, additions, downregulations, and upregulations of metabolic genes that will increase biological production of a desired metabolic product. In this work, we propose a new strain design method with continuous modifications (CosMos) that provides strategies for deletions, downregulations, and upregulations of fluxes that will lead to the production of the desired products. The method is conceptually simple and easy to implement, and can provide additional strategies over current approaches. We found that the method was able to find strain design strategies that required fewer modifications and had larger predicted yields than strategies from previous methods in example and genome-scale networks. Using CosMos, we identified modification strategies for producing a variety of metabolic products, compared strategies derived from Escherichia coli and Saccharomyces cerevisiae metabolic models, and examined how imperfect implementation may affect experimental outcomes. This study gives a powerful and flexible technique for strain engineering and examines some of the unexpected outcomes that may arise when strategies are implemented experimentally.
Cloud Library for Directed Probabilistic Graphical Models
2014-10-01
similar to the jSMILE interface, but it is extended so that the potentially long-running calls are being translated into Hadoop jobs. The...outperforms the Hadoop implementation. For instance, the exponential nature of the number of required independence tests of the PC algorithm...conditioning variables (for constraint-based search based approaches), but the number will still quickly become unacceptable to the point that Hadoop
Exploration of Advanced Probabilistic and Stochastic Design Methods
Mavris, Dimitri N.
2003-01-01
The primary objective of the three-year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in design decision-making, probabilistic modeling, and optimization. The specific focus of the three-year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In the final year of research, the focus was on comparing and contrasting the three methods researched: the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, contained in Task B, focused on exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identifying research needs in the baseline method through implementation exercises. The end result of Task B was documentation of the evolution of the method over time and a technology transfer to the sponsor, such that an initial capability for execution could be obtained by the sponsor. Specifically, the result of year 3 efforts was the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numerical algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
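The stance that a numerical routine should report its own uncertainty can be illustrated in miniature with Monte Carlo integration, whose central-limit-theorem standard error plays the role of the reported uncertainty. This is only a toy instance; the probabilistic-numerics methods in the paper are far more general:

```python
import math
import random

def mc_integrate(f, n=100_000, seed=0):
    """Estimate the integral of f over [0, 1] and report a standard error."""
    rng = random.Random(seed)
    xs = [f(rng.random()) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, math.sqrt(var / n)       # estimate, CLT standard error

est, se = mc_integrate(math.sin)          # true value: 1 - cos(1) ~ 0.4597
print(f"{est:.4f} +/- {2 * se:.4f}")
```

Downstream computations can then propagate the reported uncertainty instead of silently treating the estimate as exact, which is the central idea the abstract advocates.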
Short, Nick, Jr.; Bedet, Jean-Jacques; Bodden, Lee; Boddy, Mark; White, Jim; Beane, John
1994-01-01
The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational since October 1, 1993. Its mission is to support the Earth Observing System (EOS) by providing rapid access to EOS data and analysis products, and to test Earth Observing System Data and Information System (EOSDIS) design concepts. One of the challenges is to ensure quick and easy retrieval of any data archived within the DAAC's Data Archive and Distributed System (DADS). Over the 15-year life of the EOS project, an estimated several Petabytes (10(exp 15)) of data will be permanently stored. Accessing that amount of information is a formidable task that will require innovative approaches. As a precursor of the full EOS system, the GSFC DAAC, with a few Terabits of storage, has implemented a prototype of a constraint-based task and resource scheduler to improve the performance of the DADS. This Honeywell Task and Resource Scheduler (HTRS), developed by Honeywell Technology Center in cooperation with the Information Science and Technology Branch/935, the Code X Operations Technology Program, and the GSFC DAAC, makes better use of limited resources, prevents backlog of data, and provides information about resource bottlenecks and performance characteristics. The prototype, which was developed concurrently with the GSFC Version 0 (V0) DADS, models DADS activities such as ingestion and distribution with priority, precedence, resource requirements (disk and network bandwidth) and temporal constraints. HTRS supports schedule updates, insertions, and retrieval of task information via an Application Program Interface (API). The prototype has demonstrated, with a few examples, the substantial advantages of using HTRS over scheduling algorithms such as a First In First Out (FIFO) queue. The kernel scheduling engine for HTRS, called Kronos, has been successfully applied to several other domains such as space shuttle mission scheduling, demand flow manufacturing, and avionics communications
Directory of Open Access Journals (Sweden)
Grigoriy E Pinchuk
2010-06-01
Full Text Available Shewanellae are gram-negative facultatively anaerobic metal-reducing bacteria commonly found in chemically (i.e., redox) stratified environments. Occupying such niches requires the ability to rapidly acclimate to changes in electron donor/acceptor type and availability; hence, the ability to compete and thrive in such environments must ultimately be reflected in the organization and utilization of electron transfer networks, as well as central and peripheral carbon metabolism. To understand how Shewanella oneidensis MR-1 utilizes its resources, the metabolic network was reconstructed. The resulting network consists of 774 reactions, 783 genes, and 634 unique metabolites and contains biosynthesis pathways for all cell constituents. Using constraint-based modeling, we investigated aerobic growth of S. oneidensis MR-1 on numerous carbon sources. To achieve this, we (i) used experimental data to formulate a biomass equation and estimate cellular ATP requirements, (ii) developed an approach to identify cycles (such as futile cycles and circulations), (iii) classified how reaction usage affects cellular growth, (iv) predicted cellular biomass yields on different carbon sources and compared model predictions to experimental measurements, and (v) used experimental results to refine metabolic fluxes for growth on lactate. The results revealed that aerobic lactate-grown cells of S. oneidensis MR-1 used less efficient enzymes to couple electron transport to proton motive force generation, and possibly operated at least one futile cycle involving malic enzymes. Several examples are provided whereby model predictions were validated by experimental data, in particular the role of serine hydroxymethyltransferase and the glycine cleavage system in the metabolism of one-carbon units, and growth on different sources of carbon and energy. This work illustrates how integration of computational and experimental efforts facilitates the understanding of microbial metabolism at a
Directory of Open Access Journals (Sweden)
Daugherty Sean
2009-09-01
Full Text Available Abstract Background Rhodoferax ferrireducens is a metabolically versatile, Fe(III)-reducing, subsurface microorganism that is likely to play an important role in the carbon and metal cycles in the subsurface. It also has the unique ability to convert sugars to electricity, oxidizing the sugars to carbon dioxide with quantitative electron transfer to graphite electrodes in microbial fuel cells. In order to expand our limited knowledge about R. ferrireducens, the complete genome sequence of this organism was further annotated and then the physiology of R. ferrireducens was investigated with a constraint-based, genome-scale in silico metabolic model and laboratory studies. Results The iterative modeling and experimental approach unveiled exciting, previously unknown physiological features, including an expanded range of substrates that support growth, such as cellobiose and citrate, and provided additional insights into important features such as the stoichiometry of the electron transport chain and the ability to grow via fumarate dismutation. Further analysis explained why R. ferrireducens is unable to grow via photosynthesis or fermentation of sugars like other members of this genus and uncovered novel genes for benzoate metabolism. The genome also revealed that R. ferrireducens is well-adapted for growth in the subsurface because it appears to be capable of dealing with a number of environmental insults, including heavy metals, aromatic compounds, nutrient limitation and oxidative stress. Conclusion This study demonstrates that combining genome-scale modeling with the annotation of a new genome sequence can guide experimental studies and accelerate the understanding of the physiology of under-studied yet environmentally relevant microorganisms.
RAVEN and Dynamic Probabilistic Risk Assessment: Software overview
Energy Technology Data Exchange (ETDEWEB)
Andrea Alfonsi; Cristian Rabiti; Diego Mandelli; Joshua Cogliati; Robert Kinoshita; Antonio Naviglio
2014-09-01
RAVEN is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. The initial development aimed to provide dynamic risk analysis capabilities to the thermo-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory. Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism is achieved by providing Application Programming Interfaces (APIs). These interfaces allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible via input files or python interfaces. RAVEN can investigate the system response by exploring the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces (boundaries separating regions of the input space that lead to system failure), using dynamic supervised learning techniques. The paper presents an overview of the software capabilities and their implementation schemes, followed by some application examples.
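Latin hypercube sampling, one of the input-space exploration schemes mentioned, splits each dimension into n strata and samples each stratum exactly once. A minimal stand-alone sketch (the 2-D toy space and function names are my own, not RAVEN's API):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # one sample per stratum [k/n, (k+1)/n) in every dimension
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])   # decouple the dimensions
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(8, 2, rng)
print(np.sort(np.floor(samples * 8), axis=0))   # each column hits strata 0..7 once
```

Compared to plain Monte Carlo, this guarantees marginal coverage of every stratum, which is why it is popular for expensive system-code runs.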
Probabilistic Aspects in Spoken Document Retrieval
Directory of Open Access Journals (Sweden)
Macherey Wolfgang
2003-01-01
Full Text Available Accessing information in multimedia databases encompasses a wide range of applications in which spoken document retrieval (SDR) plays an important role. In SDR, a set of automatically transcribed speech documents constitutes the files for retrieval, to which a user may address a request in natural language. This paper deals with two probabilistic aspects of SDR. The first part investigates the effect of recognition errors on retrieval performance and examines why recognition errors have only a small effect on retrieval performance. In the second part, we present a new probabilistic approach to SDR that is based on interpolations between document representations. Experiments performed on the TREC-7 and TREC-8 SDR tasks show comparable or even better results for the newly proposed method than for other advanced heuristic and probabilistic retrieval metrics.
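One common probabilistic retrieval scheme in this spirit scores a query under a unigram document language model linearly interpolated with the collection model (Jelinek-Mercer smoothing). This is a generic illustration of interpolated document representations, not necessarily the specific interpolation the paper proposes:

```python
from collections import Counter

def score(query, doc, collection, lam=0.5):
    """Query likelihood under a document model interpolated with the collection model."""
    d, c = Counter(doc), Counter(collection)
    s = 1.0
    for w in query:
        s *= lam * d[w] / len(doc) + (1 - lam) * c[w] / len(collection)
    return s

# two tiny "transcribed documents" (made-up tokens)
docs = [["traffic", "jam", "cluster"], ["speech", "retrieval", "speech"]]
collection = [w for doc in docs for w in doc]
ranked = sorted(docs, key=lambda doc: score(["speech"], doc, collection), reverse=True)
print(ranked[0])   # the transcript containing "speech" ranks first
```

The collection-model term keeps the score nonzero when a recognition error deletes a query word from a transcript, which is one intuition for why such errors hurt retrieval less than expected.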
A Model-Driven Probabilistic Parser Generator
Quesada, Luis; Cortijo, Francisco J
2012-01-01
Existing probabilistic scanners and parsers impose hard constraints on the way lexical and syntactic ambiguities can be resolved. Furthermore, traditional grammar-based parsing tools are limited in the mechanisms they allow for taking context into account. In this paper, we propose a model-driven tool that allows for statistical language models with arbitrary probability estimators. Our work on model-driven probabilistic parsing is built on top of ModelCC, a model-based parser generator, and enables the probabilistic interpretation and resolution of anaphoric, cataphoric, and recursive references in the disambiguation of abstract syntax graphs. To demonstrate the expressive power of ModelCC, we describe the design of a general-purpose natural language parser.
Modal Specifications for Probabilistic Timed Systems
Directory of Open Access Journals (Sweden)
Tingting Han
2013-06-01
Full Text Available Modal automata are a classic formal model for component-based systems that comes equipped with a rich specification theory supporting abstraction, refinement and compositional reasoning. In recent years, quantitative variants of modal automata were introduced for specifying and reasoning about component-based designs for embedded and mobile systems. These respectively generalize modal specification theories for timed and probabilistic systems. In this paper, we define a modal specification language for combined probabilistic timed systems, called abstract probabilistic timed automata, which generalizes existing formalisms. We introduce appropriate syntactic and semantic refinement notions and discuss consistency of our specification language, also with respect to time-divergence. We identify a subclass of our models for which we define the fundamental operations for abstraction, conjunction and parallel composition, and show several compositionality results.
Probabilistic Modeling and Visualization for Bankruptcy Prediction
DEFF Research Database (Denmark)
Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara
2017-01-01
In accounting and finance domains, bankruptcy prediction is of great utility for all economic stakeholders. The challenge of accurately predicting business failure, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we take a probabilistic point-of-view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...
Probabilistic inversion for chicken processing lines
Energy Technology Data Exchange (ETDEWEB)
Cooke, Roger M. [Department of Mathematics, Delft University of Technology, Delft (Netherlands)]. E-mail: r.m.cooke@ewi.tudelft.nl; Nauta, Maarten [Microbiological Laboratory for Health Protection RIVM, Bilthoven (Netherlands); Havelaar, Arie H. [Microbiological Laboratory for Health Protection RIVM, Bilthoven (Netherlands); Fels, Ine van der [Microbiological Laboratory for Health Protection RIVM, Bilthoven (Netherlands)
2006-10-15
We discuss an application of probabilistic inversion techniques to a model of campylobacter transmission in chicken processing lines. Such techniques are indicated when we wish to quantify a model which is new and perhaps unfamiliar to the expert community. In this case there are no measurements for estimating model parameters, and experts are typically unable to give a considered judgment. In such cases, experts are asked to quantify their uncertainty regarding variables which can be predicted by the model. The experts' distributions (after combination) are then pulled back onto the parameter space of the model, a process termed 'probabilistic inversion'. This study illustrates two such techniques, iterative proportional fitting (IPF) and PARmeter fitting for uncertain models (PARFUM). In addition, we illustrate how expert judgement on predicted observable quantities in combination with probabilistic inversion may be used for model validation and/or model criticism.
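The IPF technique mentioned above can be shown in miniature on a toy contingency table: alternately rescale rows and columns until the table matches the target marginals. The two-dimensional example below is purely illustrative; in the paper the targets come from combined expert distributions over model predictions:

```python
import numpy as np

def ipf(table, row_targets, col_targets, iters=100):
    """Rescale rows and columns in turn until marginals match the targets."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match row sums
        t *= (col_targets / t.sum(axis=0))[None, :]   # match column sums
    return t

t = ipf(np.array([[1.0, 2.0], [3.0, 4.0]]),
        row_targets=np.array([0.4, 0.6]),
        col_targets=np.array([0.5, 0.5]))
print(t.round(3))   # a joint table whose marginals are (0.4, 0.6) and (0.5, 0.5)
```

The fixed point preserves the interaction structure of the starting table while enforcing the target marginals, which is the minimum-discrimination-information property that makes IPF attractive for probabilistic inversion.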
Scalable group level probabilistic sparse factor analysis
DEFF Research Database (Denmark)
Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard
2017-01-01
Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...
Probabilistic Grammar: The view from Cognitive Sociolinguistics
Directory of Open Access Journals (Sweden)
Jeroen Claes
2017-06-01
Full Text Available In this paper, I propose that Probabilistic Grammar may benefit from incorporating theoretical insights from Cognitive (Socio)Linguistics. I begin by introducing Cognitive Linguistics. Then, I propose a model of the domain-general cognitive constraints (markedness of coding, statistical preemption, and structural priming) that condition language (variation). Subsequently, three case studies are presented that test the predictions of this model on three distinct alternations in English and Spanish (variable agreement with existential 'haber', variable agreement with existential 'there be', and Spanish subject pronoun expression). For each case study, the model generates empirically correct predictions. I conclude that, with the support of Cognitive Sociolinguistics, Probabilistic Grammar may move beyond description towards explanation. This article is part of the special collection: Probabilistic grammars: Syntactic variation in a comparative perspective
bayesPop: Probabilistic Population Projections
Directory of Open Access Journals (Sweden)
Hana Ševčíková
2016-12-01
Full Text Available We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects.
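The flavor of such probabilistic projections can be sketched with a radically simplified one-rate model: sample uncertain growth rates, propagate each trajectory forward, and summarize the predictive distribution with quantiles. The numbers and the normal growth-rate assumption are invented for illustration and bear no relation to bayesPop's Bayesian hierarchical models of fertility and mortality:

```python
import numpy as np

rng = np.random.default_rng(1)
pop0 = 5.0e6                                      # hypothetical base-year population
growth = rng.normal(0.01, 0.004, size=10_000)     # sampled annual growth rates
pop_2050 = pop0 * (1.0 + growth) ** 30            # 10,000 projected trajectories
lo, med, hi = np.quantile(pop_2050, [0.1, 0.5, 0.9])
print(f"median {med/1e6:.2f}M, 80% interval [{lo/1e6:.2f}M, {hi/1e6:.2f}M]")
```

bayesPop's derived-quantity expression language plays the role of the arithmetic on `pop_2050` here: any function of the sampled trajectories yields its own predictive distribution.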
Probabilistic Transcriptome Assembly and Variant Graph Genotyping
DEFF Research Database (Denmark)
Sibbesen, Jonas Andreas
the resulting sequencing data should be interpreted. This has over the years spurred the development of many probabilistic methods that are capable of modelling different aspects of the sequencing process. Here, I present two such methods that were developed to each tackle a different problem in bioinformatics, together with an application of the latter method to a large Danish sequencing project. The first is a probabilistic method for transcriptome assembly that is based on a novel generative model of the RNA sequencing process and provides confidence estimates on the assembled transcripts. We show that this approach outperforms existing state-of-the-art methods measured using sensitivity and precision on both simulated and real data. The second is a novel probabilistic method that uses exact alignment of k-mers to a set of variant graphs to provide unbiased estimates of genotypes in a population...
Probabilistic Forecasting of the Wave Energy Flux
DEFF Research Database (Denmark)
Pinson, Pierre; Reikard, G.; Bidlot, J.-R.
2012-01-01
Wave energy will certainly have a significant role to play in the deployment of renewable energy generation capacities. As with wind and solar, probabilistic forecasts of wave power over horizons of a few hours to a few days are required for power system operation as well as trading in electricity markets. A methodology for the probabilistic forecasting of the wave energy flux is introduced, based on a log-Normal assumption for the shape of predictive densities. It uses meteorological forecasts (from the European Centre for Medium-range Weather Forecasts – ECMWF) and local wave measurements as input. The parameters of the models involved are adaptively and recursively estimated. The methodology is evaluated for 13 locations around North America over a period of 15 months. The issued probabilistic forecasts substantially outperform the various benchmarks considered, with improvements between 6...
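Under the log-Normal assumption for predictive densities, a median forecast together with a log-scale spread determines every quantile via q = m · exp(σ · z_q). A minimal sketch with made-up numbers (25 kW/m median flux, σ = 0.4), not the paper's estimated parameters:

```python
import math
from statistics import NormalDist

def lognormal_quantile(median, sigma, q):
    """q-quantile of a log-Normal predictive density with the given median."""
    return median * math.exp(sigma * NormalDist().inv_cdf(q))

for q in (0.05, 0.5, 0.95):
    # hypothetical forecast: median wave energy flux 25 kW/m, log-scale spread 0.4
    print(q, round(lognormal_quantile(25.0, 0.4, q), 1))
```

Note the asymmetry: the upper quantile sits farther from the median than the lower one, which matches the skewed, strictly positive nature of energy flux.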
Directory of Open Access Journals (Sweden)
Bushell Michael E
2011-05-01
Full Text Available Abstract Background Constraint-based approaches facilitate the prediction of cellular metabolic capabilities, based, in turn, on predictions of the repertoire of enzymes encoded in the genome. Recently, genome annotations have been used to reconstruct genome scale metabolic reaction networks for numerous species, including Homo sapiens, which allow simulations that provide valuable insights into topics, including predictions of gene essentiality of pathogens, interpretation of genetic polymorphism in metabolic disease syndromes and suggestions for novel approaches to microbial metabolic engineering. These constraint-based simulations are being integrated with the functional genomics portals, an activity that requires efficient implementation of the constraint-based simulations in the web-based environment. Results Here, we present Acorn, an open source (GNU GPL) grid computing system for constraint-based simulations of genome scale metabolic reaction networks within an interactive web environment. The grid-based architecture allows efficient execution of computationally intensive, iterative protocols such as Flux Variability Analysis, which can be readily scaled up as the numbers of models (and users) increase. The web interface uses AJAX, which facilitates efficient model browsing and other search functions, and intuitive implementation of appropriate simulation conditions. Research groups can install Acorn locally and create user accounts. Users can also import models in the familiar SBML format and link reaction formulas to major functional genomics portals of choice. Selected models and simulation results can be shared between different users and made publicly available. Users can construct pathway map layouts and import them into the server using a desktop editor integrated within the system. Pathway maps are then used to visualise numerical results within the web environment. To illustrate these features we have deployed Acorn and created a
Constraint Processing in Lifted Probabilistic Inference
Kisynski, Jacek
2012-01-01
First-order probabilistic models combine representational power of first-order logic with graphical models. There is an ongoing effort to design lifted inference algorithms for first-order probabilistic models. We analyze lifted inference from the perspective of constraint processing and, through this viewpoint, we analyze and compare existing approaches and expose their advantages and limitations. Our theoretical results show that the wrong choice of constraint processing method can lead to exponential increase in computational complexity. Our empirical tests confirm the importance of constraint processing in lifted inference. This is the first theoretical and empirical study of constraint processing in lifted inference.
The probabilistic approach to human reasoning.
Oaksford, M; Chater, N
2001-08-01
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently participants' apparently irrational behaviour is the result of comparing it with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task and syllogistic reasoning.
Probabilistic Design of Wave Energy Devices
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Kofoed, Jens Peter; Ferreira, C.B.
2011-01-01
Wave energy has a large potential for contributing significantly to production of renewable energy. However, the wave energy sector is still not able to deliver cost competitive and reliable solutions. But the sector has already demonstrated several proofs of concept. The design of wave energy...... and advocate for a probabilistic design approach, as it is assumed (in other areas this has been demonstrated) that this leads to more economical designs compared to designs based on deterministic methods. In the present paper a general framework for probabilistic design and reliability analysis of wave energy...
Probabilistic Durability Analysis in Advanced Engineering Design
Directory of Open Access Journals (Sweden)
A. Kudzys
2000-01-01
Full Text Available Expedience of probabilistic durability concepts and approaches in advanced engineering design of building materials, structural members and systems is considered. Target margin values of structural safety and serviceability indices are analyzed and their draft values are presented. Analytical methods of the cumulative coefficient of correlation and the limit transient action effect for calculation of reliability indices are given. The analysis can be used for probabilistic durability assessment of carrying and enclosure metal, reinforced concrete, wood, plastic and masonry structures, both homogeneous and sandwich or composite, and some kinds of equipment. The analysis models can be applied in other engineering fields.
Probabilistic assessment of uncertain adaptive hybrid composites
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1994-01-01
Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.
Quantum logic networks for probabilistic teleportation
Institute of Scientific and Technical Information of China (English)
刘金明; 张永生; 郭光灿
2003-01-01
By means of the primitive operations consisting of single-qubit gates, two-qubit controlled-not gates, Von Neumann measurement and classically controlled operations, we construct efficient quantum logic networks for implementing probabilistic teleportation of a single qubit, a two-particle entangled state, and an N-particle entanglement. Based on the quantum networks, we show that after the partially entangled states are concentrated into maximal entanglement, the above three kinds of probabilistic teleportation are the same as the standard teleportation using the corresponding maximally entangled states as the quantum channels.
Quantum logic networks for probabilistic teleportation
Institute of Scientific and Technical Information of China (English)
刘金明; 张永生; et al.
2003-01-01
By means of the primitive operations consisting of single-qubit gates, two-qubit controlled-not gates, Von Neumann measurement and classically controlled operations, we construct efficient quantum logic networks for implementing probabilistic teleportation of a single qubit, a two-particle entangled state, and an N-particle entanglement. Based on the quantum networks, we show that after the partially entangled states are concentrated into maximal entanglement, the above three kinds of probabilistic teleportation are the same as the standard teleportation using the corresponding maximally entangled states as the quantum channels.
Why are probabilistic laws governing quantum mechanics and neurobiology?
Kröger, H
2004-01-01
We address the question: why are the dynamical laws governing quantum mechanics and neuroscience probabilistic in nature rather than deterministic? We discuss some ideas showing that the probabilistic option offers advantages over the deterministic one.
Why are probabilistic laws governing quantum mechanics and neurobiology?
Kröger, Helmut
2005-08-01
We address the question: why are the dynamical laws governing quantum mechanics and neuroscience probabilistic in nature rather than deterministic? We discuss some ideas showing that the probabilistic option offers advantages over the deterministic one.
A Generative Model for Deep Convolutional Learning
Pu, Yunchen; Yuan, Xin; Carin, Lawrence
2015-01-01
A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.
Directory of Open Access Journals (Sweden)
Mohammad M. Herzallah
2017-06-01
Full Text Available Major depressive disorder (MDD) is the most common non-motor manifestation of Parkinson's disease (PD), affecting 50% of patients. However, little is known about the cognitive correlates of MDD in PD. Using a computer-based cognitive task that dissociates learning from positive and negative feedback, we tested four groups of subjects: (1) patients with PD with comorbid MDD, (2) patients with PD without comorbid MDD, (3) matched patients with MDD alone (without PD), and (4) matched healthy control subjects. Furthermore, we used a mathematical model of decision-making to fit both choice and response time data, allowing us to detect and characterize differences between the groups that are not revealed by cognitive results. The groups did not differ in learning accuracy from negative feedback, but the MDD groups (PD patients with MDD and patients with MDD alone) exhibited a selective impairment in learning accuracy from positive feedback when compared to the non-MDD groups (PD patients without MDD and healthy subjects). However, response time in positive feedback trials in the PD groups (both with and without MDD) was significantly slower than the non-PD groups (MDD and healthy groups). While faster response time usually correlates with poor learning accuracy, it was paradoxical in PD groups, with PD patients with MDD having impaired learning accuracy and PD patients without MDD having intact learning accuracy. Mathematical modeling showed that both MDD groups (PD with MDD and MDD alone) were significantly slower than non-MDD groups in the rate of accumulation of information for stimuli trained by positive feedback, which can lead to lower response accuracy. Conversely, modeling revealed that both PD groups (PD with MDD and PD alone) required more evidence than other groups to make responses, thus leading to slower response times. These results suggest that PD patients with MDD exhibit cognitive profiles with mixed traits characteristic of both MDD and PD
Herzallah, Mohammad M; Khdour, Hussain Y; Taha, Ahmad B; Elmashala, Amjad M; Mousa, Hamza N; Taha, Mohamad B; Ghanim, Zaid; Sehwail, Mahmud M; Misk, Adel J; Balsdon, Tarryn; Moustafa, Ahmed A; Myers, Catherine E; Gluck, Mark A
2017-01-01
Major depressive disorder (MDD) is the most common non-motor manifestation of Parkinson's disease (PD) affecting 50% of patients. However, little is known about the cognitive correlates of MDD in PD. Using a computer-based cognitive task that dissociates learning from positive and negative feedback, we tested four groups of subjects: (1) patients with PD with comorbid MDD, (2) patients with PD without comorbid MDD, (3) matched patients with MDD alone (without PD), and (4) matched healthy control subjects. Furthermore, we used a mathematical model of decision-making to fit both choice and response time data, allowing us to detect and characterize differences between the groups that are not revealed by cognitive results. The groups did not differ in learning accuracy from negative feedback, but the MDD groups (PD patients with MDD and patients with MDD alone) exhibited a selective impairment in learning accuracy from positive feedback when compared to the non-MDD groups (PD patients without MDD and healthy subjects). However, response time in positive feedback trials in the PD groups (both with and without MDD) was significantly slower than the non-PD groups (MDD and healthy groups). While faster response time usually correlates with poor learning accuracy, it was paradoxical in PD groups, with PD patients with MDD having impaired learning accuracy and PD patients without MDD having intact learning accuracy. Mathematical modeling showed that both MDD groups (PD with MDD and MDD alone) were significantly slower than non-MDD groups in the rate of accumulation of information for stimuli trained by positive feedback, which can lead to lower response accuracy. Conversely, modeling revealed that both PD groups (PD with MDD and PD alone) required more evidence than other groups to make responses, thus leading to slower response times. These results suggest that PD patients with MDD exhibit cognitive profiles with mixed traits characteristic of both MDD and PD, furthering
A common fixed point for operators in probabilistic normed spaces
Energy Technology Data Exchange (ETDEWEB)
Ghaemi, M.B. [Faculty of Mathematics, Iran University of Science and Technology, Narmak, Tehran (Iran, Islamic Republic of)], E-mail: mghaemi@iust.ac.ir; Lafuerza-Guillen, Bernardo [Department of Applied Mathematics, University of Almeria, Almeria (Spain)], E-mail: blafuerz@ual.es; Razani, A. [Department of Mathematics, Faculty of Science, I. Kh. International University, P.O. Box 34194-288, Qazvin (Iran, Islamic Republic of)], E-mail: razani@ikiu.ac.ir
2009-05-15
Probabilistic metric spaces were introduced by Karl Menger. Alsina, Schweizer and Sklar gave a general definition of probabilistic normed space based on the definition of Menger [Alsina C, Schweizer B, Sklar A. On the definition of a probabilistic normed space. Aequationes Math 1993;46:91-8]. Here, we consider the equicontinuity of a class of linear operators in probabilistic normed spaces and finally, a common fixed point theorem is proved. An application to quantum mechanics is considered.
Hsu, Anne S; Vitanyi, Paul M B
2010-01-01
There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact generative model underlying a wide class of languages, purely from observing samples of the language. We then describe a recently proposed practical framework, which quantifies natural language learnability, allowing specific learnability predictions to be made for the first time. In previous work, this framework was used to make learnability predictions for a wide variety of linguistic constructions, for which learnability has been much debated. Here, we present a new experiment which tests these learnability predictions. We find that our experimental results support the possibility that these linguistic constructions are acquired probabilistically from cognition-general principles.
Probabilistic Wind Power Forecasting with Hybrid Artificial Neural Networks
DEFF Research Database (Denmark)
Wan, Can; Song, Yonghua; Xu, Zhao
2016-01-01
probabilities of prediction errors provide an alternative yet effective solution. This article proposes a hybrid artificial neural network approach to generate prediction intervals of wind power. An extreme learning machine is applied to conduct point prediction of wind power and estimate model uncertainties...... via a bootstrap technique. Subsequently, the maximum likelihood estimation method is employed to construct a distinct neural network to estimate the noise variance of forecasting results. The proposed approach has been tested on multi-step forecasting of high-resolution (10-min) wind power using...... actual wind power data from Denmark. The numerical results demonstrate that the proposed hybrid artificial neural network approach is effective and efficient for probabilistic forecasting of wind power and has high potential in practical applications....
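The interval-construction idea in the abstract above can be illustrated with a much simpler cousin of the bootstrap step it describes. The sketch below is not the paper's hybrid neural network method; it is a minimal percentile residual bootstrap, assuming a given point forecast and a hypothetical set of past forecast residuals, to show how a prediction interval can be read off resampled errors.

```python
import random

def bootstrap_interval(point_forecast, residuals, alpha=0.1, n_boot=2000, seed=0):
    # Percentile bootstrap: resample past forecast errors around the point
    # forecast and read off the (alpha/2, 1 - alpha/2) empirical quantiles.
    rng = random.Random(seed)
    sims = sorted(point_forecast + rng.choice(residuals) for _ in range(n_boot))
    lo = sims[int(n_boot * alpha / 2)]
    hi = sims[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical point forecast (MW) and residuals from a validation window
point = 10.0
residuals = [-1.2, -0.4, 0.0, 0.3, 1.1]
lo, hi = bootstrap_interval(point, residuals)
```

The paper's approach additionally models the noise variance with a separate maximum-likelihood network; this sketch only captures the resampling ingredient.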
Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.
He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej
2011-12-01
Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level 3 basic linear algebra subprograms directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose another two fast parallel methods: α-SNMF and β -SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
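The multiplicative update scheme described above can be sketched compactly. The following is a minimal pure-Python illustration of a symmetric NMF update of the form W ← W ∘ (1 − β + β (AW)/(WWᵀW)) with β = 1/2, in the spirit of the β-SNMF variant the abstract mentions; the matrix sizes, the toy data, and the stopping rule are illustrative assumptions, not the paper's experimental setup.

```python
import random

def matmul(X, Y):
    cols = list(zip(*Y))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in X]

def transpose(X):
    return [list(r) for r in zip(*X)]

def frob_err(A, W):
    # Squared Frobenius norm of A - W W^T
    WWt = matmul(W, transpose(W))
    n = len(A)
    return sum((A[i][j] - WWt[i][j]) ** 2 for i in range(n) for j in range(n))

def snmf(A, k, iters=300, beta=0.5, seed=0):
    # Multiplicative update: W <- W * (1 - beta + beta * (AW) / (W W^T W))
    rng = random.Random(seed)
    n = len(A)
    W = [[rng.uniform(0.1, 1.0) for _ in range(k)] for _ in range(n)]
    for _ in range(iters):
        AW = matmul(A, W)
        WWtW = matmul(matmul(W, transpose(W)), W)
        W = [[W[i][j] * (1 - beta + beta * AW[i][j] / (WWtW[i][j] + 1e-12))
              for j in range(k)] for i in range(n)]
    return W

# Toy symmetric nonnegative matrix with an exact rank-2 factorization
data_rng = random.Random(1)
V = [[data_rng.uniform(0.0, 1.0) for _ in range(2)] for _ in range(4)]
A = matmul(V, transpose(V))

W_init = snmf(A, k=2, iters=0)    # random starting point
W_fit = snmf(A, k=2, iters=300)   # after multiplicative updates
```

The update keeps W nonnegative by construction (every factor is positive), which is what makes the resulting WWᵀ usable as a probabilistic clustering affinity.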
Probabilistic Modeling of Aircraft Trajectories for Dynamic Separation Volumes
Lewis, Timothy A.
2016-01-01
With a proliferation of new and unconventional vehicles and operations expected in the future, the ab initio airspace design will require new approaches to trajectory prediction for separation assurance and other air traffic management functions. This paper presents an approach to probabilistic modeling of the trajectory of an aircraft when its intent is unknown. The approach uses a set of feature functions to constrain a maximum entropy probability distribution based on a set of observed aircraft trajectories. This model can be used to sample new aircraft trajectories to form an ensemble reflecting the variability in an aircraft's intent. The model learning process ensures that the variability in this ensemble reflects the behavior observed in the original data set. Computational examples are presented.
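The maximum entropy construction described above can be illustrated in miniature. The sketch below is not the paper's trajectory model: it fits a maximum entropy distribution over a hypothetical discrete set of heading changes, with two assumed feature functions, by matching model feature expectations to empirical ones, and then samples an ensemble, mirroring the observe–fit–sample loop the abstract describes.

```python
import math, random

# Discrete "heading change" actions and feature functions (both hypothetical)
actions = [-2, -1, 0, 1, 2]

def feats(a):
    return [a, a * a]   # mean and spread constraints on the maxent model

def model_probs(lam):
    # p(a) proportional to exp(sum_k lam_k * f_k(a))
    w = [math.exp(sum(l * f for l, f in zip(lam, feats(a)))) for a in actions]
    z = sum(w)
    return [x / z for x in w]

def fit(observed, lr=0.1, iters=2000):
    # Gradient ascent on the log-likelihood: drive model feature
    # expectations toward the empirical ones from observed trajectories.
    emp = [sum(feats(a)[k] for a in observed) / len(observed) for k in range(2)]
    lam = [0.0, 0.0]
    for _ in range(iters):
        p = model_probs(lam)
        mod = [sum(pi * feats(a)[k] for pi, a in zip(p, actions)) for k in range(2)]
        lam = [l + lr * (e - m) for l, e, m in zip(lam, emp, mod)]
    return lam

observed = [0, 0, 1, -1, 0, 1, 0, 0, -1, 0]   # toy observed heading changes
lam = fit(observed)

# Sample an ensemble of new actions reflecting the learned variability
rng = random.Random(0)
ensemble = rng.choices(actions, weights=model_probs(lam), k=100)
```

At convergence the sampled ensemble's feature statistics match the observed data's, which is the sense in which the learned variability "reflects the behavior observed in the original data set."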
Directory of Open Access Journals (Sweden)
Gang Qiao
2016-05-01
Full Text Available Landslides are one of the most destructive geo-hazards and can pose great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research to obtain critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for the 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding will reduce the texture quality of the stereo images, bringing about difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraint-based robust image matching approach for poor-texture close-range images, particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images for generation of scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric
Probabilistic Programming: A True Verification Challenge
Katoen, Joost-Pieter; Finkbeiner, Bernd; Pu, Geguang; Zhang, Lijun
2015-01-01
Probabilistic programs [6] are sequential programs, written in languages like C, Java, Scala, or ML, with two added constructs: (1) the ability to draw values at random from probability distributions, and (2) the ability to condition values of variables in a program through observations. For a compr
Probabilistic methods for service life predictions
Siemes, A.J.M.
1999-01-01
Nowadays it is commonly accepted that the safety of structures should be expressed in terms of reliability, that is, as the probability of failure. In the literature [1, 2, 3, and 4] the bases have been given for the calculation of the failure probability. Making probabilistic calculations can be don
Probabilistic Meteorological Characterization for Turbine Loads
DEFF Research Database (Denmark)
Kelly, Mark C.; Larsen, Gunner Chr.; Dimitrov, Nikolay Krasimirov
2014-01-01
Beyond the existing, limited IEC prescription to describe fatigue loads on wind turbines, we look towards probabilistic characterization of the loads via analogous characterization of the atmospheric flow, particularly for today's "taller" turbines with rotors well above the atmospheric surface....... These are used as input to loads calculation, and with a statistical loads output description, they allow for improved design and loads calculations....
On Probabilistic Automata in Continuous Time
DEFF Research Database (Denmark)
Eisentraut, Christian; Hermanns, Holger; Zhang, Lijun
2010-01-01
their compositionality properties. Weak bisimulation is partly oblivious to the probabilistic branching structure, in order to reflect some natural equalities in this spectrum of models. As a result, the standard way to associate a stochastic process to a generalised stochastic Petri net can be proven sound with respect...
Pigeons' Discounting of Probabilistic and Delayed Reinforcers
Green, Leonard; Myerson, Joel; Calvert, Amanda L.
2010-01-01
Pigeons' discounting of probabilistic and delayed food reinforcers was studied using adjusting-amount procedures. In the probability discounting conditions, pigeons chose between an adjusting number of food pellets contingent on a single key peck and a larger, fixed number of pellets contingent on completion of a variable-ratio schedule. In the…
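The two kinds of discounting studied in this literature are commonly summarized by hyperbolic forms: V = A/(1 + kD) for delay D and V = A/(1 + hθ) for odds against θ = (1 − p)/p. The snippet below is a generic illustration of those standard forms, with made-up parameter values, not the adjusting-amount procedure or fitted parameters from this study.

```python
def discounted_delay(amount, delay, k=0.1):
    # Hyperbolic delay discounting: V = A / (1 + k * D)
    return amount / (1 + k * delay)

def discounted_prob(amount, p, h=0.5):
    # Probability discounting against the odds: theta = (1 - p) / p
    theta = (1 - p) / p
    return amount / (1 + h * theta)

delay_values = [discounted_delay(32, d) for d in (0, 5, 10, 20)]
prob_values = [discounted_prob(32, p) for p in (1.0, 0.75, 0.5, 0.25)]
```

Both sequences decrease monotonically: subjective value falls as the reward becomes more delayed or less probable, which is the behavioral regularity the adjusting-amount procedures are designed to measure.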
Enhancing Automated Test Selection in Probabilistic Networks
Sent, D.; van der Gaag, L.C.; Bellazzi, R; Abu-Hanna, A; Hunter, J
2007-01-01
Most test-selection algorithms currently in use with probabilistic networks select variables myopically, that is, test variables are selected sequentially, on a one-by-one basis, based upon expected information gain. While myopic test selection is not realistic for many medical applications, non-myo
Relevance feedback in probabilistic multimedia retrieval
Boldareva, L.; Hiemstra, D.; Jonker, W.
2003-01-01
In this paper we explore a new view on data organisation and retrieval in a (multimedia) collection. We use a probabilistic framework for indexing and interactive retrieval of the data, which enables us to fill the semantic gap. Semi-automated experiments with the TREC-2002 video collection showed that our ap
Sampling Techniques for Probabilistic Roadmap Planners
Geraerts, R.J.; Overmars, M.H.
2004-01-01
The probabilistic roadmap approach is a commonly used motion planning technique. A crucial ingredient of the approach is a sampling algorithm that samples the configuration space of the moving object for free configurations. Over the past decade many sampling techniques have been proposed. It is
Probabilistic Damage Stability Calculations for Ships
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
1996-01-01
The aim of these notes is to provide background material for the present probabilistic damage stability rules for dry cargo ships. The formulas for the damage statistics are derived and shortcomings as well as possible improvements are discussed. The advantage of the definition of fictitious...
Strong Ideal Convergence in Probabilistic Metric Spaces
Indian Academy of Sciences (India)
Celaleddin Şençimen; Serpil Pehlivan
2009-06-01
In the present paper we introduce the concepts of strongly ideal convergent sequence and strong ideal Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong ideal limit points and the strong ideal cluster points of a sequence in this space and investigate some properties of these concepts.
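For readers new to this setting, the standard Menger-style definitions behind the abstract can be sketched as follows; the paper itself should be consulted for the precise strong-topology and ideal-convergence machinery.

```latex
% A probabilistic metric space assigns to each pair (x, y) a distribution
% function F_{xy}, read as F_{xy}(t) = probability that the distance
% between x and y is less than t.
\begin{align*}
  &F_{xy}(0) = 0, \qquad F_{xy} = F_{yx},\\
  &F_{xy} = H \iff x = y \quad\text{($H$ the unit step at $0$)},\\
  &F_{xz}(s + t) \ge T\bigl(F_{xy}(s),\, F_{yz}(t)\bigr)
  \quad\text{(Menger triangle inequality, $T$ a t-norm)}.
\end{align*}
\text{The strong topology is generated by the neighbourhoods }
N_x(\varepsilon, \lambda) = \{\, y : F_{xy}(\varepsilon) > 1 - \lambda \,\}.
```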
Financial Markets Analysis by Probabilistic Fuzzy Modelling
J.H. van den Berg (Jan); W.-M. van den Bergh (Willem-Max); U. Kaymak (Uzay)
2003-01-01
For successful trading in financial markets, it is important to develop financial models where one can identify different states of the market for modifying one's actions. In this paper, we propose to use probabilistic fuzzy systems for this purpose. We concentrate on Takagi–Sugeno (
Probabilistic decision graphs for optimization under uncertainty
DEFF Research Database (Denmark)
Jensen, Finn V.; Nielsen, Thomas Dyhre
2011-01-01
This paper provides a survey on probabilistic decision graphs for modeling and solving decision problems under uncertainty. We give an introduction to influence diagrams, which is a popular framework for representing and solving sequential decision problems with a single decision maker. As the me...
Relevance feedback in probabilistic multimedia retrieval
Boldareva, L.; Hiemstra, Djoerd; Jonker, Willem
2003-01-01
In this paper we explore a new view on data organisation and retrieval in a (multimedia) collection. We use a probabilistic framework for indexing and interactive retrieval of the data, which enables us to fill the semantic gap. Semi-automated experiments with the TREC-2002 video collection showed that our ap
Probabilistic safety goals. Phase 3 - Status report
Energy Technology Data Exchange (ETDEWEB)
Holmberg, J.-E. (VTT (Finland)); Knochenhauer, M. (Relcon Scandpower AB, Sundbyberg (Sweden))
2009-07-15
The first phase of the project (2006) described the status, concepts and history of probabilistic safety goals for nuclear power plants. The second and third phases (2007-2008) have provided guidance related to the resolution of some of the problems identified, and resulted in a common understanding regarding the definition of safety goals. The basic aim of phase 3 (2009) has been to increase the scope and level of detail of the project, and to start preparations of a guidance document. Based on the conclusions from the previous project phases, the following issues have been covered: 1) Extension of international overview. Analysis of results from the questionnaire performed within the ongoing OECD/NEA WGRISK activity on probabilistic safety criteria, including participation in the preparation of the working report for OECD/NEA/WGRISK (to be finalised in phase 4). 2) Use of subsidiary criteria and relations between these (to be finalised in phase 4). 3) Numerical criteria when using probabilistic analyses in support of deterministic safety analysis (to be finalised in phase 4). 4) Guidance for the formulation, application and interpretation of probabilistic safety criteria (to be finalised in phase 4). (LN)
Probabilistic Resource Analysis by Program Transformation
DEFF Research Database (Denmark)
Kirkeby, Maja Hanne; Rosendahl, Mads
2016-01-01
The aim of a probabilistic resource analysis is to derive a probability distribution of possible resource usage for a program from a probability distribution of its input. We present an automated multi-phase rewriting based method to analyze programs written in a subset of C. It generates...
A Probabilistic Framework for Curve Evolution
DEFF Research Database (Denmark)
Dahl, Vedrana Andersen
2017-01-01
approach include ability to handle textured images, simple generalization to multiple regions, and efficiency in computation. We test our probabilistic framework in combination with parametric (snakes) and geometric (level-sets) curves. The experimental results on composed and natural images demonstrate...
A Comparative Study of Probabilistic Roadmap Planners
Geraerts, R.J.; Overmars, M.H.
2004-01-01
The probabilistic roadmap approach is one of the leading motion planning techniques. Over the past eight years the technique has been studied by many different researchers. This has led to a large number of variants of the approach, each with its own merits. It is difficult to compare the different
Dialectical Multivalued Logic and Probabilistic Theory
Directory of Open Access Journals (Sweden)
José Luis Usó Doménech
2017-02-01
Full Text Available There are two probabilistic algebras: one for classical probability and the other for quantum mechanics. Naturally, it is the relation to the object that decides, as in the case of logic, which algebra is to be used. From a paraconsistent multivalued logic therefore, one can derive a probability theory, adding the correspondence between truth value and fortuity.
Balkanization and Unification of Probabilistic Inferences
Yu, Chong-Ho
2005-01-01
Many research-related classes in social sciences present probability as a unified approach based upon mathematical axioms, but neglect the diversity of various probability theories and their associated philosophical assumptions. Although currently the dominant statistical and probabilistic approach is the Fisherian tradition, the use of Fisherian…
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
Probabilistic thinking is very important in human life, especially in responding to situations that may occur or that contain elements of uncertainty. It is necessary to develop students' probabilistic thinking as early as elementary school by teaching probability. Based on the mathematics curriculum in Indonesia, probability is first introduced to ninth grade students, though some research showed that low-grade students were successful in solving probability tasks, even in preschool. This study aims to explore the probabilistic thinking of elementary school students with high and low math ability in solving probability tasks. A qualitative approach was chosen to describe students' probabilistic thinking in depth. The results showed that high and low math ability students differed in responding to 1- and 2-dimensional sample space tasks, and to probability comparison tasks of drawing markers and contextual tasks. The representations used by high and low math ability students also differed in responding to the contextual probability-of-an-event task and the probability comparison task of rotating spinners. This study serves as a reference for mathematics curriculum developers of elementary schools in Indonesia, in this case to introduce probability material and teach probability through spinners as media in learning.
Probabilistic Distance for Mixtures of Independent Component Analyzers.
Safont, Gonzalo; Salazar, Addisson; Vergara, Luis; Gomez, Enriqueta; Villanueva, Vicente
2017-02-24
Independent component analysis (ICA) is a blind source separation technique where data are modeled as linear combinations of several independent non-Gaussian sources. The independence and linear restrictions are relaxed using several ICA mixture models (ICAMMs), obtaining a two-layer artificial neural network structure. This allows for dependence between sources of different classes, and thus, a myriad of multidimensional probability density functions can be accurately modeled. This paper proposes a new probabilistic distance (PDI) between the parameters learned for two ICAMMs. The PDI is computed explicitly, unlike the popular Kullback-Leibler divergence (KLD) and other similar metrics, removing the need for numerical integration. Furthermore, the PDI is symmetric and bounded within 0 and 1, which enables its use as a posterior probability in fusion approaches. In this paper, the PDI is employed for change detection by measuring the distance between two ICAMMs learned in consecutive time windows. The changes might be associated with relevant states from a process under analysis that are explicitly reflected in the learned ICAMM parameters. The proposed distance was tested in two challenging applications using simulated and real data: 1) detecting flaws in materials using ultrasounds and 2) detecting changes in electroencephalography signals from humans performing neuropsychological tests. The results demonstrate that the PDI outperforms the KLD in change-detection capabilities.
Agent-Oriented Probabilistic Logic Programming
Institute of Scientific and Technical Information of China (English)
Jie Wang; Shi-Er Ju; Chun-Nian Liu
2006-01-01
Currently, agent-based computing is an active research area, and great efforts have been made towards agent-oriented programming from both theoretical and practical viewpoints. However, most of this work assumes that there is no uncertainty in agents' mental states or in their environment. In other words, under this assumption agent developers can only specify how an agent acts when it is 100% sure about what is true or false. In this paper, this unrealistic assumption is removed and a new agent-oriented probabilistic logic programming language is proposed, which can deal with uncertain information about the world. The programming language is based on a combination of features of probabilistic logic programming and imperative programming.
Probabilistic forecasting and Bayesian data assimilation
Reich, Sebastian
2015-01-01
In this book the authors describe the principles and methods behind probabilistic forecasting and Bayesian data assimilation. Instead of focusing on particular application areas, the authors adopt a general dynamical systems approach, with a profusion of low-dimensional, discrete-time numerical examples designed to build intuition about the subject. Part I explains the mathematical framework of ensemble-based probabilistic forecasting and uncertainty quantification. Part II is devoted to Bayesian filtering algorithms, from classical data assimilation algorithms such as the Kalman filter, variational techniques, and sequential Monte Carlo methods, through to more recent developments such as the ensemble Kalman filter and ensemble transform filters. The McKean approach to sequential filtering in combination with coupling of measures serves as a unifying mathematical framework throughout Part II. Assuming only some basic familiarity with probability, this book is an ideal introduction for graduate students in ap...
Probabilistic Parsing Using Left Corner Language Models
Manning, C D; Manning, Christopher D.; Carpenter, Bob
1997-01-01
We introduce a novel parser based on a probabilistic version of a left-corner parser. The left-corner strategy is attractive because rule probabilities can be conditioned on both top-down goals and bottom-up derivations. We develop the underlying theory and explain how a grammar can be induced from analyzed data. We show that the left-corner approach provides an advantage over simple top-down probabilistic context-free grammars in parsing the Wall Street Journal using a grammar induced from the Penn Treebank. We also conclude that the Penn Treebank provides a fairly weak testbed due to the flatness of its bracketings and to the obvious overgeneration and undergeneration of its induced grammar.
Probabilistic Universality in two-dimensional Dynamics
Lyubich, Mikhail
2011-01-01
In this paper we continue to explore infinitely renormalizable Hénon maps with small Jacobian. It was shown in [CLM] that, contrary to the one-dimensional intuition, the Cantor attractor of such a map is non-rigid and the conjugacy with the one-dimensional Cantor attractor is at most 1/2-Hölder. Another formulation of this phenomenon is that the scaling structure of the Hénon Cantor attractor differs from its one-dimensional counterpart. However, in this paper we prove that the weight assigned by the canonical invariant measure to these bad spots tends to zero on microscopic scales. This phenomenon is called Probabilistic Universality. It implies, in particular, that the Hausdorff dimension of the canonical measure is universal. In this way, universality and rigidity phenomena of one-dimensional dynamics assume a probabilistic nature in the two-dimensional world.
Efficient Probabilistic Inference with Partial Ranking Queries
Huang, Jonathan; Guestrin, Carlos E
2012-01-01
Distributions over rankings are used to model data in various settings such as preference analysis and political elections. The factorial size of the space of rankings, however, typically forces one to make structural assumptions, such as smoothness, sparsity, or probabilistic independence about these underlying distributions. We approach the modeling problem from the computational principle that one should make structural assumptions which allow for efficient calculation of typical probabilistic queries. For ranking models, "typical" queries predominantly take the form of partial ranking queries (e.g., given a user's top-k favorite movies, what are his preferences over remaining movies?). In this paper, we argue that riffled independence factorizations proposed in recent literature [7, 8] are a natural structural assumption for ranking distributions, allowing for particularly efficient processing of partial ranking queries.
A probabilistic model of RNA conformational space
DEFF Research Database (Denmark)
Frellsen, Jes; Moltke, Ida; Thiim, Martin;
2009-01-01
The increasing importance of non-coding RNA in biology and medicine has led to a growing interest in the problem of RNA 3-D structure prediction. As is the case for proteins, RNA 3-D structure prediction methods require two key ingredients: an accurate energy function and a conformational sampling......, the discrete nature of the fragments necessitates the use of carefully tuned, unphysical energy functions, and their non-probabilistic nature impairs unbiased sampling. We offer a solution to the sampling problem that removes these important limitations: a probabilistic model of RNA structure that allows...... conformations for 9 out of 10 test structures, solely using coarse-grained base-pairing information. In conclusion, the method provides a theoretical and practical solution for a major bottleneck on the way to routine prediction and simulation of RNA structure and dynamics in atomic detail....
Significance testing as perverse probabilistic reasoning
Directory of Open Access Journals (Sweden)
Westover Kenneth D
2011-02-01
Truth claims in the medical literature rely heavily on statistical significance testing. Unfortunately, most physicians misunderstand the underlying probabilistic logic of significance tests and consequently often misinterpret their results. This near-universal misunderstanding is highlighted by means of a simple quiz which we administered to 246 physicians at two major academic hospitals, on which the proportion of incorrect responses exceeded 90%. A solid understanding of the fundamental concepts of probability theory is becoming essential to the rational interpretation of medical information. This essay provides a technically sound review of these concepts that is accessible to a medical audience. We also briefly review the debate in the cognitive sciences regarding physicians' aptitude for probabilistic inference.
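The misinterpretation the essay targets can be made concrete with Bayes' rule: the probability that a "significant" finding reflects a true hypothesis depends on the prior plausibility, the test's power, and the significance threshold. The numbers below are illustrative assumptions, not from the essay.

```python
# Hypothetical numeric illustration: P(hypothesis true | p < alpha) via Bayes' rule.
def posterior_true_given_significant(prior, power=0.8, alpha=0.05):
    """Probability the hypothesis is true given a significant test result."""
    p_significant = power * prior + alpha * (1 - prior)  # total prob. of significance
    return power * prior / p_significant

# With only a 10% prior, a "significant" result is far from 95% certain:
print(round(posterior_true_given_significant(0.10), 3))
```

With a 10% prior this yields about 0.64, not the 0.95 that many quiz respondents intuitively expect.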
A Bayesian Concept Learning Approach to Crowdsourcing
DEFF Research Database (Denmark)
Viappiani, Paolo Renato; Zilles, Sandra; Hamilton, Howard J.
2011-01-01
We develop a Bayesian approach to concept learning for crowdsourcing applications. A probabilistic belief over possible concept definitions is maintained and updated according to (noisy) observations from experts, whose behaviors are modeled using discrete types. We propose recommendation...
Probabilistic, meso-scale flood loss modelling
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, let alone for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al., submitted). The application of bagging-decision-tree-based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models; on the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
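The bagging idea behind such probabilistic loss models can be sketched in a few lines: each bootstrap resample of the training data trains a simple learner, and the spread of the ensemble's predictions yields a loss distribution rather than a point estimate. The learner and the (water depth, relative loss) data below are toy assumptions, not the authors' BT-FLEMO model.

```python
import random
import statistics

def predict_1nn(train, depth):
    """Predict loss with a one-nearest-neighbour stand-in for a decision tree."""
    return min(train, key=lambda xy: abs(xy[0] - depth))[1]

def bagged_loss_distribution(train, depth, n_models=200, seed=42):
    """Train n_models learners on bootstrap resamples; return their predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        preds.append(predict_1nn(boot, depth))
    return preds

# Toy (water depth [m], relative loss) pairs -- illustrative only.
data = [(0.2, 0.05), (0.5, 0.12), (1.0, 0.25), (1.5, 0.40), (2.0, 0.55)]
dist = bagged_loss_distribution(data, depth=1.2)
print(statistics.mean(dist), min(dist), max(dist))
```

The min-max spread of `dist` is exactly the kind of prediction-uncertainty information that deterministic stage-damage functions cannot provide.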
Incorporating psychological influences in probabilistic cost analysis
Energy Technology Data Exchange (ETDEWEB)
Kujawski, Edouard; Alvaro, Mariana; Edwards, William
2004-01-08
Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world "Money Allocated Is Money Spent" (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. Psychological influences such as overconfidence in assessing uncertainties and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for
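The combination of three-parameter Weibull cost elements and the MAIMS principle can be sketched as a small Monte Carlo simulation. All parameter values below are toy assumptions (the paper elicits them from experts), and correlations between elements are omitted for brevity.

```python
import random

def sample_cost(location, scale, shape, rng):
    """Three-parameter Weibull draw: location shift plus scaled Weibull(shape)."""
    return location + scale * rng.weibullvariate(1.0, shape)

def project_cost(elements, rng):
    """elements: list of (location, scale, shape, allocated_budget) tuples."""
    total = 0.0
    for loc, scale, shape, budget in elements:
        cost = sample_cost(loc, scale, shape, rng)
        # MAIMS principle: underruns are not recovered, overruns are passed on.
        total += max(cost, budget)
    return total

rng = random.Random(0)
elements = [(1.0, 0.5, 1.5, 1.2), (2.0, 0.8, 1.5, 2.3), (0.5, 0.3, 1.5, 0.6)]
samples = sorted(project_cost(elements, rng) for _ in range(10000))
p80 = samples[int(0.8 * len(samples))]  # 80th-percentile project cost
print(round(p80, 2))
```

Because of the `max(cost, budget)` step, every simulated total is at least the sum of the allocated budgets, which is precisely why MAIMS pushes realistic estimates above the "ideal" analysis.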
Probabilistic Output Analysis by Program Manipulation
Rosendahl, Mads; Kirkeby, Maja H.
2015-01-01
The aim of a probabilistic output analysis is to derive a probability distribution of possible output values for a program from a probability distribution of its input. We present a method for performing static output analysis, based on program transformation techniques. It generates a probability function as a possibly uncomputable expression in an intermediate language. This program is then analyzed, transformed, and approximated. The result is a closed form expression that computes an over...
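The goal stated above can be illustrated with a minimal, assumed example: when the input distribution is finite, the output distribution is the exact pushforward of the input distribution through the program. The toy program below is of course far simpler than the intermediate-language expressions the method handles.

```python
from collections import defaultdict

def program(x):
    """Toy 'program' under analysis."""
    return x * x % 5

def output_distribution(input_dist, prog):
    """Push a finite input distribution through prog to get the output distribution."""
    out = defaultdict(float)
    for value, prob in input_dist.items():
        out[prog(value)] += prob  # mass of all inputs mapping to the same output
    return dict(out)

uniform_0_4 = {x: 0.2 for x in range(5)}
print(output_distribution(uniform_0_4, program))  # → {0: 0.2, 1: 0.4, 4: 0.4}
```

The static analysis in the paper does this symbolically, producing a (possibly approximated) closed-form probability function instead of enumerating inputs.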
Bayesian Probabilistic Projection of International Migration.
Azose, Jonathan J; Raftery, Adrian E
2015-10-01
We propose a method for obtaining joint probabilistic projections of migration for all countries, broken down by age and sex. Joint trajectories for all countries are constrained to satisfy the requirement of zero global net migration. We evaluate our model using out-of-sample validation and compare point projections to the projected migration rates from a persistence model similar to the method used in the United Nations' World Population Prospects, and also to a state-of-the-art gravity model.
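The zero-global-net-migration constraint can be sketched with toy numbers (not the authors' model or data): raw projected net-migration counts are adjusted so that world in- and out-flows cancel, here by subtracting a population-weighted share of the global excess.

```python
def enforce_zero_net(net_counts, populations):
    """Adjust net-migration counts so they sum to zero, weighting by population."""
    excess = sum(net_counts)
    total_pop = sum(populations)
    return [n - excess * p / total_pop for n, p in zip(net_counts, populations)]

raw = [120.0, -50.0, -40.0]   # thousands of net migrants (illustrative)
pops = [300.0, 80.0, 20.0]    # populations in millions (illustrative)
adjusted = enforce_zero_net(raw, pops)
print(adjusted, round(sum(adjusted), 9))
```

In the actual method this adjustment is applied jointly across all countries and trajectories, so every sampled future satisfies the constraint.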
Treatment of Uncertainties in Probabilistic Tsunami Hazard
Thio, H. K.
2012-12-01
Over the last few years, we have developed a framework for producing probabilistic tsunami inundation maps, which includes comprehensive quantification of earthquake recurrence as well as uncertainties, and applied it to the development of a tsunami hazard map of California. The various uncertainties in tsunami source and propagation models are an integral part of a comprehensive probabilistic tsunami hazard analysis (PTHA), and often drive the hazard at low probability levels (i.e., long return periods). There is no unique manner in which uncertainties are included in the analysis, although in general we distinguish between "natural" or aleatory variability, such as slip distribution and event magnitude, and uncertainties due to an incomplete understanding of the behavior of the earth, called epistemic uncertainties, such as scaling relations and rupture segmentation. Aleatory uncertainties are typically included through integration over distribution functions based on regression analyses, whereas epistemic uncertainties are included using logic trees. We will discuss how the different uncertainties were included in our recent probabilistic tsunami inundation maps for California, and their relative importance in the final results. Including these uncertainties in offshore exceedance wave heights is straightforward, but the problem becomes more complicated once the non-linearity of near-shore propagation and inundation is encountered. By using the probabilistic offshore wave heights as the input level for the inundation models, the uncertainties up to that point can be included in the final maps. PTHA provides a consistent analysis of tsunami hazard and will become an important tool in diverse areas such as coastal engineering and land use planning. The inclusive nature of the analysis, where few assumptions are made a priori as to which sources are significant, means that a single analysis can provide a comprehensive view of the hazard and its dominant sources
PRISMATIC: Unified Hierarchical Probabilistic Verification Tool
2011-09-01
Probabilistic and quantum finite automata with postselection
Yakaryilmaz, Abuzer
2011-01-01
We prove that endowing a real-time probabilistic or quantum computer with the ability of postselection increases its computational power. For this purpose, we provide a new model of finite automata with postselection, and compare it with the model of Lāce et al. We examine the related language classes, and also establish separations between the classical and quantum versions, and between the zero-error and bounded-error modes of recognition in this model.
Reinforcement Learning in Robotics: Applications and Real-World Challenges
Petar Kormushev; Sylvain Calinon; Darwin G Caldwell
2013-01-01
In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt and reproduce tasks with dynamically changing constraints based on exploration and autonomous learning. We give a summary of the state-of-the-art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples for the applicatio...
Utilizing Probabilistic Linear Equations in Cube Attacks
Institute of Scientific and Technical Information of China (English)
Yuan Yao; Bin Zhang; Wen-Ling Wu
2016-01-01
Cube attacks, proposed by Dinur and Shamir at EUROCRYPT 2009, have shown huge power against stream ciphers. In the original cube attacks, a linear system in the secret key bits is exploited for key recovery. However, we find that a number of equations claimed to be linear in previous literature are actually nonlinear and do not fit into the theoretical framework of cube attacks. Moreover, cube attacks are hard to apply if linear equations are rare. It is therefore worthwhile to make use of probabilistic linear equations, namely nonlinear superpolys that can be effectively approximated by linear expressions. In this paper, we suggest a way to test for and utilize these probabilistic linear equations, thus extending cube attacks to a wider scope. Concretely, we employ the standard parameter estimation approach and the sequential probability ratio test (SPRT) for linearity testing in the preprocessing phase, and use maximum likelihood decoding (MLD) to solve the probabilistic linear equations in the online phase. As an application, we exhibit a new attack against 672 rounds of Trivium that reduces the number of key bits to search by 7.
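This is not the paper's SPRT machinery, but the underlying probabilistic linearity check it builds on can be sketched directly: a superpoly p is (close to) linear over GF(2) iff p(x) ⊕ p(y) ⊕ p(x ⊕ y) ⊕ p(0) = 0 holds for (almost) all random x, y. The two example superpolys are toy stand-ins.

```python
import random

def linearity_score(poly, n_vars, trials=1000, seed=1):
    """Fraction of random (x, y) pairs passing the GF(2) linearity identity."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(trials):
        x = rng.getrandbits(n_vars)
        y = rng.getrandbits(n_vars)
        if poly(x) ^ poly(y) ^ poly(x ^ y) ^ poly(0) == 0:
            passed += 1
    return passed / trials

linear = lambda x: (x ^ (x >> 1)) & 1     # XOR of two bits: exactly linear
nonlinear = lambda x: (x & (x >> 1)) & 1  # AND of two bits: nonlinear
print(linearity_score(linear, 2), linearity_score(nonlinear, 2))
```

A truly linear superpoly passes every trial; the AND superpoly passes only about 62% of the time, which is the kind of "probabilistic linear equation" the paper then feeds to maximum likelihood decoding.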
Symbolic Computing in Probabilistic and Stochastic Analysis
Directory of Open Access Journals (Sweden)
Kamiński Marcin
2015-12-01
The main aim is to present recent developments in applications of symbolic computing in probabilistic and stochastic analysis, using the example of the well-known MAPLE system. The key theoretical methods discussed are (i) analytical derivations, (ii) the classical Monte Carlo simulation approach, (iii) the stochastic perturbation technique, and (iv) some semi-analytical approaches. It is demonstrated in particular how to use the basic symbolic tools implemented in any such system to derive the basic equations of the stochastic perturbation technique, and how to implement the semi-analytical methods efficiently using the automatic differentiation and integration provided by the computer algebra program itself. The second important illustration is a probabilistic extension of the finite element and finite difference methods coded in MAPLE, showing how to solve boundary value problems with random parameters in a symbolic computing environment. The response function method belongs to the third group, where the interplay of classical deterministic software with the non-linear fitting techniques available in various symbolic environments is displayed. We recover in this context the probabilistic structural response of engineering systems and show how to solve partial differential equations with Gaussian randomness in their coefficients.
Asteroid Risk Assessment: A Probabilistic Approach.
Reinhardt, Jason C; Chen, Xi; Liu, Wenhao; Manchev, Petar; Paté-Cornell, M Elisabeth
2016-02-01
Following the 2013 Chelyabinsk event, the risks posed by asteroids attracted renewed interest, from both the scientific and policy-making communities. It reminded the world that impacts from near-Earth objects (NEOs), while rare, have the potential to cause great damage to cities and populations. Point estimates of the risk (such as mean numbers of casualties) have been proposed, but because of the low-probability, high-consequence nature of asteroid impacts, these averages provide limited actionable information. While more work is needed to further refine its input distributions (e.g., NEO diameters), the probabilistic model presented in this article allows a more complete evaluation of the risk of NEO impacts because the results are distributions that cover the range of potential casualties. This model is based on a modularized simulation that uses probabilistic inputs to estimate probabilistic risk metrics, including those of rare asteroid impacts. Illustrative results of this analysis are presented for a period of 100 years. As part of this demonstration, we assess the effectiveness of civil defense measures in mitigating the risk of human casualties. We find that they are likely to be beneficial but not a panacea. We also compute the probability, but not the consequences, of an impact with global effects ("cataclysm"). We conclude that there is a continued need for NEO observation, and for analyses of the feasibility and risk-reduction effectiveness of space missions designed to deflect or destroy asteroids that threaten the Earth. © 2015 Society for Risk Analysis.
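The point the article makes about distributions versus means can be illustrated with a toy Monte Carlo over a 100-year horizon. All numbers here (impact rate, consequence distribution) are assumptions for illustration, not the authors' inputs.

```python
import random

def simulate_century(rng, annual_impact_prob=0.01):
    """Total casualties (in arbitrary units) over 100 simulated years."""
    casualties = 0
    for _ in range(100):
        if rng.random() < annual_impact_prob:                  # an impact occurs
            severity = rng.lognormvariate(mu=6.0, sigma=2.0)   # heavy-tailed consequence
            casualties += int(severity)
    return casualties

rng = random.Random(123)
runs = sorted(simulate_century(rng) for _ in range(5000))
mean = sum(runs) / len(runs)
median, p99 = runs[len(runs) // 2], runs[int(0.99 * len(runs))]
print(mean, median, p99)
```

Because the consequence distribution is heavy-tailed, the mean sits far from the typical (median) outcome, which is exactly why a point estimate of average casualties conveys so little about the actual risk profile.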
Probabilistic Graph Layout for Uncertain Network Visualization.
Schulz, Christoph; Nocaj, Arlind; Goertler, Jochen; Deussen, Oliver; Brandes, Ulrik; Weiskopf, Daniel
2017-01-01
We present a novel uncertain network visualization technique based on node-link diagrams. Nodes expand spatially in our probabilistic graph layout, depending on the underlying probability distributions of edges. The visualization is created by computing a two-dimensional graph embedding that combines samples from the probabilistic graph. A Monte Carlo process is used to decompose a probabilistic graph into its possible instances and to continue with our graph layout technique. Splatting and edge bundling are used to visualize point clouds and network topology. The results provide insights into probability distributions for the entire network-not only for individual nodes and edges. We validate our approach using three data sets that represent a wide range of network types: synthetic data, protein-protein interactions from the STRING database, and travel times extracted from Google Maps. Our approach reveals general limitations of the force-directed layout and allows the user to recognize that some nodes of the graph are at a specific position just by chance.
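The Monte Carlo decomposition step described above can be sketched with a toy graph and assumed edge probabilities: each uncertain edge is sampled independently to produce one concrete graph instance, and the ensemble of instances feeds the layout.

```python
import random

def sample_instance(edge_probs, rng):
    """Draw one concrete graph instance: keep each edge with its probability."""
    return [e for e, p in edge_probs.items() if rng.random() < p]

edge_probs = {("a", "b"): 0.9, ("b", "c"): 0.5, ("a", "c"): 0.1}
rng = random.Random(7)
instances = [sample_instance(edge_probs, rng) for _ in range(1000)]

# Empirical edge frequencies across instances recover the input probabilities:
freq_ab = sum(("a", "b") in g for g in instances) / len(instances)
print(freq_ab)
```

In the paper's pipeline each sampled instance is then embedded, and the scatter of a node's positions across embeddings determines its spatial extent in the final layout.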
Designing and rehabilitating concrete structures - probabilistic approach
Energy Technology Data Exchange (ETDEWEB)
Edvardsen, C.; Mohr, L. [COWI Consulting Engineers and Planners AS, Lyngby (Denmark)
2000-07-01
Four examples dealing with corrosion of steel reinforcement in concrete due to chloride ingress are described, using a probabilistic approach which was developed in the recently published DuraCrete Report. The first example illustrates the difference in the required concrete cover dictated by environmental considerations. The second example concerns the update of the service life of the Great Belt Link in Denmark on the basis of measurements made five years after construction. The third example provides some design details of a tunnel in the Netherlands, while the fourth one concerns design of a column taking into account the initiation of corrosion both by means of a partial safety factor and by a probabilistic analysis. Differences in using the probabilistic approach in designing a new structure where the service life and reliability are pre-determined, and rehabilitating an existing structure where an analysis may give the answer to an estimate of the remaining service life and reliability level, are demonstrated. 9 refs., 8 tabs., 6 figs.
An, Yatong; Liu, Ziping; Zhang, Song
2016-12-01
This paper evaluates the robustness of our recently proposed geometric constraint-based phase-unwrapping method to unwrap a low-signal-to-noise ratio (SNR) phase. Instead of capturing additional images for absolute phase unwrapping, the new phase-unwrapping algorithm uses geometric constraints of the digital fringe projection (DFP) system to create a virtual reference phase map to unwrap the phase pixel by pixel. Both simulation and experimental results demonstrate that this new phase-unwrapping method can even successfully unwrap low-SNR phase maps that bring difficulties for conventional multi-frequency phase-unwrapping methods.
Neural substrates of cognitive biases during probabilistic inference.
Soltani, Alireza; Khorsand, Peyman; Guo, Clara; Farashahi, Shiva; Liu, Janet
2016-04-26
Decision making often requires simultaneously learning about and combining evidence from various sources of information. However, when making inferences from these sources, humans show systematic biases that are often attributed to heuristics or limitations in cognitive processes. Here we use a combination of experimental and modelling approaches to reveal neural substrates of probabilistic inference and corresponding biases. We find systematic deviations from normative accounts of inference when alternative options are not equally rewarding; subjects' choice behaviour is biased towards the more rewarding option, whereas their inferences about individual cues show the opposite bias. Moreover, inference bias about combinations of cues depends on the number of cues. Using a biophysically plausible model, we link these biases to synaptic plasticity mechanisms modulated by reward expectation and attention. We demonstrate that inference relies on direct estimation of posteriors, not on combination of likelihoods and prior. Our work reveals novel mechanisms underlying cognitive biases and contributions of interactions between reward-dependent learning, decision making and attention to high-level reasoning.
DEFF Research Database (Denmark)
Golestaneh, Faranak; Pinson, Pierre; Gooi, Hoay Beng
2016-01-01
Due to the inherent uncertainty involved in renewable energy forecasting, uncertainty quantification is a key input to maintain acceptable levels of reliability and profitability in power system operation. A proposal is formulated and evaluated here for the case of solar power generation, when only...... approach to generate very short-term predictive densities, i.e., for lead times between a few minutes to one hour ahead, with fast frequency updates. We rely on an Extreme Learning Machine (ELM) as a fast regression model, trained in varied ways to obtain both point and quantile forecasts of solar power...... generation. Four probabilistic methods are implemented as benchmarks. Rival approaches are evaluated based on a number of test cases for two solar power generation sites in different climatic regions, allowing us to show that our approach results in generation of skilful and reliable probabilistic forecasts...
The Decidability Frontier for Probabilistic Automata on Infinite Words
Chatterjee, Krishnendu; Tracol, Mathieu
2011-01-01
We consider probabilistic automata on infinite words with acceptance defined by safety, reachability, Büchi, coBüchi, and limit-average conditions. We consider quantitative and qualitative decision problems. We present extensions and adaptations of proofs for probabilistic finite automata and present a complete characterization of the decidability and undecidability frontier of the quantitative and qualitative decision problems for probabilistic automata on infinite words.
Directory of Open Access Journals (Sweden)
Maryam Iman
2017-08-01
Microbial remediation of nitroaromatic compounds (NACs) is a promising, environmentally friendly, and cost-effective approach to the removal of these life-threatening agents. Escherichia coli (E. coli) has shown remarkable capability for the biotransformation of 2,4,6-trinitrotoluene (TNT). Efforts to develop E. coli as an efficient TNT-degrading biocatalyst will benefit from a holistic, flux-level description of the interactions between the multiple TNT-transforming pathways operating in the strain. To gain such insight, we extended the genome-scale constraint-based model of E. coli to account for a curated version of the major TNT transformation pathways known, or plausibly hypothesized, to be active in E. coli in the presence of TNT. Using constraint-based analysis (CBA) methods, we then performed several series of in silico experiments to elucidate the contribution of these pathways, individually or in combination, to the TNT transformation capacity of E. coli. Results of our analyses were validated by replicating several experimentally observed TNT degradation phenotypes in E. coli cultures. We further used the extended model to explore the influence of process parameters, including aeration regime, TNT concentration, cell density, and carbon source, on TNT degradation efficiency. We also conducted an in silico metabolic engineering study to design a series of E. coli mutants capable of degrading TNT at higher yield than the wild-type strain. Our study therefore extends the application of CBA to bioremediation of nitroaromatics and demonstrates the usefulness of this approach in informing bioremediation research.
Probabilistic logic networks a comprehensive framework for uncertain inference
Goertzel, Ben; Goertzel, Izabela Freire; Heljakka, Ari
2008-01-01
This comprehensive book describes Probabilistic Logic Networks (PLN), a novel conceptual, mathematical and computational approach to uncertain inference. A broad range of reasoning types is considered.
Probabilistic structural analysis algorithm development for computational efficiency
Wu, Y.-T.
1991-01-01
The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient but approximate in nature. Over the last six years, these algorithms have been improved significantly.
Probabilistic forecasts based on radar rainfall uncertainty
Liguori, S.; Rico-Ramirez, M. A.
2012-04-01
The potential advantages resulting from integrating weather radar rainfall estimates into hydro-meteorological forecasting systems are limited by the inherent uncertainty affecting radar rainfall measurements, which is due to various sources of error [1-3]. The improvement of quality control and correction techniques is recognized to play a role in the future improvement of radar-based flow predictions. However, knowledge of the uncertainty affecting radar rainfall data can also be used effectively to build a hydro-meteorological forecasting system in a probabilistic framework. This work discusses the results of the implementation of a novel probabilistic forecasting system developed to improve ensemble predictions over a small urban area located in the north of England. An ensemble of radar rainfall fields can be determined as the sum of a deterministic component and a perturbation field, the latter being informed by knowledge of the spatio-temporal characteristics of the radar error assessed with reference to rain-gauge measurements. This approach is similar to the REAL system [4] developed for use in the Southern Alps. The radar uncertainty estimate can then be propagated with a nowcasting model, used to extrapolate an ensemble of radar rainfall forecasts, which can ultimately drive hydrological ensemble predictions. A radar ensemble generator has been calibrated using radar rainfall data made available by the UK Met Office after applying post-processing and correction algorithms [5-6]. One-hour rainfall accumulations from 235 rain gauges recorded for the year 2007 have provided the reference to determine the radar error. Statistics describing the spatial characteristics of the error (i.e. mean and covariance) have been computed off-line at gauge locations, along with the parameters describing the error's temporal correlation. A system has then been set up to impose the space-time error properties on stochastic perturbations, generated in real-time at
A probabilistic tsunami hazard assessment for Indonesia
Horspool, N.; Pranantyo, I.; Griffin, J.; Latief, H.; Natawidjaja, D. H.; Kongko, W.; Cipta, A.; Bustaman, B.; Anugrah, S. D.; Thio, H. K.
2014-11-01
Probabilistic hazard assessments are a fundamental tool for assessing the threats posed by hazards to communities and are important for underpinning evidence-based decision-making regarding risk mitigation activities. Indonesia has been the focus of intense tsunami risk mitigation efforts following the 2004 Indian Ocean tsunami, but these efforts have been largely concentrated on the Sunda Arc, with little attention to other tsunami-prone areas of the country such as eastern Indonesia. We present the first nationally consistent probabilistic tsunami hazard assessment (PTHA) for Indonesia. This assessment produces time-independent forecasts of tsunami hazards at the coast using data from tsunamis generated by local, regional and distant earthquake sources. The methodology is based on the established Monte Carlo approach to probabilistic seismic hazard assessment (PSHA) and has been adapted to tsunami. We account for sources of epistemic and aleatory uncertainty in the analysis through the use of logic trees and sampling of probability density functions. For short return periods (100 years) the highest tsunami hazard is along the west coast of Sumatra, the south coast of Java and the north coast of Papua. For longer return periods (500-2500 years), the tsunami hazard is highest along the Sunda Arc, reflecting the larger maximum magnitudes there. The annual probability of experiencing a tsunami with a height of > 0.5 m at the coast is greater than 10% for Sumatra, Java, the Sunda islands (Bali, Lombok, Flores, Sumba) and north Papua. The annual probability of experiencing a tsunami with a height of > 3.0 m, which would cause significant inundation and fatalities, is 1-10% in Sumatra, Java, Bali, Lombok and north Papua, and 0.1-1% for north Sulawesi, Seram and Flores. The results of this national-scale hazard assessment provide evidence for disaster managers to prioritise regions for risk mitigation activities and/or more detailed hazard or risk assessment.
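The Monte Carlo PTHA recipe (sample a synthetic earthquake catalogue, map each event to a coastal tsunami height, count threshold exceedances) can be sketched as below. The truncated Gutenberg-Richter sampler is standard; the magnitude-to-height scaling and all parameter values are toy stand-ins for the paper's tsunami source modelling, not its actual model.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm (fine for small event rates)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_magnitude(m_min, m_max, b, rng):
    # Inverse-CDF sample from a truncated Gutenberg-Richter distribution
    beta = b * math.log(10.0)
    u = rng.random()
    return m_min - math.log(1.0 - u * (1.0 - math.exp(-beta * (m_max - m_min)))) / beta

def annual_exceedance_prob(rate, b, m_min, m_max, height_of, threshold,
                           n_years=20000, seed=0):
    """P(coastal tsunami height > threshold in any one year),
    estimated by simulating a synthetic catalogue year by year."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_years):
        for _ in range(sample_poisson(rate, rng)):
            if height_of(sample_magnitude(m_min, m_max, b, rng)) > threshold:
                hits += 1
                break                      # at least one exceedance this year
    return hits / n_years

# Purely illustrative magnitude-to-height scaling
toy_height = lambda m: 10.0 ** (m - 8.0)
p_low = annual_exceedance_prob(0.3, 1.0, 7.0, 9.0, toy_height, 0.5)
p_high = annual_exceedance_prob(0.3, 1.0, 7.0, 9.0, toy_height, 3.0)
```

Epistemic uncertainty (logic-tree branches over source parameters) would add an outer loop over alternative `(rate, b, m_max)` combinations with branch weights.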
A comparison of confluence and ample sets in probabilistic and non-probabilistic branching time
Hansen, Henri; Timmer, Mark; Massink, M.; Norman, G.; Wiklicky, H.
2014-01-01
Confluence reduction and partial order reduction by means of ample sets are two different techniques for state space reduction in both traditional and probabilistic model checking. This paper provides an extensive comparison between these two methods, and answers the question how they relate in term
Hansen, Henri; Timmer, Mark
2012-01-01
Confluence reduction and partial order reduction by means of ample sets are two different techniques for state space reduction in both traditional and probabilistic model checking. This presentation provides an extensive comparison between these two methods, answering the long-standing question of h
Maximum confidence measurements via probabilistic quantum cloning
Institute of Scientific and Technical Information of China (English)
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Probabilistic analysis of a thermosetting pultrusion process
DEFF Research Database (Denmark)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper Henri
2016-01-01
In the present study, the effects of uncertainties in the material properties of the processing composite material and the resin kinetic parameters, as well as process parameters such as pulling speed and inlet temperature, on product quality (exit degree of cure) are investigated for a pultrusion process. A new application for the probabilistic analysis of the pultrusion process is introduced using the response surface method (RSM). The results obtained from the RSM are validated by employing Monte Carlo simulation (MCS) with the Latin hypercube sampling technique. According to the results......
Quantum correlations support probabilistic pure state cloning
Energy Technology Data Exchange (ETDEWEB)
Roa, Luis, E-mail: lroa@udec.cl [Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción (Chile); Alid-Vaccarezza, M.; Jara-Figueroa, C. [Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción (Chile); Klimov, A.B. [Departamento de Física, Universidad de Guadalajara, Avenida Revolución 1500, 44420 Guadalajara, Jalisco (Mexico)
2014-02-01
The probabilistic scheme for making two copies of two nonorthogonal pure states requires two auxiliary systems, one for copying and one for attempting to project onto the suitable subspace. The process is performed by means of a unitary-reduction scheme which allows a nonzero probability of successful cloning. The scheme becomes optimal when the probability of success is maximized. In this case, a bipartite state remains as a free degree of freedom which does not affect the probability. We find bipartite states for which the unitarity does not introduce entanglement, but does introduce quantum discord between some of the involved subsystems.
Probabilistic Analysis of the Quality Calculus
DEFF Research Database (Denmark)
Nielson, Hanne Riis; Nielson, Flemming
2013-01-01
We consider a fragment of the Quality Calculus, previously introduced for defensive programming of software components such that it becomes natural to plan for default behaviour in case the ideal behaviour fails due to unreliable communication. This paper develops a probabilistically based trust analysis supporting the Quality Calculus. It uses information about the probabilities that expected input will be absent in order to determine the trustworthiness of the data used for controlling the distributed system; the main challenge is to take account of the stochastic dependency between some......
Signature recognition using neural network probabilistic
Directory of Open Access Journals (Sweden)
Heri Nurdiyanto
2016-03-01
Each person's signature is different and has unique characteristics. This paper therefore discusses the development of a personal identification system based on a person's unique digital signature. Preprocessing used a grey-scale method, while Shannon entropy and a Probabilistic Neural Network were used for feature extraction and identification, respectively. This study uses five signature types with five signatures of each type. When the test results were compared to the actual data, the proposed system's performance was only 40%.
Probabilistic results for a mobile service scenario
DEFF Research Database (Denmark)
Møller, Jesper; Yiu, Man Lung
We consider the following stochastic model for a mobile service scenario. Consider a stationary Poisson process in Rd, with its points radially ordered with respect to the origin (the anchor); if d = 2, the points may correspond to locations of e.g. restaurants. A user, with a location different...... the inferred privacy region is a random set obtained by an adversary who only knows the anchor and the points received from the server, where the adversary ‘does the best' to infer the possible locations of the user. Probabilistic results related to the communication cost and the inferred privacy region...
Probabilistic Recovery Guarantees for Sparsely Corrupted Signals
Pope, Graeme; Studer, Christoph
2012-01-01
We consider the recovery of sparse signals subject to sparse interference, as introduced in Studer et al., IEEE Trans. IT, 2012. We present novel probabilistic recovery guarantees for this framework, covering varying degrees of knowledge of the signal and interference support, which are relevant for a large number of practical applications. Our results assume that the sparsifying dictionaries are solely characterized by coherence parameters and we require randomness only in the signal and/or interference. The obtained recovery guarantees show that one can recover sparsely corrupted signals with overwhelming probability, even if the sparsity of both the signal and interference scale (near) linearly with the number of measurements.
Probabilistic double guarantee kidnapping detection in SLAM.
Tian, Yang; Ma, Shugen
2016-01-01
For determining whether kidnapping has happened, and which type of kidnapping it is, while a robot performs autonomous tasks in an unknown environment, a double guarantee kidnapping detection (DGKD) method has previously been proposed. DGKD performs well in relatively small environments; however, our recent work found a limitation of DGKD in large-scale environments. In order to increase the adaptability of DGKD to large-scale environments, an improved method called probabilistic double guarantee kidnapping detection is proposed in this paper, which combines the probability of features' positions with the robot's posture. Simulation results demonstrate the validity and accuracy of the proposed method.
Probabilistic assessment of pressurised thermal shocks
Energy Technology Data Exchange (ETDEWEB)
Pištora, Vladislav, E-mail: pis@ujv.cz; Pošta, Miroslav; Lauerová, Dana
2014-04-01
The reactor pressure vessel (RPV) is a key component of all PWR and VVER nuclear power plants (NPPs), and assuring its integrity is therefore of high importance. Due to high neutron fluence, the RPV material becomes embrittled during NPP operation. The embrittled RPV may undergo severe loading during potential events of the pressurised thermal shock (PTS) type that may occur in the NPP. The resistance of the RPV against fast fracture has to be proven by comprehensive analyses. In most countries (with the exception of the USA), proving RPV integrity is based on deterministic PTS assessment. In the USA, the “screening criteria” for the maximum allowable embrittlement of RPV material, which form part of the USA regulations, are based on probabilistic PTS assessment. In other countries, probabilistic PTS assessment is performed only at research level or as a supplement to the deterministic PTS assessment for individual RPVs. In this paper, a complete probabilistic PTS assessment for a VVER 1000 RPV is described; both the methodology and the results are presented. The methodology corresponds to the Unified Procedure for Lifetime Assessment of Components and Piping in WWER NPPs, “VERLIFE”, Version 2008. The main parameters entering the analysis, which are treated as statistical distributions, are: the initial value of the material reference temperature T0; the reference temperature shift ΔT0 due to neutron fluence; the neutron fluence; the size, shape, position and density of cracks in the RPV wall; and the fracture toughness of the RPV material (the Master Curve concept is used). The first step of the analysis consists in selecting sequences potentially leading to PTS, grouping them, establishing their frequencies, and selecting representative scenarios within all groups; a modified PSA model is used for this purpose. The second step consists in thermal-hydraulic analyses of the representative scenarios, with the goal of preparing input data for the
Probabilistic Design of Offshore Structural Systems
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
1988-01-01
Probabilistic design of structural systems is considered in this paper. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements...... satisfies given requirements or such that the systems reliability satisfies a given requirement. Based on a sensitivity analysis optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi...
Quantitative analysis of probabilistic BPMN workflows
DEFF Research Database (Denmark)
Herbert, Luke Thomas; Sharp, Robin
2012-01-01
We present a framework for modelling and analysis of real-world business workflows. We present a formalised core subset of the Business Process Modelling and Notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations...... of events, reward-based properties and best- and worst-case scenarios. We develop a simple example of a medical workflow and demonstrate the utility of this analysis in the accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover the entire BPMN language, allow...
Probabilistic modeling of solar power systems
Safie, Fayssal M.
1989-01-01
The author presents a probabilistic approach based on Markov chain theory to model stand-alone photovoltaic power systems and predict their long-term service performance. The major advantage of this approach is that it allows designers and developers of these systems to analyze the system performance as well as the battery subsystem performance in the long run and determine the system design requirements that meet a specified service performance level. The methodology presented is illustrated by using data for a radio repeater system for the Boston, Massachusetts, location.
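The Markov chain approach above can be illustrated with a toy chain: long-run service performance corresponds to the stationary distribution of the transition matrix. The three-state battery chain and its transition probabilities below are invented for illustration, not taken from the paper.

```python
def steady_state(P, iters=1000):
    """Long-run state distribution of a finite Markov chain, obtained by
    repeatedly applying the transition matrix (power iteration)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy battery state-of-charge chain: states = (low, medium, full)
P = [[0.6, 0.4, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.3, 0.7]]
pi = steady_state(P)   # e.g. pi[0] = long-run fraction of time at low charge
```

A designer would read off, say, the long-run probability of the low-charge state and size the battery so that this stays below a required service-performance level.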
Ensemble postprocessing for probabilistic quantitative precipitation forecasts
Bentzien, S.; Friederichs, P.
2012-12-01
Precipitation is one of the most difficult weather variables to predict in hydrometeorological applications. In order to assess the uncertainty inherent in deterministic numerical weather prediction (NWP), meteorological services around the globe develop ensemble prediction systems (EPS) based on high-resolution NWP systems. With non-hydrostatic model dynamics and without parameterization of deep moist convection, high-resolution NWP models are able to describe convective processes in more detail and provide more realistic mesoscale structures. However, precipitation forecasts are still affected by displacement errors, systematic biases and fast error growth on small scales. Probabilistic guidance can be achieved from an ensemble setup which accounts for model error and uncertainty of initial and boundary conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) provides such an ensemble system based on the German-focused limited-area model COSMO-DE. With a horizontal grid-spacing of 2.8 km, COSMO-DE is the convection-permitting high-resolution part of the operational model chain at DWD. The COSMO-DE-EPS consists of 20 realizations of COSMO-DE, driven by initial and boundary conditions derived from 4 global models and 5 perturbations of model physics. Ensemble systems like COSMO-DE-EPS are often limited with respect to ensemble size due to the immense computational costs. As a consequence, they can be biased and exhibit insufficient ensemble spread, and probabilistic forecasts may not be well calibrated. In this study, probabilistic quantitative precipitation forecasts are derived from COSMO-DE-EPS and evaluated at more than 1000 rain gauges located all over Germany. COSMO-DE-EPS is a frequently updated ensemble system, initialized 8 times a day. We use the time-lagged approach to inexpensively increase ensemble spread, which results in more reliable forecasts, especially for extreme precipitation events. Moreover, we will show that statistical
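At its simplest, a probabilistic quantitative precipitation forecast from an ensemble is the fraction of members exceeding a threshold, and its calibration can be verified against gauges with the Brier score. A minimal sketch (the function names are ours, and this omits the study's statistical postprocessing):

```python
def exceedance_prob(ensemble, threshold):
    """Probabilistic forecast: fraction of ensemble members above a threshold."""
    return sum(1 for x in ensemble if x > threshold) / len(ensemble)

def brier_score(forecast_probs, observations):
    """Mean squared error of probability forecasts against 0/1 event outcomes
    (lower is better; 0 is a perfect deterministic forecast)."""
    return sum((p - o) ** 2
               for p, o in zip(forecast_probs, observations)) / len(forecast_probs)
```

The time-lagged approach mentioned above simply enlarges `ensemble` with members from earlier initialization times before computing the exceedance fraction.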
Probabilistic remote state preparation by W states
Institute of Scientific and Technical Information of China (English)
Liu Jin-Ming; Wang Yu-Zhu
2004-01-01
In this paper we consider a scheme for probabilistic remote state preparation of a general qubit by using W states. The scheme consists of the sender, Alice, and two remote receivers Bob and Carol. Alice performs a projective measurement on her qubit in the basis spanned by the state she wants to prepare and its orthocomplement. This allows either Bob or Carol to reconstruct the state with finite success probability. It is shown that for some special ensembles of qubits, the remote state preparation scheme requires only two classical bits, unlike the case in the scheme of quantum teleportation where three classical bits are needed.
Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome
Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah
2017-06-01
Measuring students' ability and performance is important in assessing how well students have learned and mastered a statistics course. Any improvement in learning will depend on the students' approaches to learning, which are shaped by the assessment methods used: quizzes, tests, assignments and a final examination. This study attempts an alternative approach to measuring students' ability in an undergraduate statistics course, based on the Rasch probabilistic model. Firstly, this study aims to explore the learning-outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually they had a good understanding by the end of the 14-week class. In terms of students' performance in their final examination, their ability to understand the topics varies, with different probability values given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic the most difficult to learn.
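The Rasch model underlying this analysis gives the probability of a correct response as a logistic function of ability minus item difficulty. A small sketch with a crude grid-search ability estimate; the estimation method here is illustrative only, not the study's software:

```python
import math

def rasch_p(theta, b):
    """P(correct) under the dichotomous Rasch model,
    for person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, difficulties, responses):
    """Log-likelihood of a 0/1 response pattern given ability theta."""
    ll = 0.0
    for b, x in zip(difficulties, responses):
        p = rasch_p(theta, b)
        ll += math.log(p) if x else math.log(1.0 - p)
    return ll

def theta_mle(difficulties, responses):
    """Crude maximum-likelihood ability estimate over a grid of theta values
    (degenerate for all-correct or all-wrong patterns, as in any Rasch MLE)."""
    grid = [g / 10.0 for g in range(-40, 41)]
    return max(grid, key=lambda t: log_likelihood(t, difficulties, responses))
```

For example, answering 3 of 4 equally difficult items correctly yields an ability estimate near logit(0.75) ≈ 1.1 on the item's scale.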
FPGA implementation of a pyramidal Weightless Neural Networks learning system.
Al-Alawi, Raida
2003-08-01
A hardware architecture of a Probabilistic Logic Neuron (PLN) is presented. The suggested model facilitates the on-chip learning of pyramidal Weightless Neural Networks using a modified probabilistic search reward/penalty training algorithm. The penalization strategy of the training algorithm depends on a predefined parameter called the probabilistic search interval. A complete Weightless Neural Network (WNN) learning system is modeled and implemented on Xilinx XC4005E Field Programmable Gate Array (FPGA), allowing its architecture to be configurable. Various experiments have been conducted to examine the feasibility and performance of the WNN learning system. Results show that the system has a fast convergence rate and good generalization ability.
Fayssal, Safie; Weldon, Danny
2008-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the International Space Station, to the Moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I launch vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had a major impact on the design and manufacturing of Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvements for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.
Blind RRT: A probabilistically complete distributed RRT
Rodriguez, Cesar
2013-11-01
Rapidly-Exploring Random Trees (RRTs) have been successful at finding feasible solutions for many types of problems. With motion planning becoming more computationally demanding, we turn to parallel motion planning for efficient solutions. Existing work on distributed RRTs has been limited by the overhead that global communication requires. A recent approach, Radial RRT, demonstrated a scalable algorithm that subdivides the space into regions to increase the computation locality. However, if an obstacle completely blocks RRT growth in a region, the planning space is not covered and is thus not probabilistically complete. We present a new algorithm, Blind RRT, which ignores obstacles during initial growth to efficiently explore the entire space. Because obstacles are ignored, free components of the tree become disconnected and fragmented. Blind RRT merges parts of the tree that have become disconnected from the root. We show how this algorithm can be applied to the Radial RRT framework allowing both scalability and effectiveness in motion planning. This method is a probabilistically complete approach to parallel RRTs. We show that our method not only scales but also overcomes the motion planning limitations that Radial RRT has in a series of difficult motion planning tasks. © 2013 IEEE.
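A minimal sketch of the "blind" growth phase: a plain RRT that ignores obstacles entirely, as the paper's first phase does. The subsequent phase (deleting invalid nodes and merging the disconnected fragments back to the root) is only noted in a comment; all names and parameters are hypothetical.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def blind_rrt(start, n_iter=300, step=0.05, seed=42):
    """Grow an RRT on the unit square.  'Blind' here means no collision
    checks during growth; a second phase (not sketched) would delete
    nodes/edges inside obstacles and reconnect the resulting fragments."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(n_iter):
        q = (rng.random(), rng.random())                  # random sample
        i = min(range(len(nodes)), key=lambda k: dist(nodes[k], q))
        near, d = nodes[i], dist(nodes[i], q)
        if d == 0.0:
            continue
        if d <= step:                                     # steer at most `step`
            new = q
        else:
            new = (near[0] + step * (q[0] - near[0]) / d,
                   near[1] + step * (q[1] - near[1]) / d)
        parent[len(nodes)] = i
        nodes.append(new)
    return nodes, parent
```

Because obstacles never reject a sample, the tree covers the whole square; the repair phase is what restores feasibility, and the radial subdivision of the paper distributes this loop across regions.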
Probabilistic Route Selection Algorithm for IP Traceback
Yim, Hong-Bin; Jung, Jae-Il
DoS (Denial of Service) and DDoS (Distributed DoS) attacks are a major threat and among the most difficult attack problems to solve. Moreover, it is very difficult to find the real origin of an attacker, because DoS/DDoS attackers use spoofed IP addresses. To solve this problem, we propose a probabilistic route selection traceback algorithm, namely PRST, to trace an attack to its real origin. This algorithm uses two types of packets: an agent packet and a reply agent packet. The agent packet is used to find the attacker's real origin, and the reply agent packet is used to notify the victim that the agent packet has reached the edge router of the attacker. After an attack occurs, the victim generates an agent packet and sends it to the victim's edge router. The attacker's edge router, on receiving the agent packet, generates the reply agent packet and sends it to the victim. The agent packet and the reply agent packet are forwarded by routers according to a probabilistic packet forwarding table (PPFT). The PRST algorithm runs on the distributed routers, and the PPFT is stored and managed by the routers. We validate the PRST algorithm using a mathematical approach based on the Poisson distribution.
Operational General Relativity: Possibilistic, Probabilistic, and Quantum
Hardy, Lucien
2016-01-01
In this paper we develop an operational formulation of General Relativity similar in spirit to existing operational formulations of Quantum Theory. To do this we introduce an operational space (or op-space) built out of scalar fields. A point in op-space corresponds to some nominated set of scalar fields taking some given values in coincidence. We assert that op-space is the space in which we observe the world. We introduce also a notion of agency (this corresponds to the ability to set knob settings just like in Operational Quantum Theory). The effects of agents' actions should only be felt to the future so we introduce also a time direction field. Agency and time direction can be understood as effective notions. We show how to formulate General Relativity as a possibilistic theory and as a probabilistic theory. In the possibilistic case we provide a compositional framework for calculating whether some operationally described situation is possible or not. In the probabilistic version we introduce probabiliti...
Probabilistic Transcriptome Assembly and Variant Graph Genotyping
DEFF Research Database (Denmark)
Sibbesen, Jonas Andreas
The introduction of second-generation sequencing has in recent years allowed the biological community to determine the genomes and transcriptomes of organisms and individuals at an unprecedented rate. However, almost every step in the sequencing protocol introduces uncertainties in how the resulting sequencing data should be interpreted. This has over the years spurred the development of many probabilistic methods that are capable of modelling different aspects of the sequencing process. Here, I present two such methods, each developed to tackle a different problem in bioinformatics...... that this approach outperforms existing state-of-the-art methods, measured using sensitivity and precision on both simulated and real data. The second is a novel probabilistic method that uses exact alignment of k-mers to a set of variant graphs to provide unbiased estimates of genotypes in a population...
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a systematic and nonlinear problem, MPP-based reliability analyses are not suitable. A sampling approach such as a Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experiences and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil property and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation and maintain satisfactory variation.
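The backbone of the approach, estimating a failure probability by sampling the random variables and checking a limit state, can be sketched with a direct MCS on a toy limit state. The response-surface (Bayesian LS-SVM) and HEC-RAS stages are not reproduced here, and every number below is invented for illustration.

```python
import random
import statistics

def mc_failure_probability(limit_state, samplers, n=20000, seed=0):
    """Direct Monte Carlo estimate of P(g(X) <= 0) for a limit state g,
    where samplers are callables drawing each random variable."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if limit_state(*[s(rng) for s in samplers]) <= 0.0)
    return fails / n

# Toy limit state: resistance R ~ N(10, 1) vs load effect S ~ N(6, 1), g = R - S
pf = mc_failure_probability(lambda r, s: r - s,
                            [lambda rng: rng.gauss(10.0, 1.0),
                             lambda rng: rng.gauss(6.0, 1.0)])
beta = -statistics.NormalDist().inv_cdf(pf)   # safety index (assumes pf > 0)
```

In the paper's setting, `limit_state` would be the trained response surface and the `samplers` would include water surface elevation, velocity, scour depth, soil property and wind load; the system reliability combines five such limit states.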
Probabilistic Load Flow Considering Wind Generation Uncertainty
Directory of Open Access Journals (Sweden)
R. Ramezani
2011-10-01
Renewable energy sources, such as wind, solar and hydro, are increasingly incorporated into power grids as a direct consequence of energy and environmental concerns. These types of energy are variable and intermittent by nature, and their exploitation introduces uncertainties into the power grid. Therefore, probabilistic analysis of system performance is of significant interest. This paper describes a new approach to Probabilistic Load Flow (PLF) that modifies the Two Point Estimation Method (2PEM) to overcome some drawbacks of other currently used methods. The proposed method is examined using two case studies, the IEEE 9-bus and IEEE 57-bus test systems. In order to demonstrate the effectiveness of the method, a numerical comparison with the Monte Carlo Simulation (MCS) method is presented. Simulation results indicate that the proposed method significantly reduces the computational burden while maintaining a high level of accuracy. Moreover, the unsymmetrical 2PEM has a higher level of accuracy than the symmetrical 2PEM for equal computing burden when the Probability Density Function (PDF) of the uncertain variables is asymmetric.
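Two-point estimation evaluates the system function at just 2n deterministic points (mean ± a scaled standard deviation per input) instead of thousands of Monte Carlo samples. The sketch below is the symmetric (zero-skewness) variant of Hong's scheme; the paper's unsymmetrical 2PEM additionally shifts the evaluation points using the input skewness, and in a real PLF `h` would be a load-flow solve rather than a closed-form function.

```python
import math

def two_point_estimate(h, means, stds):
    """Symmetric 2n-point estimate of E[Y] and Var[Y] for Y = h(X) with
    independent inputs: evaluate h at mu_k +/- sigma_k*sqrt(n), weight 1/(2n).
    Exact for linear h; an approximation otherwise."""
    n = len(means)
    ey = ey2 = 0.0
    for k in range(n):
        for sign in (+1.0, -1.0):
            x = list(means)                      # all inputs at their means...
            x[k] = means[k] + sign * stds[k] * math.sqrt(n)  # ...except one
            y = h(x)
            w = 1.0 / (2 * n)
            ey += w * y
            ey2 += w * y * y
    return ey, max(ey2 - ey * ey, 0.0)
```

For a linear function the mean and variance are recovered exactly with only 2n evaluations, which is the source of the computational saving over MCS reported above.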
Probabilistic adaptation in changing microbial environments
Directory of Open Access Journals (Sweden)
Yarden Katz
2016-12-01
Microbes growing in animal host environments face fluctuations that have elements of both randomness and predictability. In the mammalian gut, fluctuations in nutrient levels and other physiological parameters are structured by the host's behavior, diet, health and microbiota composition. Microbial cells that can anticipate environmental fluctuations by exploiting this structure would likely gain a fitness advantage by adapting their internal state in advance. We propose that the problem of adaptive growth in structured changing environments, such as the gut, can be viewed as probabilistic inference. We analyze environments that are “meta-changing”: where there are changes in the way the environment fluctuates, governed by a mechanism unobservable to cells. We develop a dynamic Bayesian model of these environments and show that a real-time inference algorithm (particle filtering) for this model can be used as a microbial growth strategy implementable in molecular circuits. The growth strategy suggested by our model outperforms heuristic strategies, and points to a class of algorithms that could support real-time probabilistic inference in natural or synthetic cellular circuits.
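A bootstrap particle filter of the kind the paper proposes as a growth strategy can be sketched for a two-state environment. The states, transition and emission probabilities below are invented for illustration; the paper's model has richer meta-changing structure.

```python
import random

def particle_filter(obs, n_particles, trans, emit, seed=0):
    """Bootstrap particle filter for a 2-state hidden environment.
    trans[s] = P(stay in state s); emit[s] = P(observe 1 | state s).
    Returns P(state = 1 | observations so far) after each observation."""
    rng = random.Random(seed)
    particles = [rng.randrange(2) for _ in range(n_particles)]
    beliefs = []
    for y in obs:
        # propagate each particle through the transition model
        particles = [p if rng.random() < trans[p] else 1 - p for p in particles]
        # weight each particle by the likelihood of the observation
        weights = [emit[p] if y == 1 else 1.0 - emit[p] for p in particles]
        # multinomial resampling concentrates particles on likely states
        particles = rng.choices(particles, weights=weights, k=n_particles)
        beliefs.append(sum(particles) / n_particles)
    return beliefs
```

A cell implementing this would pre-adapt to the state the belief favours; the propagate/weight/resample loop is simple enough that the authors argue analogues of it could run in molecular circuits.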
Recent advances in probabilistic species pool delineations
Directory of Open Access Journals (Sweden)
Dirk Nikolaus Karger
2016-07-01
A species pool is the set of species that could potentially colonize and establish within a community. It has been a commonly used concept in biogeography since the early days of MacArthur and Wilson's work on Island Biogeography. Despite its simple and appealing definition, the operational application of species pools comes with a multitude of problems, which have often resulted in arbitrary decisions and workarounds when defining them. Two recently published papers address the operational problems of species pool delineation and show ways of delineating pools in a probabilistic fashion. In both papers, species pools were delineated using a process-based, mechanistic approach, which opens the door for a multitude of new applications in biogeography. Such applications include detecting the hidden signature of biotic interactions, disentangling the geographical structure of community assembly processes, and incorporating a temporal extent into species pools. Although similar in their conclusions, the two 'probabilistic approaches' differ in their implementation and definitions. Here I give a brief overview of the differences and similarities of both approaches, and identify the challenges and advantages in their application.
Modelling structured data with Probabilistic Graphical Models
Forbes, F.
2016-05-01
Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph, which need not be regular like a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and Hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given associated with some practical work.
Damage identification with probabilistic neural networks
Energy Technology Data Exchange (ETDEWEB)
Klenke, S.E.; Paez, T.L.
1995-12-01
This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment about whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
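The decision rule of a classical PNN can be sketched as a Parzen kernel-density comparison between two classes (a toy illustration with made-up one-dimensional exemplar values, not the paper's implementation):

```python
import math

def kde(x, exemplars, sigma=0.5):
    """Gaussian-kernel (Parzen) density estimate at x from class exemplars."""
    n = len(exemplars)
    return sum(math.exp(-(x - e) ** 2 / (2 * sigma ** 2)) for e in exemplars) / (
        n * sigma * math.sqrt(2 * math.pi))

# Invented response features for the two classes.
undamaged = [0.9, 1.0, 1.1, 1.05, 0.95]
damaged = [1.8, 2.0, 1.9, 2.1, 1.95]

def classify(x):
    """Bayes decision with equal priors: pick the class with higher density."""
    return "undamaged" if kde(x, undamaged) >= kde(x, damaged) else "damaged"
```

A real PNN would use multivariate response features and possibly unequal priors or misclassification costs, but the density-comparison core is the same.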
Probabilistic Principal Component Analysis for Metabolomic Data.
LENUS (Irish Health Repository)
Nyamundanda, Gift
2010-11-23
Abstract Background Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results Here, probabilistic principal component analysis (PPCA) which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight to the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
Probabilistic Seismic Hazard Analysis for Yemen
Directory of Open Access Journals (Sweden)
Rakesh Mohindra
2012-01-01
Full Text Available A stochastic-event probabilistic seismic hazard model, which can be used further for estimates of seismic loss and seismic risk analysis, has been developed for the territory of Yemen. An updated composite earthquake catalogue has been compiled using the databases from two basic sources and several research publications. The spatial distribution of earthquakes from the catalogue was used to define and characterize the regional earthquake source zones for Yemen. To capture all possible scenarios in the seismic hazard model, a stochastic event set has been created consisting of 15,986 events generated from 1,583 fault segments in the delineated seismic source zones. Distribution of horizontal peak ground acceleration (PGA) was calculated for all stochastic events considering epistemic uncertainty in ground-motion modeling using three suitable ground-motion prediction relationships, which were applied with equal weight. The probabilistic seismic hazard maps were created showing PGA and MSK seismic intensity at 10% and 50% probability of exceedance in 50 years, considering local soil site conditions. The resulting PGA for 10% probability of exceedance in 50 years (return period 475 years) ranges from 0.2 g to 0.3 g in western Yemen and generally is less than 0.05 g across central and eastern Yemen. The largest contributors to Yemen’s seismic hazard are the events from the West Arabian Shield seismic zone.
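The correspondence between "10% probability of exceedance in 50 years" and the 475-year return period follows from the standard Poisson-occurrence assumption, under which P_exceed = 1 − exp(−t/T). A quick check:

```python
import math

def return_period(p_exceed, t_years):
    """Return period T from exceedance probability p over t years,
    assuming Poisson occurrence: p = 1 - exp(-t / T)."""
    return -t_years / math.log(1.0 - p_exceed)

T10 = return_period(0.10, 50)   # ~475 years, the standard design-level hazard
T50 = return_period(0.50, 50)   # ~72 years, the second map in the paper
```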
Probabilistic Model for Fatigue Crack Growth in Welded Bridge Details
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard; Yalamas, Thierry
2013-01-01
In the present paper a probabilistic model for fatigue crack growth in welded steel details in road bridges is presented. The probabilistic model takes the influence of bending stresses in the joints into account. The bending stresses can either be introduced by e.g. misalignment or redistribution...
An ellipsoid algorithm for probabilistic robust controller design
Kanev, S.K.; de Schutter, B.; Verhaegen, M.H.G.
2003-01-01
In this paper, a new iterative approach to probabilistic robust controller design is presented, which is applicable to any robust controller/filter design problem that can be represented as an LMI feasibility problem. Recently, a probabilistic Subgradient Iteration algorithm was proposed for solving...
Pre-Processing Rules for Triangulation of Probabilistic Networks
Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den
2003-01-01
The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network’s graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size...
Pre-processing for Triangulation of Probabilistic Networks
Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der
2001-01-01
The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network’s graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size...
The JCSS probabilistic model code: Experience and recent developments
Chryssanthopoulos, M.; Diamantidis, D.; Vrouwenvelder, A.C.W.M.
2003-01-01
The JCSS Probabilistic Model Code (JCSS-PMC) has been available for public use on the JCSS website (www.jcss.ethz.ch) for over two years. During this period, several examples have been worked out and new probabilistic models have been added. Since the engineering community has already been exposed to...
Solving stochastic multiobjective vehicle routing problem using probabilistic metaheuristic
Directory of Open Access Journals (Sweden)
Gannouni Asmae
2017-01-01
... closed-form expression. This novel approach is based on combinatorial probability and can be incorporated in a multiobjective evolutionary algorithm. (ii) Provide probabilistic approaches to elitism and diversification in multiobjective evolutionary algorithms. Finally, the behavior of the resulting Probabilistic Multi-objective Evolutionary Algorithms (PrMOEAs) is empirically investigated on the multi-objective stochastic VRP.
Probabilistic G-Metric space and some fixed point results
Directory of Open Access Journals (Sweden)
A. R. Janfada
2013-01-01
Full Text Available In this note we introduce the notions of generalized probabilistic metric spaces and generalized Menger probabilistic metric spaces. After making some elementary observations and proving basic properties of these spaces, we prove some fixed point results in these spaces.
The Role of Language in Building Probabilistic Thinking
Nacarato, Adair Mendes; Grando, Regina Célia
2014-01-01
This paper is based on research that investigated the development of probabilistic language and thinking by students 10-12 years old. The focus was on the adequate use of probabilistic terms in social practice. A series of tasks was developed for the investigation and completed by the students working in groups. The discussions were video recorded…
Hybrid Probabilistic Logics: Theoretical Aspects, Algorithms and Experiments
Michels, S.
2016-01-01
Steffen Michels, Hybrid Probabilistic Logics: Theoretical Aspects, Algorithms and Experiments. Probabilistic logics aim at combining the properties of logic, that is, they provide a structured way of expressing knowledge and a mechanical way of reasoning about such knowledge, with the ability of prob...
Error Immune Logic for Low-Power Probabilistic Computing
Directory of Open Access Journals (Sweden)
Bo Marr
2010-01-01
... design for the maximum amount of energy savings per given error rate. SPICE simulation results using a commercially available and well-tested 0.25 μm technology are given, verifying the ultra-low-power probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.
Extension of contractive maps in the Menger probabilistic metric space
Energy Technology Data Exchange (ETDEWEB)
Razani, Abdolrahman [Department of Mathematics, Faculty of Science, Imam Khomeini International University, P.O. Box 34194-288 Qazvin (Iran, Islamic Republic of)]. E-mail: razani@ipm.ir; Fouladgar, Kaveh [Stanford University, Mathematics Building 380, 450 Serra Mall, Stanford, CA 94305-2125 (United States)]. E-mail: kfouladgar@yahoo.com
2007-12-15
In this article, the topological properties of the Menger probabilistic metric spaces and the mappings between these spaces are studied. In addition, contractive and k-contractive mappings are introduced. As an application, a new fixed point theorem in a chainable Menger probabilistic metric space is proved.
Understanding Probabilistic Thinking: The Legacy of Efraim Fischbein.
Greer, Brian
2001-01-01
Honors the contribution of Efraim Fischbein to the study and analysis of probabilistic thinking. Summarizes Fischbein's early work, then focuses on the role of intuition in mathematical and scientific thinking; the development of probabilistic thinking; and the influence of instruction on that development. (Author/MM)
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint
Energy Technology Data Exchange (ETDEWEB)
Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-08-31
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim of reducing the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to produce the baseline wind power forecast scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability density function (PDF) of the forecasting errors, and the cumulative distribution function (CDF) is deduced analytically. The inverse transform method, based on Monte Carlo sampling and the CDF, is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results for ramp duration and start time are generated from all scenarios. Numerical simulations on publicly available wind power data show that, within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
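The scenario-generation step can be sketched as follows (a toy two-component mixture with invented weights, means and standard deviations, not the fitted GMM from the paper; sampling the component first and then the component's normal is equivalent to inverting the mixture CDF by composition):

```python
import random

random.seed(42)

# Hypothetical fitted mixture of forecasting errors: (weight, mean, sigma).
mixture = [(0.7, 0.0, 1.0), (0.3, 2.0, 0.5)]

def sample_error():
    """Draw one forecast-error scenario from the Gaussian mixture."""
    u, acc = random.random(), 0.0
    for w, mu, sigma in mixture:
        acc += w
        if u <= acc:
            return random.gauss(mu, sigma)
    # Guard against floating-point weight sums slightly below 1.
    return random.gauss(*mixture[-1][1:])

scenarios = [sample_error() for _ in range(10000)]
mean_err = sum(scenarios) / len(scenarios)   # expected near 0.7*0 + 0.3*2 = 0.6
```

In the paper, each sampled error trajectory is added to the baseline forecast to form a wind power scenario, from which ramps are then extracted.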
Some ideas for learning CP-theories
2008-01-01
Causal Probabilistic logic (CP-logic) is a language for describing complex probabilistic processes. In this talk we consider the problem of learning CP-theories from data. We briefly discuss three possible approaches. First, we review the existing algorithm by Meert et al. Second, we show how simple CP-theories can be learned by using the learning algorithm for Logical Bayesian Networks and converting the result into a CP-theory. Third, we argue that for learning more complex CP-theories, an ...
Abstract Interpretation for Probabilistic Termination of Biological Systems
Gori, Roberta; 10.4204/EPTCS.11.9
2009-01-01
In a previous paper the authors applied the Abstract Interpretation approach to approximating the probabilistic semantics of biological systems, modeled specifically using the Chemical Ground Form calculus. The methodology is based on the idea of representing a set of experiments, which differ only in their initial concentrations, by abstracting the multiplicity of reagents present in a solution using intervals. In this paper, we refine the approach in order to address probabilistic termination properties. In more detail, we introduce a refinement of the abstract LTS semantics and abstract the probabilistic semantics using a variant of Interval Markov Chains. The abstract probabilistic model safely approximates a set of concrete experiments and reports conservative lower and upper bounds for probabilistic termination.
Probabilistic programming in Python using PyMC3
Directory of Open Access Journals (Sweden)
John Salvatier
2016-04-01
Full Text Available Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information that is often not readily available. PyMC3 is a new open-source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation, as well as to compile probabilistic programs on the fly to C for increased speed. In contrast to other probabilistic programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain-specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
Takashima, Hiroyuki; Mimura, Norio; Ohkubo, Tadayasu; Yoshida, Takuya; Tamaoki, Haruhiko; Kobayashi, Yuji
2004-04-14
Distributed computing has been applied to the solution-structure determination of endothelin-1 to evaluate the efficiency of the method for NMR constraint-based structure calculations. A key target of the investigation was the C-terminal folding of the peptide, which had remained dispersed in previous NMR studies despite its pharmacological significance. Using tens of thousands of random initial structures to explore the conformational space comprehensively, we determined high-resolution structures with good convergence of the C-terminal as well as the previously defined N-terminal structures. The previous studies had missed the C-terminal convergence because of initial-structure dependencies trapped in localized foldings of the N-terminal region, which is strongly constricted by two disulfide bonds.
En-INCA: Towards an integrated probabilistic nowcasting system
Suklitsch, Martin; Stuhl, Barbora; Kann, Alexander; Bica, Benedikt
2014-05-01
INCA (Integrated Nowcasting through Comprehensive Analysis), the analysis and nowcasting system operated by ZAMG, is based on blending observations with NWP data. Its performance is extremely high in the nowcasting range. However, uncertainties can be large even in the very short term and limit its practical use. Severe weather conditions are particularly demanding, which is why quantifying uncertainties and determining the probabilities of event occurrence add value for various applications. The nowcasting ensemble system En-INCA achieves this by coupling the INCA nowcast with ALADIN-LAEF, the EPS of the limited-area model ALADIN, which has been operated successfully at ZAMG for years. In En-INCA, the nowcasting approach of INCA is blended with different EPS members in order to derive an ensemble of forecasts in the nowcasting range. In addition to NWP-based uncertainties, specific perturbations with respect to observations, the analysis and the nowcasting techniques are discussed, and the influence of learning from errors in previous nowcasts is shown. En-INCA is a link between INCA and ALADIN-LAEF that merges the advantages of both systems: observation-based nowcasting at very high resolution on the one hand, and the uncertainty estimation of a state-of-the-art LAM-EPS on the other. Probabilistic nowcasting products can support various end users, e.g. civil protection agencies and the power industry, in optimizing their decision-making processes.
Learning System of Web Navigation Patterns through Hypertext Probabilistic Grammars
Cortes Vasquez, Augusto
2015-01-01
One issue of real interest in the area of web data mining is capturing users' activities during connection and extracting behavior patterns that help define their preferences, in order to improve the design of future pages by adapting website interfaces to individual users. This research is intended to provide, first of all, a presentation of the…
Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study †
Tîrnăucă, Cristina; Montaña, José L.; Ontañón, Santiago; González, Avelino J.; Pardo, Luis M.
2016-01-01
Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent’s actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956
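The recognition task can be sketched with two toy single-state PFAs for a hypothetical vacuum-cleaner agent (states, action symbols and probabilities below are invented, not taken from the paper): each strategy assigns a probability to an observed action trace, and the learner recognises the strategy under which the trace is most likely.

```python
# pfa[state][symbol] = (next_state, emission probability). One state suffices
# for this toy example; a real PFA would have several.
strategy_a = {"s": {"clean": ("s", 0.8), "dock": ("s", 0.2)}}
strategy_b = {"s": {"clean": ("s", 0.3), "dock": ("s", 0.7)}}

def trace_prob(pfa, trace, state="s"):
    """Probability of an observed action trace under a PFA."""
    p = 1.0
    for sym in trace:
        state, step = pfa[state][sym]
        p *= step
    return p

trace = ["clean", "clean", "clean", "dock"]
recognised = "A" if trace_prob(strategy_a, trace) > trace_prob(strategy_b, trace) else "B"
```

Behavioral cloning then corresponds to estimating these transition probabilities from observed traces rather than comparing against fixed ones.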
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited, due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, together with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Analyzing State Sequences with Probabilistic Suffix Trees: The PST R Package
Directory of Open Access Journals (Sweden)
Alexis Gabadinho
2016-08-01
Full Text Available This article presents the PST R package for categorical sequence analysis with probabilistic suffix trees (PSTs), i.e., structures that store variable-length Markov chains (VLMCs). VLMCs allow modeling high-order dependencies in categorical sequences with parsimonious models based on simple estimation procedures. The package is specifically adapted to the field of social sciences, as it allows VLMC models to be learned from sets of individual sequences possibly containing missing values; in addition, the package is extended to account for case weights. This article describes how a VLMC model is learned from one or more categorical sequences and stored in a PST. The PST can then be used for sequence prediction, i.e., to assign a probability to whole observed or artificial sequences. This feature supports data mining applications such as the extraction of typical patterns and outliers. This article also introduces original visualization tools for both the model and the outcomes of sequence prediction. Other features such as functions for pattern mining and artificial sequence generation are described as well. The PST package also allows for the computation of probabilistic divergence between two models and the fitting of segmented VLMCs, where sub-models fitted to distinct strata of the learning sample are stored in a single PST.
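The core prediction mechanism can be sketched with the simplest VLMC, a depth-1 (first-order) Markov chain, learned from a few invented training sequences; a real PST stores contexts of variable length, but the "estimate transition probabilities, then score a whole sequence" logic is the same:

```python
from collections import Counter, defaultdict

# Invented training sequences over the alphabet {A, B}.
train = ["ABAB", "ABBA", "AABB"]

# Count observed transitions a -> b.
counts = defaultdict(Counter)
for seq in train:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def p_next(a, b):
    """Maximum-likelihood estimate of P(next = b | current = a)."""
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

def seq_prob(seq):
    """Probability assigned to a whole sequence (ignoring the initial symbol)."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= p_next(a, b)
    return p
```

Low `seq_prob` values flag outlier sequences, which is the data-mining use the abstract describes.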
Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques
Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili
2009-01-01
In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
Directory of Open Access Journals (Sweden)
Sharma Gaurav
2007-04-01
Full Text Available Abstract. Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e., folds) that are permissible. In this paper, a new methodology is presented for establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free-energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results: The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign, while simultaneously providing a small improvement in structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion: Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for...
Opportunities of probabilistic flood loss models
Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno
2016-04-01
Oftentimes, traditional uni-variate damage models such as depth-damage curves fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely bagging decision trees and Bayesian networks, with traditional stage-damage functions. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial-transfer context. Flood damage estimation is carried out on the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), sharpness of the predictions, and reliability, which is represented by the proportion of observations that fall within the 5%–95% predictive interval. The comparison of the uni-variate stage-damage function and the multi-variate model approaches emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variate model reveals an additional source of uncertainty. However, the predictive performance in terms of bias (MBE), precision (MAE) and reliability (HR) is clearly improved.
Development of probabilistic internal dosimetry computer code
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake-activity and internal-dose estimates. For research or scientific purposes, and for retrospective dose reconstruction in accident scenarios at workplaces holding large quantities of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was designed. Based on this system, we developed a probabilistic internal-dose-assessment code in MATLAB to estimate dose distributions from measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose, in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of...
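The Monte Carlo propagation step can be sketched as follows (the lognormal measurement and dose-coefficient distributions, and all parameter values, are invented placeholders, not the code's actual models):

```python
import math
import random

random.seed(1)

def sampled_dose():
    """One Monte Carlo realisation: sample each uncertain component,
    then propagate to a committed-dose estimate."""
    measured_bq = random.lognormvariate(math.log(100.0), 0.3)  # activity, Bq
    dose_coeff = random.lognormvariate(math.log(1e-6), 0.2)    # Sv per Bq
    return measured_bq * dose_coeff

# Propagate and report percentiles of the resulting dose distribution.
doses = sorted(sampled_dose() for _ in range(20000))
median = doses[len(doses) // 2]
p95 = doses[int(0.95 * len(doses))]
```

The real code draws from an uncertainty database per component and includes biokinetic-model uncertainty, but the percentile reporting works the same way.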
Multivariate postprocessing techniques for probabilistic hydrological forecasting
Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian
2016-04-01
Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process with typical events lasting from hours in case of floods to weeks or even months in case of droughts. This calls for multivariate postprocessing techniques that yield well calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power
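The ECC idea can be sketched for a single lead time (illustrative numbers, not data from the study): the postprocessed samples are reordered so that they inherit the rank, and hence the dependence, structure of the raw ensemble.

```python
# Raw ensemble members for one lead time (invented values).
raw_ensemble = [3.1, 1.2, 2.4]

# Equally sized sample from the calibrated (postprocessed) predictive
# distribution, sorted from smallest to largest.
calibrated = sorted([1.0, 2.0, 3.0])

# Rank of each raw member (0 = smallest).
ranks = [sorted(raw_ensemble).index(v) for v in raw_ensemble]

# ECC: assign the r-th smallest calibrated value to the member with rank r.
ecc_members = [calibrated[r] for r in ranks]
```

Applied independently at every lead time, this keeps each marginal calibrated while member i traces a temporally coherent trajectory, exactly the property needed for multivariate hydrological forecasts.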
Energy Technology Data Exchange (ETDEWEB)
Ramirez Vera, M. L.; Perez Mulas, A.; Delgado, J. M.; Barrientos Ontero, M.; Somoano, F.; Alvarez Garcia, C.; Rodriguez Marti, M.
2011-07-01
The understanding of accidents that have occurred in radiotherapy and the lessons learned from them are very useful to prevent repetition, but there are other risks that have not yet been detected. With a view to identifying and preventing such risks, proactive methods that have been applied successfully in other fields, such as probabilistic safety assessment (PSA), have been developed. (Author)
Bloch, Isabelle
2010-01-01
The area of information fusion has grown considerably during the last few years, leading to a rapid and impressive evolution. In such fast-moving times, it is important to take stock of the changes that have occurred. As such, this book offers an overview of the general principles and specificities of information fusion in signal and image processing, as well as covering the main numerical methods (probabilistic approaches, fuzzy sets and possibility theory, and belief functions).
Reduction Mappings between Probabilistic Boolean Networks
Directory of Open Access Journals (Sweden)
Ivan Ivanov
2004-01-01
Full Text Available Probabilistic Boolean networks (PBNs) comprise a model describing a directed graph with rule-based dependencies between its nodes. The rules are selected based on a given probability distribution, which provides flexibility when dealing with the uncertainty that is typical of genetic regulatory networks. Given the computational complexity of the model, the characterization of mappings reducing the size of a given PBN becomes a critical issue. Mappings between PBNs are also important from a theoretical point of view: they provide means for developing a better understanding of the dynamics of PBNs. This paper considers two kinds of mappings, reduction and projection, and their effect on the original probability structure of a given PBN.
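The update rule of a PBN can be made concrete with a toy example: each node carries several candidate Boolean predictor functions, and at each synchronous step one predictor per node is drawn according to its selection probability. The two-node network and its rules below are illustrative, not taken from the paper:

```python
import random

# Toy PBN: each node maps to a list of (probability, rule) pairs; a rule
# takes the current state (tuple of 0/1 values) and returns the node's
# next value. Rules and probabilities are assumed for illustration.
PBN = {
    0: [(0.7, lambda s: s[1]),          # node 0 usually copies node 1 ...
        (0.3, lambda s: 1 - s[1])],     # ... but sometimes negates it
    1: [(1.0, lambda s: s[0] & s[1])],  # node 1 is always the AND of both
}

def step(state, rng):
    """One synchronous update: independently select a rule for each node."""
    nxt = []
    for node in sorted(PBN):
        r = rng.random()
        acc = 0.0
        for prob, rule in PBN[node]:
            acc += prob
            if r <= acc:
                nxt.append(rule(state))
                break
    return tuple(nxt)

rng = random.Random(42)
state = (1, 1)
trajectory = [state]
for _ in range(5):
    state = step(state, rng)
    trajectory.append(state)
```

A reduction mapping in the paper's sense would collapse such a network onto one with fewer nodes while tracking how the selection probabilities transform; the simulation above only shows the dynamics being reduced.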
Probabilistic Catalogs for Crowded Stellar Fields
Brewer, Brendon J; Hogg, David W
2012-01-01
We introduce a probabilistic (Bayesian) method for producing catalogs from images of crowded stellar fields. The method is capable of inferring the number of sources (N) in the image and can also handle the challenges introduced by overlapping sources. The luminosity function of the stars can also be inferred even when the precise luminosity of each star is uncertain. This is in contrast with standard techniques which produce a single catalog, potentially underestimating the uncertainties in any study of the stellar population and discarding information about sources at or below the detection limit. The method is implemented using advanced Markov Chain Monte Carlo (MCMC) techniques including Reversible Jump and Nested Sampling. The computational feasibility of the method is demonstrated on simulated data where the luminosity function of the stars is a broken power-law. The parameters of the luminosity function can be recovered with moderate uncertainties. We compare the results obtained from our method with t...
Probabilistic approach to study the hydroformed sheet
Directory of Open Access Journals (Sweden)
Mohammed Nassraoui
2015-08-01
Full Text Available Under the leadership of the Kyoto agreements on reducing emissions of greenhouse gases, the automotive sector was forced to review its methods and production technologies in order to meet the new environmental standards. Reducing fuel consumption is an immediate way to reduce the emission of polluting gases. In this paper, a study of the formability of sheet metal submitted to the hydroforming process is proposed. Numerical results are given to validate the proposed approach. To show the influence of uncertainties on the process, some characteristics of the material are taken as random and a probabilistic approach is applied. The results show the effectiveness of the proposed approach.
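The probabilistic treatment described above can be sketched with a plain Monte Carlo loop: material characteristics are drawn from assumed distributions, propagated through a limit-state function, and the failure probability is the fraction of samples violating the forming criterion. The response model, distributions, and threshold below are stand-ins, not the paper's actual formability model:

```python
import numpy as np

rng = np.random.default_rng(1)

def limit_state(yield_stress, hardening_n, pressure):
    """Toy limit-state function g: failure when g < 0, i.e. when the
    induced strain exceeds an assumed forming-limit threshold of 0.30."""
    strain = pressure / (yield_stress * (1.0 + hardening_n))
    return 0.30 - strain

n = 100_000
sigma_y = rng.normal(180.0, 10.0, n)   # yield stress in MPa, assumed law
n_hard = rng.normal(0.20, 0.02, n)     # hardening exponent, assumed law
p = 60.0                               # fixed process pressure in MPa, assumed

g = limit_state(sigma_y, n_hard, p)
p_fail = np.mean(g < 0.0)              # Monte Carlo failure probability
```

The same loop structure applies when the limit-state function is a finite-element hydroforming simulation instead of a closed-form toy; only the cost per sample changes.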
Adaptive Probabilistic Flooding for Multipath Routing
Betoule, Christophe; Clavier, Remi; Rossi, Dario; Rossini, Giuseppe; Thouenon, Gilles
2011-01-01
In this work, we develop a distributed source routing algorithm for topology discovery suitable for ISP transport networks, which is nevertheless inspired by opportunistic algorithms used in ad hoc wireless networks. We propose a plug-and-play control plane, able to find multiple paths toward the same destination, and introduce a novel algorithm, called adaptive probabilistic flooding, to achieve this goal. By keeping a small amount of state in routers taking part in the discovery process, our technique significantly limits the amount of control messages exchanged with flooding -- and, at the same time, it only minimally affects the quality of the discovered multiple paths with respect to the optimal solution. Simple analytical bounds, confirmed by results gathered with extensive simulation on four realistic topologies, show our approach to be of high practical interest.
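The basic trade-off behind probabilistic flooding can be illustrated on a toy topology: each node forwards the discovery message to its neighbours with probability p instead of always, trading coverage against control-message cost. This is a minimal non-adaptive sketch, not the paper's adaptive algorithm, and the topology is made up:

```python
import random

# Adjacency list of a small assumed topology.
TOPOLOGY = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3],
}

def probabilistic_flood(source, p, rng):
    """Flood a discovery message from source; each newly reached node is
    accepted with probability p. Returns (covered nodes, message count)."""
    visited = {source}
    frontier = [source]
    messages = 0
    while frontier:
        nxt = []
        for node in frontier:
            for nb in TOPOLOGY[node]:
                messages += 1  # every transmission is a control message
                if nb not in visited and rng.random() < p:
                    visited.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return visited, messages

# p = 1.0 degenerates to plain flooding: full coverage, maximal cost.
covered, cost = probabilistic_flood(0, p=1.0, rng=random.Random(7))
```

The adaptive variant in the paper additionally tunes p per node using a small amount of local state, so that the message count drops well below plain flooding while multiple paths per destination are still discovered.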
Probabilistic risk assessment of disassembly procedures
Energy Technology Data Exchange (ETDEWEB)
O'Brien, D.A.; Bement, T.R.; Letellier, B.C.
1993-11-01
The purpose of this report is to describe the use of Probabilistic Risk (Safety) Assessment (PRA or PSA) at a Department of Energy (DOE) facility. PRA is a methodology for (i) identifying combinations of events that, if they occur, lead to accidents, (ii) estimating the frequency of occurrence of each combination of events, and (iii) estimating the consequences of each accident. Specifically, the study focused on evaluating the risks associated with disassembling a hazardous assembly. The PRA for the operation included a detailed evaluation only for those potential accident sequences which could lead to significant off-site consequences and affect public health. The overall purpose of this study was to investigate the feasibility of establishing a risk-consequence goal for DOE operations.
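The three PRA steps enumerated above reduce, in their simplest form, to summing frequency-times-consequence over accident sequences. The initiator frequency, event probabilities, and consequence values below are invented for illustration, not figures from the report:

```python
# Step (i): identify combinations of events (accident sequences).
# Step (ii): estimate each sequence's frequency as the initiator frequency
#            times the conditional probabilities of its events (assumed
#            independent here for simplicity).
# Step (iii): weight each sequence by its consequence and sum.

INITIATOR_FREQ = 1e-2          # initiating events per year, assumed
EVENTS = {                     # conditional failure probabilities, assumed
    "containment_fails": 1e-3,
    "filter_fails": 5e-2,
}

SEQUENCES = [
    # (events that must all occur, consequence in arbitrary dose units)
    (("containment_fails",), 100.0),
    (("containment_fails", "filter_fails"), 1000.0),
]

total_risk = 0.0
for events, consequence in SEQUENCES:
    freq = INITIATOR_FREQ
    for e in events:
        freq *= EVENTS[e]
    total_risk += freq * consequence
```

A risk-consequence goal of the kind the study investigates would then be a threshold that `total_risk` (or each sequence's contribution) must stay below.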
Probabilistic solution of relative entropy weighted control
Bierkens, Joris
2012-01-01
We show that stochastic control problems with a particular cost structure involving a relative entropy term admit a purely probabilistic solution, without the necessity of applying the dynamic programming principle. The argument is as follows. Minimization of the expectation of a random variable with respect to the underlying probability measure, penalized by relative entropy, may be solved exactly. In the case where the randomness is generated by a standard Brownian motion, this exact solution can be written as a Girsanov density. The stochastic process appearing in the Girsanov exponent has the role of control process, and the relative entropy of the change of probability measure is equal to the integral of the square of this process. An explicit expression for the control process may be obtained in terms of the Malliavin derivative of the density process. The theory is applied to the problem of minimizing the maximum of a Brownian motion (penalized by the relative entropy), leading to an explicit expressio...
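The exact minimization the abstract relies on has a finite-state analogue that is easy to verify numerically: over distributions Q, min_Q E_Q[f] + KL(Q || P) = -log E_P[exp(-f)], attained at Q* proportional to P * exp(-f), which is the discrete counterpart of the Girsanov density. The distribution P and cost f below are arbitrary illustrative numbers:

```python
import numpy as np

P = np.array([0.2, 0.5, 0.3])   # reference measure, assumed
f = np.array([1.0, 3.0, 0.5])   # cost to be minimized in expectation, assumed

Z = np.sum(P * np.exp(-f))
Q_star = P * np.exp(-f) / Z     # optimal exponentially tilted measure
value = -np.log(Z)              # optimal cost -log E_P[exp(-f)]

# Evaluate the penalized objective at the candidate optimum:
kl = np.sum(Q_star * np.log(Q_star / P))
objective = np.sum(Q_star * f) + kl
```

The objective evaluated at Q* matches the closed-form value exactly, and any other measure (e.g. P itself, with zero entropy penalty) gives a larger objective, mirroring the exactness of the probabilistic solution in the continuous Brownian setting.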
Probabilistic sampling of finite renewal processes
Antunes, Nelson (DOI: 10.3150/10-BEJ321)
2012-01-01
Consider a finite renewal process in the sense that interrenewal times are positive i.i.d. variables and the total number of renewals is a random variable, independent of interrenewal times. A finite point process can be obtained by probabilistic sampling of the finite renewal process, where each renewal is sampled with a fixed probability and independently of other renewals. The problem addressed in this work concerns statistical inference of the original distributions of the total number of renewals and interrenewal times from a sample of i.i.d. finite point processes obtained by sampling finite renewal processes. This problem is motivated by traffic measurements in the Internet in order to characterize flows of packets (which can be seen as finite renewal processes) and where the use of packet sampling is becoming prevalent due to increasing link speeds and limited storage and processing capacities.
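The sampling scheme described above is independent thinning: each renewal of the finite process is kept with a fixed probability p, so the expected number of sampled points is p times the expected number of renewals. The sketch below uses assumed distributions (a uniform number of renewals and exponential inter-renewal times) to check that property empirically:

```python
import random

rng = random.Random(3)

def finite_renewal_process(rng):
    """One finite renewal process: a random total number of renewals with
    i.i.d. positive inter-renewal times (both distributions are assumed)."""
    n = rng.randint(1, 10)                # total renewals, assumed law
    times, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(1.0)         # exponential gaps, assumed law
        times.append(t)
    return times

def sample_renewals(times, p, rng):
    """Probabilistic sampling: keep each renewal independently with prob p."""
    return [t for t in times if rng.random() < p]

p = 0.4
total_kept = total_n = 0
for _ in range(20000):
    times = finite_renewal_process(rng)
    kept = sample_renewals(times, p, rng)
    total_n += len(times)
    total_kept += len(kept)

empirical_p = total_kept / total_n  # should concentrate around p
```

The inference problem in the paper runs the other way: given only many realizations of `kept` (sampled packets of a flow), recover the distributions of `n` and of the inter-renewal times.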
Monolingual Probabilistic Programming Using Generalized Coroutines
Kiselyov, Oleg
2012-01-01
Probabilistic programming languages and modeling toolkits are two modular ways to build and reuse stochastic models and inference procedures. Combining strengths of both, we express models and inference as generalized coroutines in the same general-purpose language. We use existing facilities of the language, such as rich libraries, optimizing compilers, and types, to develop concise, declarative, and realistic models with competitive performance on exact and approximate inference. In particular, a wide range of models can be expressed using memoization. Because deterministic parts of models run at full speed, custom inference procedures are trivial to incorporate, and inference procedures can reason about themselves without interpretive overhead. Within this framework, we introduce a new, general algorithm for importance sampling with look-ahead.
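The coroutine idea can be illustrated in miniature with Python generators (the paper itself works in a different host language, so this is an analogy, not its implementation): the model is a generator that yields weighted choice points and receives the chosen value back, and exact inference enumerates every execution path by replaying the generator:

```python
def model():
    # Each yield hands inference a list of (value, probability) options
    # and resumes with the value inference picked. Numbers are assumed.
    rain = yield [(True, 0.3), (False, 0.7)]
    sprinkler = yield [(True, 0.1 if rain else 0.5),
                       (False, 0.9 if rain else 0.5)]
    return rain or sprinkler  # "grass is wet"

def enumerate_dist(make_model):
    """Exact inference by exhaustive enumeration of all executions.
    Generators cannot be cloned, so each branch replays from scratch."""
    results = {}

    def explore(prefix, weight):
        gen = make_model()
        try:
            options = gen.send(None)
            for value in prefix:          # replay choices made so far
                options = gen.send(value)
        except StopIteration as stop:     # model finished: record outcome
            results[stop.value] = results.get(stop.value, 0.0) + weight
            return
        for value, prob in options:       # branch on the next choice point
            explore(prefix + [value], weight * prob)

    explore([], 1.0)
    return results

dist = enumerate_dist(model)
```

Because the model is ordinary host-language code, deterministic parts run at full speed and inference strategies other than enumeration (e.g. importance sampling with look-ahead, as in the paper) can drive the same coroutine.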
Probabilistic view clustering in object recognition
Camps, Octavia I.; Christoffel, Douglas W.; Pathak, Anjali
1992-11-01
To recognize objects and to determine their poses in a scene, we need to find correspondences between the features extracted from the image and those of the object models. Models are commonly represented by describing a few characteristic views of the object, each representing a group of views with similar properties. Most feature-based matching schemes assume that all the features that are potentially visible in a view will appear with equal probability, and the resulting matching algorithms have to allow for 'errors' without really understanding what they mean. PREMIO is an object recognition system that uses CAD models of 3D objects and knowledge of surface reflectance properties, light sources, sensor characteristics, and feature detector algorithms to estimate the probability of the features being detectable and correctly matched. The purpose of this paper is to describe the predictions generated by PREMIO, how they are combined into a single probabilistic model, and illustrative examples showing its use in object recognition.
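One simple way such per-feature predictions can be combined into a single probabilistic model is sketched below; the feature names and probabilities are invented, and the independence assumption is an illustrative simplification rather than PREMIO's actual combination scheme:

```python
# Per-feature predictions of the kind PREMIO-style prediction would yield:
# the probability a feature is detected, and the probability it is matched
# correctly once detected. All numbers are assumed.
FEATURES = {
    "edge_a":   {"p_detect": 0.90, "p_match": 0.80},
    "edge_b":   {"p_detect": 0.60, "p_match": 0.70},
    "corner_c": {"p_detect": 0.95, "p_match": 0.90},
}

# A feature supports a match only if it is both detected and matched.
p_usable = {name: f["p_detect"] * f["p_match"] for name, f in FEATURES.items()}

# Probability that at least one feature supports the view hypothesis,
# treating features as independent (a simplifying assumption):
p_none = 1.0
for p in p_usable.values():
    p_none *= (1.0 - p)
p_any = 1.0 - p_none
```

Weighting features by such probabilities lets a matcher prefer reliable features instead of treating all potentially visible features as equally likely to appear.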