#### Sample records for learning probability trees

1. Analytical and numerical studies of creation probabilities of hierarchical trees

Directory of Open Access Journals (Sweden)

S.S. Borysov

2011-03-01

Full Text Available We consider the creation conditions of diverse hierarchical trees both analytically and numerically. A connection between the probabilities to create hierarchical levels and the probability to associate these levels into a united structure is studied. We argue that a consistent probabilistic picture requires the use of deformed algebra. Our consideration is based on the study of the main types of hierarchical trees, among which both regular and degenerate ones are studied analytically, while the creation probabilities of Fibonacci, scale-free and arbitrary trees are determined numerically.

2. Maximum parsimony, substitution model, and probability phylogenetic trees.

Science.gov (United States)

Weng, J F; Thomas, D A; Mareels, I

2011-01-01

The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the best studied and most popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

3. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

Science.gov (United States)

Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

2011-01-01

Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available, meaning that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations exist in R and may be used for applications. PMID:21915433
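The nearest-neighbor variant of such a probability machine is easy to sketch. The Python toy below is not the paper's R code; the data-generating model and the choice k = 101 are illustrative assumptions. It estimates P(y = 1 | x) as the fraction of positive labels among the k nearest neighbors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption, not the paper's data): the true probability
# of a positive response is P(y = 1 | x) = x, with x uniform on [0, 1].
n = 5000
x = rng.random(n)
y = (rng.random(n) < x).astype(float)

def knn_probability(x_query, x_train, y_train, k=101):
    """k-nearest-neighbour probability machine: estimate P(y = 1 | x)
    as the fraction of positive labels among the k nearest neighbours.
    Letting k grow with n (while k/n -> 0) gives a consistent estimator."""
    idx = np.argsort(np.abs(x_train - x_query))[:k]
    return y_train[idx].mean()

for q in (0.2, 0.5, 0.8):
    print(f"P(y=1 | x={q}) ~ {knn_probability(q, x, y):.2f}")
```

With this setup the estimates land close to the true probabilities 0.2, 0.5, and 0.8, illustrating why a consistent regression learner doubles as a probability estimator.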

4. Probability intervals for the top event unavailability of fault trees

International Nuclear Information System (INIS)

Lee, Y.T.; Apostolakis, G.E.

1976-06-01

The evaluation of probabilities of rare events is of major importance in the quantitative assessment of the risk from large technological systems. In particular, for nuclear power plants the complexity of the systems, their high reliability and the lack of significant statistical records have led to the extensive use of logic diagrams in the estimation of low probabilities. The estimation of probability intervals for the probability of existence of the top event of a fault tree is examined. Given the uncertainties of the primary input data, a method is described for the evaluation of the first four moments of the top event occurrence probability. These moments are then used to estimate confidence bounds by several approaches which are based on standard inequalities (e.g., Tchebycheff, Cantelli, etc.) or on empirical distributions (the Johnson family). Several examples indicate that the Johnson family of distributions yields results which are in good agreement with those produced by Monte Carlo simulation.
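The moment-then-bound step can be illustrated in Python. The fault tree below (top event = (B1 AND B2) OR B3) and the lognormal input uncertainties are invented for illustration; only the Cantelli inequality itself is taken as given:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented fault tree: top event = (B1 AND B2) OR B3, with lognormally
# distributed uncertainty on the basic-event probabilities.
n = 100_000
p1 = rng.lognormal(np.log(1e-3), 0.5, n)
p2 = rng.lognormal(np.log(2e-3), 0.5, n)
p3 = rng.lognormal(np.log(1e-4), 0.7, n)

q = 1.0 - (1.0 - p1 * p2) * (1.0 - p3)   # top-event probability, per sample

mu = q.mean()
m2, m3, m4 = (((q - mu) ** k).mean() for k in (2, 3, 4))  # central moments
sigma = np.sqrt(m2)

# One-sided Cantelli inequality: P(Q >= mu + lam) <= sigma^2/(sigma^2 + lam^2).
# Setting the right-hand side to 0.05 gives a distribution-free 95% upper bound.
upper95 = mu + sigma * np.sqrt(0.95 / 0.05)
print(f"mean={mu:.3e}  sd={sigma:.3e}  95% upper bound={upper95:.3e}")
```

A Johnson-family fit would use the third and fourth moments (m3, m4) as well; the Cantelli bound above uses only the first two and is correspondingly more conservative.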

5. Python for probability, statistics, and machine learning

CERN Document Server

Unpingco, José

2016-01-01

This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...

6. Meta-learning in decision tree induction

CERN Document Server

Grąbczewski, Krzysztof

2014-01-01

The book focuses on different variants of decision tree induction but also describes the meta-learning approach in general, which is applicable to other types of machine learning algorithms. The book discusses different variants of decision tree induction and represents a useful source of information for readers wishing to review some of the techniques used in decision tree learning, as well as different ensemble methods that involve decision trees. It is shown that knowledge of the different components used within decision tree learning needs to be systematized to enable the system to generate and evaluate different variants of machine learning algorithms with the aim of identifying the top performers or potentially the best one. A unified view of decision tree learning makes it possible to emulate different decision tree algorithms simply by setting certain parameters. As meta-learning requires running many different processes with the aim of obtaining performance results, a detailed description of the experimen...

7. STRIP: stream learning of influence probabilities

DEFF Research Database (Denmark)

Kutzkov, Konstantin

2013-01-01

cascades, and developing applications such as viral marketing. Motivated by modern microblogging platforms, such as Twitter, in this paper we study the problem of learning influence probabilities in a data-stream scenario, in which the network topology is relatively stable and the challenge of a learning algorithm is to keep up with a continuous stream of tweets using a small amount of time and memory. Our contribution is a number of randomized approximation algorithms, categorized according to the available space (superlinear, linear, and sublinear in the number of nodes n) and according to different models…

8. Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty

International Nuclear Information System (INIS)

Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro

2015-01-01

Highlights: • Fuzzy probability based fault tree analysis is developed to evaluate epistemic uncertainty in fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. Those new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis to overcome this limitation of fuzzy fault tree analysis. To demonstrate the applicability of the proposed approach, a case study is performed and its results are then compared to the results analyzed by a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible for propagating and quantifying epistemic uncertainties in fault tree analysis.
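A minimal sketch of the two gate rules, assuming triangular fuzzy numbers (lower bound, mode, upper bound) and independent basic events with invented likelihoods; the paper's own membership functions and arithmetic may differ:

```python
# Triangular fuzzy number represented as a tuple (low, mode, high).

def AND(*ps):
    """Fuzzy multiplication rule for an AND gate (independent events).
    The product is increasing in each argument, so endpoints multiply."""
    a = m = b = 1.0
    for (lo, mo, hi) in ps:
        a, m, b = a * lo, m * mo, b * hi
    return (a, m, b)

def NOT(p):
    """Fuzzy complement rule: 1 - p reverses the endpoints."""
    lo, mo, hi = p
    return (1 - hi, 1 - mo, 1 - lo)

def OR(*ps):
    """OR gate by De Morgan: 1 - prod(1 - p_i)."""
    return NOT(AND(*(NOT(p) for p in ps)))

# Invented fuzzy likelihoods for three basic events (illustration only)
b1, b2, b3 = (0.01, 0.02, 0.04), (0.05, 0.10, 0.15), (0.001, 0.002, 0.003)
top = OR(AND(b1, b2), b3)
print(top)   # fuzzy probability of the top event, still (low, mode, high)
```

The width of the resulting triangle (high minus low) is the propagated epistemic uncertainty of the top event.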

9. Calculating the probability of multitaxon evolutionary trees: bootstrappers Gambit.

OpenAIRE

Lake, J A

1995-01-01

The reconstruction of multitaxon trees from molecular sequences is confounded by the variety of algorithms and criteria used to evaluate trees, making it difficult to compare the results of different analyses. A global method of multitaxon phylogenetic reconstruction described here, Bootstrappers Gambit, can be used with any four-taxon algorithm, including distance, maximum likelihood, and parsimony methods. It incorporates a Bayesian-Jeffreys'-bootstrap analysis to provide a uniform probabil...

10. Probability distribution of long-run indiscriminate felling of trees in ...

African Journals Online (AJOL)

The study was undertaken to determine the probability distribution of long-run indiscriminate felling of trees in the northern senatorial district of Adamawa State. Specifically, the study focused on examining the future direction of indiscriminate felling of trees as well as its equilibrium distribution. A multi-stage and simple random ...
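If the felling process is modeled as a Markov chain, the equilibrium distribution the record refers to can be computed by power iteration. The three states and transition probabilities below are purely hypothetical:

```python
import numpy as np

# Hypothetical 3-state chain for felling intensity (low, moderate, heavy);
# the transition probabilities are invented purely for illustration.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

pi = np.full(3, 1.0 / 3.0)    # any starting distribution works
for _ in range(200):          # power iteration: pi <- pi @ P
    pi = pi @ P

print(pi.round(3))            # long-run (equilibrium) distribution
```

At convergence pi satisfies pi = pi P, so the printed vector no longer changes under further transitions.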

11. Blind Students' Learning of Probability through the Use of a Tactile Model

Science.gov (United States)

Vita, Aida Carvalho; Kataoka, Verônica Yumi

2014-01-01

The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). The activities formed part of the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys) and pictograms in 3D…

12. Are baboons learning "orthographic" representations? Probably not.

Directory of Open Access Journals (Sweden)

Maja Linke

Full Text Available The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior, including key lexical similarity effects and the ups and downs in accuracy as learning unfolds, with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans involves first learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that, like the latter, fluent reading involves the massively parallel processing of low-level features encoding semantic contrasts.

13. Probability & Statistics: Modular Learning Exercises. Teacher Edition

Science.gov (United States)

Actuarial Foundation, 2012

2012-01-01

The purpose of these modules is to provide an introduction to the world of probability and statistics to accelerated mathematics students at the high school level. The modules also introduce students to real world math concepts and problems that property and casualty actuaries come across in their work. They are designed to be used by teachers and…

14. Probability & Statistics: Modular Learning Exercises. Student Edition

Science.gov (United States)

Actuarial Foundation, 2012

2012-01-01

The purpose of these modules is to provide an introduction to the world of probability and statistics to accelerated mathematics students at the high school level. The materials are centered on the fictional town of Happy Shores, a coastal community which is at risk for hurricanes. Actuaries at an insurance company figure out the risks and…

15. Mathematical analysis and modeling of epidemics of rubber tree root diseases: Probability of infection of an individual tree

Energy Technology Data Exchange (ETDEWEB)

Chadoeuf, J.; Joannes, H.; Nandris, D.; Pierrat, J.C.

1988-12-01

The spread of root diseases in rubber tree (Hevea brasiliensis) due to Rigidoporus lignosus and Phellinus noxius was investigated epidemiologically using data collected every 6 months during a 6-year survey in a plantation. The aim of the present study is to see what factors could predict whether a given tree would be infested at the following inspection. Using a qualitative regression method we expressed the probability of pathogenic attack on a tree in terms of three factors: the state of health of the surrounding trees, the method used to clear the forest prior to planting, and evolution with time. The effects of each factor were ranked, and the roles of the various classes of neighbors were established and quantified. Variability between successive inspections was small, and the method of forest clearing was important only while primary inocula in the soil were still infectious. The state of health of the immediate neighbors was most significant; more distant neighbors in the same row had some effect; interrow spread was extremely rare. This investigation dealt only with trees as individuals, and further study of the interrelationships of groups of trees is needed.

16. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

Science.gov (United States)

Doubravsky, Karel; Dohnal, Mirko

2015-01-01

Complex decision-making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available, which means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

17. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

Directory of Open Access Journals (Sweden)

Karel Doubravsky

Full Text Available Complex decision-making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available, which means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

18. Probability Modeling and Thinking: What Can We Learn from Practice?

Science.gov (United States)

Pfannkuch, Maxine; Budgett, Stephanie; Fewster, Rachel; Fitch, Marie; Pattenwise, Simeon; Wild, Chris; Ziedins, Ilze

2016-01-01

Because new learning technologies are enabling students to build and explore probability models, we believe that there is a need to determine the big enduring ideas that underpin probabilistic thinking and modeling. By uncovering the elements of the thinking modes of expert users of probability models we aim to provide a base for the setting of…

19. Fostering Positive Attitude in Probability Learning Using Graphing Calculator

Science.gov (United States)

Tan, Choo-Kim; Harji, Madhubala Bava; Lau, Siong-Hoe

2011-01-01

Although a plethora of research evidence highlights positive and significant outcomes of the incorporation of the Graphing Calculator (GC) in mathematics education, its use in the teaching and learning process appears to be limited. The obvious need to revisit the teaching and learning of Probability has resulted in this study, i.e. to incorporate…

20. Rooting phylogenetic trees under the coalescent model using site pattern probabilities.

Science.gov (United States)

Tian, Yuan; Kubatko, Laura

2017-12-19

Phylogenetic tree inference is a fundamental tool for estimating ancestor-descendant relationships among different species. In phylogenetic studies, identification of the root (the most recent common ancestor of all sampled organisms) is essential for a complete understanding of the evolutionary relationships. Rooted trees benefit most downstream applications of phylogenies, such as species classification or the study of adaptation. Often, trees can be rooted by using outgroups, which are species known to be more distantly related to the sampled organisms than any other species in the phylogeny. However, outgroups are not always available in evolutionary research. In this study, we develop a new method for rooting species trees under the coalescent model by developing a series of hypothesis tests for rooting quartet phylogenies using site pattern probabilities. The power of this method is examined by simulation studies and by application to an empirical North American rattlesnake data set. The method shows high accuracy across the simulation conditions considered and performs well for the rattlesnake data. Thus, it provides a computationally efficient way to accurately root species-level phylogenies that incorporates the coalescent process. The method is robust to variation in the substitution model but is sensitive to the assumption of a molecular clock. Our study establishes a computationally practical method for rooting species trees that is more efficient than traditional methods. The method will benefit numerous evolutionary studies that require rooting a phylogenetic tree without having to specify outgroups.

1. Predicting the probability of mortality of gastric cancer patients using decision tree.

Science.gov (United States)

Mohammadzadeh, F; Noorkojuri, H; Pourhoseingholi, M A; Saadat, S; Baghestani, A R

2015-06-01

Gastric cancer is the fourth most common cancer worldwide. This motivated us to investigate and introduce gastric cancer risk factors utilizing statistical methods. The aim of this study was to identify the most important factors influencing the mortality of patients who suffer from gastric cancer and to introduce a classification approach based on a decision tree model for predicting the probability of mortality from this disease. Data on 216 patients with gastric cancer, who were registered in Taleghani hospital in Tehran, Iran, were analyzed. At first, patients were divided into two groups: the dead and the alive. Then, to fit the decision tree model to our data, we randomly selected 20% of the dataset as the test sample and considered the remaining dataset as the training sample. Finally, the validity of the model was examined with sensitivity, specificity, diagnostic accuracy and the area under the receiver operating characteristic curve. The CART version 6.0 and SPSS version 19.0 software were used for the analysis of the data. Diabetes, ethnicity, tobacco, tumor size, surgery, pathologic stage, age at diagnosis, exposure to chemical weapons and alcohol consumption were determined as factors affecting mortality from gastric cancer. The sensitivity, specificity and accuracy of the decision tree were 0.72, 0.75 and 0.74, respectively. These indices indicate that the decision tree model has acceptable accuracy for predicting the probability of mortality in gastric cancer patients. So a simple decision tree consisting of factors affecting mortality from gastric cancer may help clinicians as a reliable and practical tool to predict the probability of mortality in these patients.
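The validation indices can be reproduced from a 2x2 confusion matrix on the test sample. The counts below are invented (chosen to roughly match a ~20% test split of 216 patients and the reported indices); the paper reports only the resulting metrics:

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion matrix,
    the three indices used to validate the decision tree."""
    sens = tp / (tp + fn)                    # true-positive rate
    spec = tn / (tn + fp)                    # true-negative rate
    acc = (tp + tn) / (tp + fp + fn + tn)    # overall hit rate
    return sens, spec, acc

# Hypothetical test-sample counts (~20% of 216 patients = 43 cases)
print(diagnostics(tp=13, fp=6, fn=5, tn=19))
```

With these invented counts the three indices come out near 0.72, 0.76 and 0.74, in the same range as the values the study reports.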

2. Semantic and associative factors in probability learning with words.

Science.gov (United States)

Schipper, L M; Hanson, B L; Taylor, G; Thorpe, J A

1973-09-01

Using a probability-learning technique with a single word as the cue and with the probability of a given event following this word fixed at .80, it was found that (1) neither high nor low associates to the original word and (2) neither synonyms nor antonyms showed differential learning curves subsequent to original learning when the probability of the following event was shifted to .20. In a second study, when feedback in the form of knowledge of results was withheld, there was a clear-cut similarity of predictions to the originally trained word and to synonyms of both high and low association value, and a dissimilarity of these words to a set of antonyms of both high and low association value. Two additional studies confirmed the importance of the semantic dimension as compared with association value as traditionally measured.

3. Feedback Valence Affects Auditory Perceptual Learning Independently of Feedback Probability

Science.gov (United States)

Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.

2015-01-01

Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones, an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173

4. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.

Science.gov (United States)

Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

2017-06-17

Fog (from core to edge) computing is a newly emerging computing platform, which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, and thus has great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiencies of the LM scheme so as to achieve a higher network lifetime and larger storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information is migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency compared with other schemes.
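A sketch of distance-dependent marking, with an assumed linear form and invented constants (the UPLM paper defines its own marking-probability function):

```python
def marking_probability(d, d_max, p_near=0.05, p_far=0.6):
    """Unequal marking: the probability grows with the distance d (in hops)
    from the sink, so remote nodes mark more while nearby, heavily loaded
    nodes mark less. The linear form and the constants p_near/p_far are
    illustrative assumptions, not the UPLM scheme's actual function."""
    return p_near + (p_far - p_near) * d / d_max

for d in (1, 5, 10):
    print(d, round(marking_probability(d, d_max=10), 3))
```

The key property, whatever the exact functional form, is monotonic growth of the marking probability with hop distance to the sink.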

5. Statistical learning of action: the role of conditional probability.

Science.gov (United States)

Meyer, Meredith; Baldwin, Dare

2011-12-01

Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults, namely those more successful at identifying actions that had been seen more frequently than comparison sequences, were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.
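The two statistics the experiments contrast can be computed directly from a symbol stream. The toy stream below (with recurring units "ab" and "cd") is an assumption for illustration:

```python
from collections import Counter

# Toy action stream (an assumed input): the units "ab" and "cd" recur.
stream = list("abcdabababcdcdab")

bigrams = Counter(zip(stream, stream[1:]))
n_bigrams = len(stream) - 1

def joint(x, y):
    """Joint probability of the pair (x, y) among all adjacent pairs."""
    return bigrams[(x, y)] / n_bigrams

def conditional(x, y):
    """Transitional probability P(y | x): the bigram count normalized by
    how often x occurs as the left element of a pair."""
    left = sum(c for (a, _), c in bigrams.items() if a == x)
    return bigrams[(x, y)] / left

print(joint("a", "b"), conditional("a", "b"), conditional("b", "c"))
```

Here "a" is always followed by "b" (conditional probability 1.0) even though the pair (a, b) covers only a third of all adjacent pairs, which is exactly why the two statistics can dissociate in segmentation experiments.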

6. Learning about Posterior Probability: Do Diagrams and Elaborative Interrogation Help?

Science.gov (United States)

Clinton, Virginia; Alibali, Martha W.; Nathan, Mitchell J.

2016-01-01

To learn from a text, students must make meaningful connections among related ideas in that text. This study examined the effectiveness of two methods of improving connections--elaborative interrogation and diagrams--in written lessons about posterior probability. Undergraduate students (N = 198) read a lesson in one of three questioning…

7. Probability Learning: Changes in Behavior across Time and Development

Science.gov (United States)

Plate, Rista C.; Fulvio, Jacqueline M.; Shutts, Kristin; Green, C. Shawn; Pollak, Seth D.

2018-01-01

Individuals track probabilities, such as associations between events in their environments, but less is known about the degree to which experience--within a learning session and over development--influences people's use of incoming probabilistic information to guide behavior in real time. In two experiments, children (4-11 years) and adults…

8. PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION

Data.gov (United States)

National Aeronautics and Space Administration — PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION GUICHONG LI, NATHALIE JAPKOWICZ, IAN HOFFMAN,...

9. Probability

CERN Document Server

Shiryaev, A N

1996-01-01

This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs for a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

10. Estimating the Probability of Vegetation to Be Groundwater Dependent Based on the Evaluation of Tree Models

Directory of Open Access Journals (Sweden)

Isabel C. Pérez Hoyos

2016-04-01

Full Text Available Groundwater Dependent Ecosystems (GDEs are increasingly threatened by humans’ rising demand for water resources. Consequently, it is imperative to identify the location of GDEs to protect them. This paper develops a methodology to identify the probability of an ecosystem to be groundwater dependent. Probabilities are obtained by modeling the relationship between the known locations of GDEs and factors influencing groundwater dependence, namely water table depth and climatic aridity index. Probabilities are derived for the state of Nevada, USA, using modeled water table depth and aridity index values obtained from the Global Aridity database. The selected model results from a performance comparison of classification trees (CT and random forests (RF. Based on a threshold-independent accuracy measure, RF has a better ability to generate probability estimates. Considering a threshold that minimizes the misclassification rate for each model, RF also proves to be more accurate. Regarding training accuracy, performance measures such as accuracy, sensitivity, and specificity are higher for RF. For the test set, higher values of accuracy and kappa for CT highlight the fact that these measures are greatly affected by low prevalence. As shown for RF, the choice of the cutoff probability value has important consequences on model accuracy and the overall proportion of locations where GDEs are found.
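The cutoff-dependence noted above can be sketched as follows: given probability estimates from a model such as RF, choose the threshold that minimizes the misclassification rate. The probabilities and labels below are toy numbers, not the paper's Nevada data.

```python
def best_cutoff(probs, labels, grid=None):
    """Pick the probability cutoff minimizing the misclassification rate."""
    if grid is None:
        grid = [i / 100 for i in range(1, 100)]
    def errors(t):
        # A location is predicted groundwater dependent when prob >= t.
        return sum((p >= t) != y for p, y in zip(probs, labels))
    return min(grid, key=errors)

# Toy predicted GDE probabilities and true presence labels (invented data).
probs  = [0.9, 0.8, 0.65, 0.4, 0.3, 0.2]
labels = [True, True, True, False, False, False]
t = best_cutoff(probs, labels)
```

With low prevalence, as the abstract warns, a degenerate cutoff that predicts the majority class everywhere can score deceptively well, which is why threshold-independent measures were also used.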

11. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

Science.gov (United States)

Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

2018-03-01

The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies consider failure probability constraints when building LTs. It is worth noting that each link of an LT plays a different role under failure scenarios, so when calculating the failure probability of an LT, the importance of every one of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
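As a baseline for what the LIFPMS refines, the failure probability of a light-tree under the independent failure model, with no link-importance weighting, is one minus the probability that every link survives. A minimal sketch with hypothetical per-link failure probabilities (the paper's importance-weighted metric is not reproduced here):

```python
import math

def tree_failure_probability(link_failure_probs):
    """Failure probability of a light-tree assuming independent link
    failures: the whole tree fails if any one of its links fails."""
    p_survive = math.prod(1 - p for p in link_failure_probs)
    return 1 - p_survive

# Hypothetical per-link failure probabilities for a 3-link tree.
p_fail = tree_failure_probability([0.01, 0.02, 0.05])
# 1 - 0.99 * 0.98 * 0.95
```

The LIFPMS goes beyond this by weighting each link's contribution according to how many destinations it cuts off when it fails, which this unweighted baseline ignores.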

12. USING RASCH ANALYSIS TO EXPLORE WHAT STUDENTS LEARN ABOUT PROBABILITY CONCEPTS

Directory of Open Access Journals (Sweden)

Zamalia Mahmud

2015-01-01

Full Text Available Students’ understanding of probability concepts has been investigated from various perspectives. This study was set out to investigate the perceived understanding of probability concepts of forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW. Rasch measurement, which is based on a probabilistic model, was used to identify concepts that students find easy, moderate and difficult to understand. Data were captured from the e-learning Moodle platform, where students provided their responses through an on-line quiz. As illustrated in the Rasch map, 96% of the students could understand sample space, simple events, mutually exclusive events and tree diagrams, while 67% of the students found the concepts of conditional and independent events rather easy to understand. Keywords: Perceived Understanding, Probability Concepts, Rasch Measurement Model. DOI: dx.doi.org/10.22342/jme.61.1

13. Estimating the probability of survival of individual shortleaf pine (Pinus echinata mill.) trees

Science.gov (United States)

Sudip Shrestha; Thomas B. Lynch; Difei Zhang; James M. Guldin

2012-01-01

A survival model is needed in a forest growth system which predicts the survival of trees on an individual basis or on a stand basis (Gertner, 1989). An individual-tree modeling approach is one of the better methods available for predicting growth and yield, as it provides essential information about particular tree species: tree size, tree quality and tree present status...

14. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

Science.gov (United States)

Yang, Ziheng; Zhu, Tianqi

2018-02-20

The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
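The polarized behavior described here can be reproduced in a minimal two-model toy (not from the paper): two Bernoulli models that are equally wrong about coin-flip data. With equal priors, the posterior odds reduce to a likelihood ratio, and even a small sampling imbalance drives the posterior to an extreme rather than the honest 1/2:

```python
import math

def posterior_m1(data, p1=0.4, p2=0.6):
    """Posterior probability of model M1 (success prob p1) versus M2
    (success prob p2) under equal priors, for binary data. If the truth
    is p = 0.5, both models are equally wrong."""
    ll1 = sum(math.log(p1 if x else 1 - p1) for x in data)
    ll2 = sum(math.log(p2 if x else 1 - p2) for x in data)
    return 1 / (1 + math.exp(ll2 - ll1))

balanced = [True] * 5000 + [False] * 5000   # exactly 50/50
tilted   = [True] * 5100 + [False] * 4900   # 51/49: tiny sampling noise
# The exactly balanced sample gives posterior 1/2, but a 1-point
# imbalance already pushes the posterior to ~1 for the nearer model.
```

In a real finite sample the counts are essentially never exactly balanced, so the posterior is almost always pushed toward 0 or 1: the "full force" support for one wrong model that the abstract describes.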

15. Prostate Cancer Probability Prediction By Machine Learning Technique.

Science.gov (United States)

Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

2017-11-26

The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. To improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models; given a relevant prediction, it is easy to create a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.

16. Some Limit Properties of Random Transition Probability for Second-Order Nonhomogeneous Markov Chains Indexed by a Tree

Directory of Open Access Journals (Sweden)

Shi Zhiyan

2009-01-01

Full Text Available We study some limit properties of the harmonic mean of random transition probability for a second-order nonhomogeneous Markov chain and a nonhomogeneous Markov chain indexed by a tree. As a corollary, we obtain the property of the harmonic mean of random transition probability for a nonhomogeneous Markov chain.
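The quantity studied, the harmonic mean of transition probabilities along a trajectory, can be illustrated on a toy homogeneous chain (the paper treats nonhomogeneous and tree-indexed chains, which this sketch does not attempt; the matrix and path are invented):

```python
from statistics import harmonic_mean

# Transition probabilities of a toy two-state Markov chain.
P = {('s', 's'): 0.7, ('s', 'r'): 0.3,
     ('r', 's'): 0.4, ('r', 'r'): 0.6}

path = ['s', 's', 'r', 'r', 's']   # an observed trajectory
probs = [P[(a, b)] for a, b in zip(path, path[1:])]
hmean = harmonic_mean(probs)       # harmonic mean of the transition probs
```

The limit theorems in the paper concern the behavior of exactly this kind of harmonic mean as the trajectory (or tree) grows, with the transition probabilities themselves random.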

19. Supervised learning of probability distributions by neural networks

Science.gov (United States)

Baum, Eric B.; Wilczek, Frank

1988-01-01

Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
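The modification described, treating outputs as probabilities and following gradients of the log-likelihood rather than of the squared error, is what is now commonly called cross-entropy training of a sigmoid unit. A minimal single-neuron sketch on invented data (not the paper's medical-diagnosis application):

```python
import math

def train_logistic(samples, epochs=2000, lr=0.5):
    """One sigmoid output unit trained by ascending the log-likelihood
    (equivalently, descending the cross-entropy) via per-sample updates."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = 1 / (1 + math.exp(-(w * x + b)))
            # For a sigmoid unit, d(log-likelihood)/dw = (y - p) * x,
            # so the update has no sigmoid-derivative factor.
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

# Toy data: the probability of class 1 should increase with x.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train_logistic(data)
p_pos = 1 / (1 + math.exp(-(w * 2 + b)))   # predicted P(class 1 | x = 2)
```

Compared with squared error, the (y - p) gradient does not vanish when the unit is saturated on the wrong side, which is one source of the more accurate predictions the abstract reports.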

20. Automated Sleep Stage Scoring by Decision Tree Learning

National Research Council Canada - National Science Library

Hanaoka, Masaaki

2001-01-01

In this paper we describe a waveform recognition method that extracts characteristic parameters from waveforms and a method of automated sleep stage scoring using decision tree learning that is in...

1. Learning Type Extension Trees for Metal Bonding State Prediction

DEFF Research Database (Denmark)

Frasconi, Paolo; Jaeger, Manfred; Passerini, Andrea

2008-01-01

Type Extension Trees (TET) have been recently introduced as an expressive representation language allowing to encode complex combinatorial features of relational entities. They can be efficiently learned with a greedy search strategy driven by a generalized relational information gain and a discr...

2. α-Cut method based importance measure for criticality analysis in fuzzy probability – Based fault tree analysis

International Nuclear Information System (INIS)

Purba, Julwan Hendry; Sony Tjahyani, D.T.; Widodo, Surip; Tjahjono, Hendro

2017-01-01

Highlights:
• FPFTA deals with epistemic uncertainty using fuzzy probability.
• Criticality analysis is important for reliability improvement.
• An α-cut method based importance measure is proposed for criticality analysis in FPFTA.
• The α-cut method based importance measure utilises α-cut multiplication, α-cut subtraction, and the area defuzzification technique.
• Benchmarking confirms that the proposed method is feasible for criticality analysis in FPFTA.

Abstract: Fuzzy probability-based fault tree analysis (FPFTA) has been recently developed and proposed to deal with the limitations of conventional fault tree analysis. In FPFTA, reliabilities of basic events, intermediate events and the top event are characterized by fuzzy probabilities. Furthermore, the quantification of the FPFTA is based on the fuzzy multiplication rule and the fuzzy complementation rule to propagate uncertainties from basic events to the top event. Since the objective of fault tree analysis is to improve the reliability of the system being evaluated, it is necessary to find the weakest path in the system; for this purpose, criticality analysis can be implemented. Various importance measures, which are based on conventional probabilities, have been developed and proposed for criticality analysis in fault tree analysis. However, none of those importance measures can be applied for criticality analysis in FPFTA, which is based on fuzzy probability. To be fully applied in nuclear power plant probabilistic safety assessment, FPFTA needs to have its corresponding importance measure. The objective of this study is to develop an α-cut method based importance measure to evaluate and rank the importance of basic events for criticality analysis in FPFTA. To demonstrate the applicability of the proposed measure, a case study is performed and its results are then benchmarked to the results generated by the four well-known importance measures in conventional fault tree analysis. The results
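The α-cut operations the measure relies on can be sketched for triangular fuzzy probabilities: an α-cut turns a fuzzy number into an interval, and the fuzzy multiplication rule (AND gate) and complementation rule (OR gate) then act on interval endpoints. The basic-event numbers are hypothetical, and the paper's importance measure itself is not reproduced:

```python
def alpha_cut(tfn, alpha):
    """alpha-cut interval of a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return (a + alpha * (m - a), b - alpha * (b - m))

def and_gate(cut1, cut2):
    """Fuzzy multiplication rule on alpha-cut intervals (AND gate)."""
    return (cut1[0] * cut2[0], cut1[1] * cut2[1])

def or_gate(cut1, cut2):
    """Fuzzy complementation rule, 1 - (1-p1)(1-p2), on intervals (OR gate)."""
    return (1 - (1 - cut1[0]) * (1 - cut2[0]),
            1 - (1 - cut1[1]) * (1 - cut2[1]))

# Hypothetical basic-event fuzzy probabilities, cut at alpha = 0.5.
e1 = alpha_cut((0.01, 0.02, 0.03), 0.5)   # -> (0.015, 0.025)
e2 = alpha_cut((0.02, 0.04, 0.06), 0.5)   # -> (0.03, 0.05)
top_and = and_gate(e1, e2)
top_or = or_gate(e1, e2)
```

Sweeping α from 0 to 1 reconstructs the fuzzy top-event probability level by level; the proposed measure additionally uses α-cut subtraction and area defuzzification to rank basic events.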

3. Discrete probability models and methods probability on graphs and trees, Markov chains and random fields, entropy and coding

CERN Document Server

Brémaud, Pierre

2017-01-01

The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to the possible applications, in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.

4. Constructing multi-labelled decision trees for junction design using the predicted probabilities

NARCIS (Netherlands)

Bezembinder, Erwin M.; Wismans, Luc J. J.; Van Berkum, Eric C.

2017-01-01

In this paper, we evaluate the use of traditional decision tree algorithms CRT, CHAID and QUEST to determine a decision tree which can be used to predict a set of (Pareto optimal) junction design alternatives (e.g. signal or roundabout) for a given traffic demand pattern and available space. This is

5. Anderson transition on the Cayley tree as a traveling wave critical point for various probability distributions

International Nuclear Information System (INIS)

Monthus, Cecile; Garel, Thomas

2009-01-01

For Anderson localization on the Cayley tree, we study the statistics of various observables as a function of the disorder strength W and the number N of generations. We first consider the Landauer transmission T_N. In the localized phase, its logarithm follows the traveling wave form ln T_N ≃ ⟨ln T_N⟩ + ln t*, where (i) the disorder-averaged value moves linearly, ⟨ln T_N⟩ ≃ -N/ξ_loc, and the localization length diverges as ξ_loc ∼ (W - W_c)^(-ν_loc) with ν_loc = 1, and (ii) the variable t* is a fixed random variable with a power-law tail P*(t*) ∼ 1/(t*)^(1+β(W)) for large t*, with 0 < β(W) < 1, so that the moments of T_N are governed by rare events. In the delocalized phase, the transmission T_N remains a finite random variable as N → ∞, and we measure near criticality the essential singularity ⟨ln T_∞⟩ ∼ -|W_c - W|^(-κ_T) with κ_T ∼ 0.25. We then consider the statistical properties of normalized eigenstates Σ_x |ψ(x)|² = 1, in particular the entropy S = -Σ_x |ψ(x)|² ln |ψ(x)|² and the inverse participation ratios (IPR) I_q = Σ_x |ψ(x)|^(2q). In the localized phase, the typical entropy diverges as S_typ ∼ (W - W_c)^(-ν_S) with ν_S ∼ 1.5, whereas it grows linearly as S_typ(N) ∼ N in the delocalized phase. Finally, for the IPR, we explain how closely related variables propagate as traveling waves in the delocalized phase. In conclusion, both the localized phase and the delocalized phase are characterized by the traveling wave propagation of some probability distributions, and the Anderson localization/delocalization transition then corresponds to a traveling/non-traveling critical point. Moreover, our results point toward the existence of several length scales that diverge with different exponents ν at criticality.

6. Learning of Behavior Trees for Autonomous Agents

OpenAIRE

Colledanchise, Michele; Parasuraman, Ramviyas; Ögren, Petter

2015-01-01

Definition of an accurate system model for an Automated Planner (AP) is often impractical, especially for real-world problems. Conversely, off-the-shelf planners fail to scale up and are domain dependent. These drawbacks are inherited from conventional transition systems such as Finite State Machines (FSMs) that describe the action-plan execution generated by the AP. On the other hand, Behavior Trees (BTs) represent a valid alternative to FSMs, presenting many advantages in terms of modularity, ...

7. New machine learning tools for predictive vegetation mapping after climate change: Bagging and Random Forest perform better than Regression Tree Analysis

Science.gov (United States)

L.R. Iverson; A.M. Prasad; A. Liaw

2004-01-01

More and better machine learning tools are becoming available for landscape ecologists to aid in understanding species-environment relationships and to map probable species occurrence now and potentially into the future. To that end, we evaluated three statistical models: Regression Tree Analysis (RTA), Bagging Trees (BT) and Random Forest (RF) for their utility in...
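The bagging idea evaluated here can be sketched in a few lines: fit a weak tree (here a one-split stump, far simpler than RTA) on bootstrap resamples of the data and aggregate predictions by majority vote. The 1-D data are invented for illustration:

```python
import random
from statistics import mode

def stump_fit(xs, ys):
    """Best single-threshold classifier on 1-D data (a one-node 'tree')."""
    thresholds = sorted(set(xs))
    def acc(t):
        return sum((x >= t) == y for x, y in zip(xs, ys))
    return max(thresholds, key=acc)

def bagged_predict(xs, ys, x_new, n_trees=25, seed=1):
    """Bagging: majority vote of stumps fit on bootstrap resamples."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        t = stump_fit([xs[i] for i in idx], [ys[i] for i in idx])
        votes.append(x_new >= t)
    return mode(votes)

xs = [0.1, 0.4, 0.45, 0.6, 0.8, 0.9]
ys = [False, False, False, True, True, True]
pred = bagged_predict(xs, ys, 0.85)
```

Random Forest adds one ingredient on top of this: each split also considers only a random subset of the predictor variables, which further decorrelates the trees.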

8. Tree mortality estimates and species distribution probabilities in southeastern United States forests

Science.gov (United States)

Martin A. Spetich; Zhaofei Fan; Zhen Sui; Michael Crosby; Hong S. He; Stephen R. Shifley; Theodor D. Leininger; W. Keith Moser

2017-01-01

Stresses to trees under a changing climate can lead to changes in forest tree survival, mortality and distribution. For instance, a study examining the effects of human-induced climate change on forest biodiversity by Hansen and others (2001) predicted a 32% reduction in loblolly–shortleaf pine habitat across the eastern United States. However, they also...

9. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

NARCIS (Netherlands)

Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

2012-01-01

PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

10. Runtime Optimizations for Tree-Based Machine Learning Models

NARCIS (Netherlands)

N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

2014-01-01

Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression
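One common runtime optimization for tree models, in the spirit of this paper though not necessarily its exact technique, is flattening each tree into parallel arrays so that prediction is a tight loop over array indices instead of pointer chasing through node objects. A minimal hand-built example:

```python
# A tiny binary decision tree flattened into parallel arrays: node i tests
# x[feat[i]] < thresh[i] and jumps to left[i] or right[i]; negative entries
# encode leaves (-1 means leaf[0], -2 means leaf[1], ...). The tree and
# numbers are invented for illustration.
feat   = [0, 1, 0]
thresh = [0.5, 0.3, 0.7]
left   = [1, -1, -2]
right  = [2, -2, -1]
leaf   = [0.0, 1.0]

def predict(x):
    i = 0
    while True:
        nxt = left[i] if x[feat[i]] < thresh[i] else right[i]
        if nxt < 0:                 # negative index encodes a leaf
            return leaf[-nxt - 1]
        i = nxt
```

For a gradient-boosted ensemble, the score is the sum of `predict` over all trees; the contiguous array layout is cache-friendly and vectorizes well, which is where most of the runtime savings come from.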

11. Practical secure decision tree learning in a teletreatment application

NARCIS (Netherlands)

de Hoogh, Sebastiaan; Schoenmakers, Berry; Chen, Ping; op den Akker, Harm

In this paper we develop a range of practical cryptographic protocols for secure decision tree learning, a primary problem in privacy preserving data mining. We focus on particular variants of the well-known ID3 algorithm allowing a high level of security and performance at the same time. Our

13. Living Classrooms: Learning Guide for Famous & Historic Trees.

Science.gov (United States)

American Forest Foundation, Washington, DC.

This guide provides information to create and care for a Famous and Historic Trees Living Classroom in which students learn American history and culture in the context of environmental change. The booklet contains 10 hands-on activities that emphasize observation, critical thinking, and teamwork. Worksheets and illustrations provide students with…

14. Probability of bystander effect induced by alpha-particles emitted by radon progeny using the analytical model of tracheobronchial tree

International Nuclear Information System (INIS)

Jovanovic, B.; Nikezic, D.

2010-01-01

Radiation-induced biological bystander effects have become a phenomenon associated with the interaction of radiation with cells. There is a need to include the influence of biological effects in the dosimetry of the human lung. With this aim, the purpose of this work is to calculate the probability of the bystander effect induced by alpha-particle radiation on sensitive cells of the human lung. The probability was calculated by applying the analytical cylinder-bifurcation model, which was created to simulate the geometry of the human lung with the geometric distribution of cell nuclei in the airway wall of the tracheobronchial tree. This analytical model of the human tracheobronchial tree is an extension of the ICRP 66 model, and follows it as much as possible. Probabilities are reported for various targets and alpha-particle energies; in particular, the probability of the bystander effect has been calculated for alpha particles with 6 and 7.69 MeV energies, which are emitted in the 222 Rn chain. The application of these results may enhance current dose-risk estimation approaches by including the influence of biological effects. (authors)

15. Metabolite identification through multiple kernel learning on fragmentation trees.

Science.gov (United States)

Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

2014-06-15

Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. © The Author 2014. Published by Oxford University Press.

16. Quantum probability and cognitive modeling: some cautions and a promising direction in modeling physics learning.

Science.gov (United States)

Franceschetti, Donald R; Gire, Elizabeth

2013-06-01

Quantum probability theory offers a viable alternative to classical probability, although there are some ambiguities inherent in transferring the quantum formalism to a less determined realm. A number of physicists are now looking at the applicability of quantum ideas to the assessment of physics learning, an area particularly suited to quantum probability ideas.

17. Probability Distribution of Long-run Indiscriminate Felling of Trees in ...

African Journals Online (AJOL)

Bright

conditionally independent of every prior state given the current state (Obodos, ... of events or experiments in which the probability of occurrence for an event ... represent the exhaustive and mutually exclusive outcomes (states) of a system at.

18. Utilising Tree-Based Ensemble Learning for Speaker Segmentation

DEFF Research Database (Denmark)

Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

2014-01-01

In audio and speech processing, accurate detection of the changing points between multiple speakers in speech segments is an important stage for several applications such as speaker identification and tracking. Bayesian Information Criteria (BIC)-based approaches are the most traditionally used...... for a certain condition, the model becomes biased to the data used for training limiting the model’s generalisation ability. In this paper, we propose a BIC-based tuning-free approach for speaker segmentation through the use of ensemble-based learning. A forest of segmentation trees is constructed in which each...... tree is trained using a sampled version of the speech segment. During the tree construction process, a set of randomly selected points in the input sequence is examined as potential segmentation points. The point that yields the highest ΔBIC is chosen and the same process is repeated for the resultant...
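The ΔBIC criterion underlying such segmentation can be sketched for a one-dimensional Gaussian model: compare the fit of one Gaussian over a segment against two Gaussians split at a candidate point, penalizing the extra parameters. Real systems use multivariate acoustic features; the sequence below is toy data with a simulated change point:

```python
import math
from statistics import pvariance

def delta_bic(seq, t, lam=1.0):
    """Delta-BIC for splitting a 1-D Gaussian sequence at index t.
    Positive values favour a change point at t (simplified 1-D case)."""
    n, n1, n2 = len(seq), t, len(seq) - t
    full, a, b = pvariance(seq), pvariance(seq[:t]), pvariance(seq[t:])
    penalty = lam * 0.5 * 2 * math.log(n)   # 2 extra params: mean, variance
    return 0.5 * (n * math.log(full)
                  - n1 * math.log(a)
                  - n2 * math.log(b)) - penalty

seq = [0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05]  # change point at t = 4
best = max(range(2, len(seq) - 1), key=lambda t: delta_bic(seq, t))
```

The ensemble approach in the paper replaces the single scan above with many such scans over resampled versions of the segment, voting on candidate points, which removes the need to hand-tune the penalty weight λ.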

19. Extensions and applications of ensemble-of-trees methods in machine learning

Science.gov (United States)

Bleich, Justin

Ensemble-of-trees algorithms have emerged to the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First we consider the performance of RF and SGB more broadly and demonstrate its superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of

20. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

Directory of Open Access Journals (Sweden)

Huiwu Luo

2015-01-01

Full Text Available The computational procedure for hyperspectral image (HSI) processing is extremely complex, not only due to the high-dimensional information, but also due to the highly correlated data structure. The need for effective processing and analysis of HSI has met many difficulties. It has been evidenced that dimensionality reduction is a powerful tool for high-dimensional data analysis. Local Fisher’s linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low-dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

1. Deep Multi-Task Learning for Tree Genera Classification

Science.gov (United States)

Ko, C.; Kang, J.; Sohn, G.

2018-05-01

The goal of our paper is to classify tree genera using airborne Light Detection and Ranging (LiDAR) data with a Convolutional Neural Network (CNN) - Multi-task Network (MTN) implementation. Unlike a Single-task Network (STN), where only one task is assigned to the learning outcome, MTN is a deep learning architecture for learning a main task (classification of tree genera) together with other tasks (in our study, classification of coniferous and deciduous) simultaneously, with shared classification features. The main contribution of this paper is to improve classification accuracy from CNN-STN to CNN-MTN. This is achieved by introducing a concurrence loss (Lcd) to the designed MTN. This term regulates the overall network performance by minimizing the inconsistencies between the two tasks. Results show that we can increase the classification accuracy from 88.7 % to 91.0 % (from STN to MTN). The second goal of this paper is to solve the problem of small training sample size by multiple-view data generation. The motivation is to address one of the most common problems in implementing deep learning architectures: the insufficient amount of training data. We address this problem by simulating the training dataset with a multiple-view approach. The promising results from this paper provide a basis for classifying larger datasets and a larger number of classes in the future.

2. METAPHOR: Probability density estimation for machine learning based photometric redshifts

Science.gov (United States)

Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

2017-06-01

We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but allowing MLPQNA to be easily replaced with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
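The modular idea described above — wrap an arbitrary point estimator, perturb the input photometry, and histogram the re-estimated redshifts into a PDF — can be sketched as below. This is a toy illustration under assumed names with a stand-in estimator, not the actual METAPHOR code.

```python
import random

def photoz_pdf(magnitudes, estimator, n_perturb=200, noise=0.05,
               bin_width=0.01, seed=0):
    """Build a photo-z PDF by re-running a pluggable point estimator on
    noise-perturbed copies of the input photometry and binning the
    resulting redshift estimates."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_perturb):
        perturbed = [m + rng.gauss(0.0, noise) for m in magnitudes]
        z = estimator(perturbed)
        b = int(z / bin_width)
        counts[b] = counts.get(b, 0) + 1
    # Normalize counts into probabilities per redshift bin.
    return {b * bin_width: c / n_perturb for b, c in sorted(counts.items())}

def toy_estimator(mags):
    """Stand-in for a trained regressor (MLPQNA, KNN, RF, ...)."""
    return max(0.0, 0.1 * sum(mags) / len(mags) - 1.5)
```

Because the estimator is a black-box argument, swapping MLPQNA for KNN or Random Forest (as the validation test does) leaves the PDF machinery unchanged.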

3. The Effect of Simulation-Based Learning on Prospective Teachers' Inference Skills in Teaching Probability

Science.gov (United States)

Koparan, Timur; Yilmaz, Gül Kaleli

2015-01-01

This research examines the effect of simulation-based probability teaching on prospective teachers' inference skills. In line with this purpose, it aims to examine the design, implementation and efficiency of a learning environment for experimental probability. Activities were built on modeling, simulation and the…

4. Learning difficulties of senior high school students based on probability understanding levels

Science.gov (United States)

Anggara, B.; Priatna, N.; Juandi, D.

2018-05-01

Identifying students' difficulties in learning the concept of probability is important for teachers so that they can prepare appropriate learning processes and overcome obstacles that may arise in subsequent learning. This study revealed the level of students' understanding of the concept of probability and identified their difficulties as part of identifying the epistemological obstacles to the concept of probability. The study employed a qualitative, descriptive approach involving 55 students of class XII. A diagnostic test of probability-concept learning difficulties, observation, and interviews were used to collect the data, which were then used to determine the levels of understanding and the learning difficulties experienced by the students. From the students' test results and learning observations, the mean cognitive level was found to be at level 2. The findings indicated that students had appropriate quantitative information about the probability concept, but it might be incomplete or incorrectly used. The difficulties found were in constructing sample spaces, events, and mathematical models related to probability problems. Students also had difficulty understanding the principles of events and the prerequisite concepts.

5. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

Science.gov (United States)

Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

2018-06-01

In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
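A heavily simplified sketch of the kind of energy described above — unary costs per selected segment plus pairwise collinearity/overlap costs for adjacent selected segments, each pairwise component scaled by its own balance factor — minimized here by brute force over binary labelings (the paper uses proper CRF inference; all numbers and names are illustrative):

```python
from itertools import product

def crf_energy(labels, unary, pairs, w_angle, w_overlap):
    """Energy of a binary segment labeling: unary cost for each selected
    segment, plus pairwise angular-deviation and spatial-overlap costs
    for adjacent selected segments, weighted by balance factors."""
    e = sum(unary[i] for i, lab in enumerate(labels) if lab == 1)
    for (i, j), (angle_cost, overlap_cost) in pairs.items():
        if labels[i] == 1 and labels[j] == 1:
            e += w_angle * angle_cost + w_overlap * overlap_cost
    return e

def minimize_energy(n, unary, pairs, w_angle, w_overlap):
    """Brute-force the minimum-energy labeling (feasible only for tiny n)."""
    return min(product((0, 1), repeat=n),
               key=lambda lab: crf_energy(lab, unary, pairs, w_angle, w_overlap))
```

Sweeping (w_angle, w_overlap) over a grid and re-minimizing, as the abstract describes, then amounts to calling `minimize_energy` once per balance configuration.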

6. Recognizing human actions by learning and matching shape-motion prototype trees.

Science.gov (United States)

Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

2012-03-01

A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.

7. Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.

Science.gov (United States)

Remillard, Gilbert

2011-07-01

There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.
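The notion of an nth-order sequential probability can be made concrete with a small generator in which the next target location depends on the previous two; higher orders simply extend the context tuple. This is an illustrative sketch, not the materials of the study.

```python
import random

def generate_sequence(transitions, length, seed=0):
    """Generate target locations where the next location depends on the
    previous two (a second-order contingency). Contexts absent from
    `transitions` fall back to a uniform choice over all locations."""
    rng = random.Random(seed)
    locations = sorted({loc for ctx, probs in transitions.items()
                        for loc in (*ctx, *probs)})
    seq = [rng.choice(locations), rng.choice(locations)]
    while len(seq) < length:
        probs = transitions.get((seq[-2], seq[-1]))
        if probs is None:
            seq.append(rng.choice(locations))
        else:
            seq.append(rng.choices(list(probs),
                                   weights=list(probs.values()))[0])
    return seq
```

A participant sensitive to the second-order contingency can respond faster whenever the two-location context predicts the next target.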

8. Difficulties of learning probability concepts, the reasons why these concepts cannot be learned and suggestions for solution

Directory of Open Access Journals (Sweden)

Dilek Sezgin MEMNUN

2008-06-01

Full Text Available Probability holds first place among the subjects that both teachers and students have difficulty handling. Although probability plays an important role in many professions and in a great many decisions we make in daily life, an understanding of probability concepts is not an easy ability for many students to gain. Most students develop misconceptions about many probability concepts and have difficulty reasoning about probability events. Thus, in the present study, the difficulties faced while learning probability concepts and the reasons why these concepts cannot be learned well are investigated, these reasons are set out, and some suggestions for solutions regarding these concepts are presented. In this study, the cross-hatching model was used. National and international studies on probability were examined, and the reasons why these concepts cannot be learned or taught were categorized in light of the findings obtained. The categorization was displayed with an Ishikawa diagram, in which the reasons were grouped into six categories: age, insufficient prior information, deficient argumentation ability, the teacher, misconceptions, and students' negative attitudes.

9. A Probability-based Evolutionary Algorithm with Mutations to Learn Bayesian Networks

Directory of Open Access Journals (Sweden)

Sho Fukuda

2014-12-01

Full Text Available Bayesian networks are regarded as one of the essential tools for analyzing causal relationships between events from data. Learning the structure of highly reliable Bayesian networks from data as quickly as possible is an important problem that several studies have tried to solve. In recent years, probability-based evolutionary algorithms have been proposed as a new, efficient approach to learning Bayesian networks. In this paper, we focus on one of the probability-based evolutionary algorithms, called PBIL (Probability-Based Incremental Learning), and propose a new mutation operator. Through performance evaluation, we found that the proposed mutation operator performs well in learning Bayesian networks.
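For reference, standard PBIL maintains a probability vector over bit values, samples a population from it, and shifts the vector toward the best sample; mutation perturbs the probability vector itself. A minimal sketch on the OneMax toy problem follows — the mutation scheme shown is one common variant, not necessarily the operator proposed in the paper, and the Bayesian-network encoding is omitted.

```python
import random

def pbil_onemax(n_bits=20, pop=30, generations=60, lr=0.1,
                mut_prob=0.02, mut_shift=0.05, seed=0):
    """PBIL on OneMax (maximize the number of 1-bits): sample a
    population from the probability vector, shift the vector toward
    the elite sample, and occasionally mutate vector entries."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # probability that each bit is 1
    best = None
    for _ in range(generations):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        elite = max(samples, key=sum)
        if best is None or sum(elite) > sum(best):
            best = elite
        # Learning step: move the probability vector toward the elite.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
        # Mutation: occasionally nudge an entry toward a random value.
        p = [(1 - mut_shift) * pi + mut_shift * rng.random()
             if rng.random() < mut_prob else pi
             for pi in p]
    return best, p
```

For Bayesian-network structure learning, the bit vector would instead encode candidate edges, with the fitness replaced by a network score.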

10. Development of probabilistic thinking-oriented learning tools for probability materials at junior high school students

Science.gov (United States)

Sari, Dwi Ivayana; Hermanto, Didik

2017-08-01

This research is developmental research on probabilistic-thinking-oriented learning tools for probability material for ninth-grade students, aimed at producing good probabilistic-thinking-oriented learning tools. The subjects were IX-A students of MTs Model Bangkalan. The development used the 4-D development model, modified into define, design and develop. The teaching and learning tools consist of a lesson plan, students' worksheet, learning media and a students' achievement test. The research instruments were a learning-tools validation sheet, a teachers' activities sheet, a students' activities sheet, a students' response questionnaire and the students' achievement test. The results from these instruments were analyzed descriptively to answer the research objectives. The result was a set of valid teaching and learning tools oriented to the probabilistic thinking of ninth-grade students on probability. After the tools were revised based on validation and tried out in class, the teacher's ability to manage the class was effective, students' activities were good, students' responses to the learning tools were positive, and the achievement test met the validity, sensitivity and reliability criteria. In summary, these teaching and learning tools can be used by teachers to teach probability and develop students' probabilistic thinking.

11. Cosmic String Detection with Tree-Based Machine Learning

Science.gov (United States)

Vafaei Sadr, A.; Farhang, M.; Movahed, S. M. S.; Bassett, B.; Kunz, M.

2018-05-01

We explore the use of random forest and gradient boosting, two powerful tree-based machine learning algorithms, for the detection of cosmic strings in maps of the cosmic microwave background (CMB), through their unique Gott-Kaiser-Stebbins effect on the temperature anisotropies. The information in the maps is compressed into feature vectors before being passed to the learning units. The feature vectors contain various statistical measures of the processed CMB maps that boost cosmic string detectability. Our proposed classifiers, after training, give results similar to or better than claimed detectability levels from other methods for string tension, Gμ. They can make 3σ detection of strings with Gμ ≳ 2.1 × 10⁻¹⁰ for noise-free, 0.9′-resolution CMB observations. The minimum detectable tension increases to Gμ ≳ 3.0 × 10⁻⁸ for a more realistic, CMB S4-like (II) strategy, improving over previous results.

12. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

Science.gov (United States)

Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

2014-07-01

Probability estimation for binary and multicategory outcome using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
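The k-NN view of probability estimation discussed above — class probabilities at a query point taken as class fractions among the k nearest training points — can be written in a few lines. A didactic sketch with Euclidean distance, not tuned for real data:

```python
def knn_probabilities(train, query, k=5):
    """Nonparametric class-probability estimate at `query`: the fraction
    of each class among the k nearest training points.
    `train` is a list of (feature_vector, class_label) pairs."""
    by_dist = sorted(train,
                     key=lambda xy: sum((a - b) ** 2
                                        for a, b in zip(xy[0], query)))
    neighbors = [label for _, label in by_dist[:k]]
    classes = sorted({label for _, label in train})
    return {c: neighbors.count(c) / k for c in classes}
```

The same fraction-of-neighbors idea extends directly to multicategory outcomes, since the estimate is a full distribution over all observed classes.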

13. Incidental learning of probability information is differentially affected by the type of visual working memory representation.

Science.gov (United States)

van Lamsweerde, Amanda E; Beck, Melissa R

2015-12-01

In this study, we investigated whether the ability to learn probability information is affected by the type of representation held in visual working memory. Across 4 experiments, participants detected changes to displays of coloured shapes. While participants detected changes in 1 dimension (e.g., colour), a feature from a second, nonchanging dimension (e.g., shape) predicted which object was most likely to change. In Experiments 1 and 3, items could be grouped by similarity in the changing dimension across items (e.g., colours and shapes were repeated in the display), while in Experiments 2 and 4 items could not be grouped by similarity (all features were unique). Probability information from the predictive dimension was learned and used to increase performance, but only when all of the features within a display were unique (Experiments 2 and 4). When it was possible to group by feature similarity in the changing dimension (e.g., 2 blue objects appeared within an array), participants were unable to learn probability information and use it to improve performance (Experiments 1 and 3). The results suggest that probability information can be learned in a dimension that is not explicitly task-relevant, but only when the probability information is represented with the changing dimension in visual working memory. (c) 2015 APA, all rights reserved.

14. Learning decision trees with flexible constraints and objectives using integer optimization

NARCIS (Netherlands)

Verwer, S.; Zhang, Y.

2017-01-01

We encode the problem of learning the optimal decision tree of a given depth as an integer optimization problem. We show experimentally that our method (DTIP) can be used to learn good trees up to depth 5 from data sets of size up to 1000. In addition to being efficient, our new formulation allows
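The contrast with greedy tree induction can be illustrated by exhaustively searching for an optimal tree at the smallest scale, a depth-1 stump. This is only a didactic analogue: the paper's DTIP formulation solves the general depth-d problem with integer programming, which this sketch does not attempt.

```python
from itertools import product

def best_stump(X, y):
    """Exhaustive search for the depth-1 decision tree (feature,
    threshold, two leaf labels) with the fewest training errors --
    optimal at this tiny scale, unlike greedy induction."""
    labels = sorted(set(y))
    best = (len(y) + 1, None)  # (error count, parameters)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for l_left, l_right in product(labels, repeat=2):
                errors = sum((l_left if x[f] <= t else l_right) != yi
                             for x, yi in zip(X, y))
                if errors < best[0]:
                    best = (errors, (f, t, l_left, l_right))
    return best
```

Encoding deeper trees requires coupling the split choices across levels, which is exactly what the integer-optimization formulation handles and brute force cannot scale to.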

15. Probability cueing of distractor locations: both intertrial facilitation and statistical learning mediate interference reduction.

Science.gov (United States)

Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael

2014-01-01

Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing can not only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.

16. Teaching Probability to Pre-Service Teachers with Argumentation Based Science Learning Approach

Science.gov (United States)

Can, Ömer Sinan; Isleyen, Tevfik

2016-01-01

The aim of this study is to explore the effects of the argumentation based science learning (ABSL) approach on the teaching probability to pre-service teachers. The sample of the study included 41 students studying at the Department of Elementary School Mathematics Education in a public university during the 2014-2015 academic years. The study is…

17. The influence of phonotactic probability and neighborhood density on children's production of newly learned words.

Science.gov (United States)

Heisler, Lori; Goffman, Lisa

A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were non-referential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was attached through fast mapping). Two methods of analysis were included: (1) kinematic variability of speech movement patterning; and (2) measures of segmental accuracy. Results showed that phonotactic frequency influenced the stability of movement patterning whereas neighborhood density influenced phoneme accuracy. Motor learning was observed in both non-referential and referential novel words. Forms with low phonotactic probability and low neighborhood density showed a word learning effect when a referent was assigned during fast mapping. These results elaborate on and specify the nature of interactivity observed across lexical, phonological, and articulatory domains.

18. Trees

Science.gov (United States)

Al-Khaja, Nawal

2007-01-01

This is a thematic lesson plan for young learners about palm trees and the importance of taking care of them. The two part lesson teaches listening, reading and speaking skills. The lesson includes parts of a tree; the modal auxiliary, can; dialogues and a role play activity.

19. Web-based experiments controlled by JavaScript: an example from probability learning.

Science.gov (United States)

Birnbaum, Michael H; Wakcher, Sandra V

2002-05-01

JavaScript programs can be used to control Web experiments. This technique is illustrated by an experiment that tested the effects of advice on performance in the classic probability-learning paradigm. Previous research reported that people tested via the Web or in the lab tended to match the probabilities of their responses to the probabilities that those responses would be reinforced. The optimal strategy, however, is to consistently choose the more frequent event; probability matching produces suboptimal performance. We investigated manipulations we reasoned should improve performance. A horse race scenario in which participants predicted the winner in each of a series of races between two horses was compared with an abstract scenario used previously. Ten groups of learners received different amounts of advice, including all combinations of (1) explicit instructions concerning the optimal strategy, (2) explicit instructions concerning a monetary sum to maximize, and (3) accurate information concerning the probabilities of events. The results showed minimal effects of horse race versus abstract scenario. Both advice concerning the optimal strategy and probability information contributed significantly to performance in the task. This paper includes a brief tutorial on JavaScript, explaining with simple examples how to assemble a browser-based experiment.
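The suboptimality of probability matching described above is simple arithmetic: matching yields expected accuracy p² + (1−p)², while always choosing the likelier event yields max(p, 1−p). A small helper (written in Python rather than the JavaScript of the experiment) makes the comparison explicit:

```python
def expected_accuracy(p, strategy):
    """Expected proportion correct when one of two outcomes occurs with
    probability p. 'match' chooses each outcome at its own rate
    (probability matching); 'maximize' always picks the likelier one."""
    if strategy == "match":
        return p * p + (1 - p) * (1 - p)
    if strategy == "maximize":
        return max(p, 1 - p)
    raise ValueError(strategy)
```

For p = 0.75, matching gives 0.625 expected accuracy versus 0.75 for maximizing, which is why probability matching is suboptimal for any p other than 0.5.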

20. Tree Nut Allergies

Science.gov (United States)

Learn about tree nut allergy, how … a Tree Nut Label card. Allergic Reactions to Tree Nuts: Tree nuts can cause a severe and …

1. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

Science.gov (United States)

Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

2010-01-01

Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

2. More than words: Adults learn probabilities over categories and relationships between them.

Science.gov (United States)

Hudson Kam, Carla L

2009-04-01

This study examines whether human learners can acquire statistics over abstract categories and their relationships to each other. Adult learners were exposed to miniature artificial languages containing variation in the ordering of the Subject, Object, and Verb constituents. Different orders (e.g. SOV, VSO) occurred in the input with different frequencies, but the occurrence of one order versus another was not predictable. Importantly, the language was constructed such that participants could only match the overall input probabilities if they were tracking statistics over abstract categories, not over individual words. At test, participants reproduced the probabilities present in the input with a high degree of accuracy. Closer examination revealed that learners were matching the probabilities associated with individual verbs rather than the category as a whole. However, individual nouns had no impact on word orders produced. Thus, participants learned the probabilities of a particular ordering of the abstract grammatical categories Subject and Object associated with each verb. Results suggest that statistical learning mechanisms are capable of tracking relationships between abstract linguistic categories in addition to individual items.
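The per-verb statistic the learners appeared to track corresponds to estimating P(order | verb) by counting. A minimal sketch with hypothetical data:

```python
from collections import Counter, defaultdict

def order_probs_by_verb(observations):
    """Estimate P(word order | verb) from (verb, order) observations --
    the conditional statistic the learners appeared to match."""
    counts = defaultdict(Counter)
    for verb, order in observations:
        counts[verb][order] += 1
    return {verb: {order: n / sum(cnt.values())
                   for order, n in cnt.items()}
            for verb, cnt in counts.items()}
```

Conditioning on the verb rather than pooling all sentences is exactly the difference between item-level and category-level matching that the closer examination revealed.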

3. Adapting and Evaluating a Tree of Life Group for Women with Learning Disabilities

Science.gov (United States)

Randle-Phillips, Cathy; Farquhar, Sarah; Thomas, Sally

2016-01-01

Background: This study describes how a specific narrative therapy approach called 'the tree of life' was adapted to run a group for women with learning disabilities. The group consisted of four participants and ran for five consecutive weeks. Materials and Methods: Participants each constructed a tree to represent their lives and presented their…

4. Visualizing Biological Data in Museums: Visitor Learning with an Interactive Tree of Life Exhibit

Science.gov (United States)

Horn, Michael S.; Phillips, Brenda C.; Evans, Evelyn Margaret; Block, Florian; Diamond, Judy; Shen, Chia

2016-01-01

In this study, we investigate museum visitor learning and engagement at an interactive visualization of an evolutionary tree of life consisting of over 70,000 species. The study was conducted at two natural history museums where visitors collaboratively explored the tree of life using direct touch gestures on a multi-touch tabletop display. In the…

5. Detecting Structural Metadata with Decision Trees and Transformation-Based Learning

National Research Council Canada - National Science Library

Kim, Joungbum; Schwarm, Sarah E; Ostendorf, Mari

2004-01-01

… Specifically, combinations of decision trees and language models are used to predict sentence ends and interruption points, and given these events, transformation-based learning is used to detect edit…

6. External validity of individual differences in multiple cue probability learning: The case of pilot training

Directory of Open Access Journals (Sweden)

Nadine Matton

2013-09-01

Full Text Available Individuals differ in their ability to deal with unpredictable environments. Could impaired performance in learning an unpredictable cue-criteria relationship in a laboratory task be associated with impaired learning of complex skills in a natural setting? We focused on a multiple-cue probability learning (MCPL) laboratory task and on the natural setting of pilot training. We used data from three selection sessions and from the three corresponding selected pilot student classes of a national airline pilot selection and training system. First, applicants took an MCPL task at the selection stage (N=556; N=701; N=412). Then, pilot trainees selected from the applicant pools (N=44; N=60; N=28) followed the training for 2.5 to 3 years. Differences in final MCPL performance were associated with pilot training difficulties. Indeed, poor MCPL performers experienced almost twice as many pilot training difficulties as better MCPL performers (44.0% and 25.0%, respectively).

7. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

Energy Technology Data Exchange (ETDEWEB)

Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

2012-03-15

Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

8. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

Science.gov (United States)

Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

2012-03-15

To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

9. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

International Nuclear Information System (INIS)

Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

2012-01-01

Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

10. Deduction of probable events of lateral gene transfer through comparison of phylogenetic trees by recursive consolidation and rearrangement

Directory of Open Access Journals (Sweden)

Charlebois Robert L

2005-04-01

Full Text Available Abstract Background When organismal phylogenies based on sequences of single marker genes are poorly resolved, a logical approach is to add more markers, on the assumption that weak but congruent phylogenetic signal will be reinforced in such multigene trees. Such approaches are valid only when the several markers indeed have identical phylogenies, an issue which many multigene methods (such as the use of concatenated gene sequences or the assembly of supertrees do not directly address. Indeed, even when the true history is a mixture of vertical descent for some genes and lateral gene transfer (LGT for others, such methods produce unique topologies. Results We have developed software that aims to extract evidence for vertical and lateral inheritance from a set of gene trees compared against an arbitrary reference tree. This evidence is then displayed as a synthesis showing support over the tree for vertical inheritance, overlaid with explicit lateral gene transfer (LGT events inferred to have occurred over the history of the tree. Like splits-tree methods, one can thus identify nodes at which conflict occurs. Additionally one can make reasonable inferences about vertical and lateral signal, assigning putative donors and recipients. Conclusion A tool such as ours can serve to explore the reticulated dimensionality of molecular evolution, by dissecting vertical and lateral inheritance at high resolution. By this, we mean that individual nodes can be examined not only for congruence, but also for coherence in light of LGT. We assert that our tools will facilitate the comparison of phylogenetic trees, and the interpretation of conflicting data.

11. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

Science.gov (United States)

Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

2015-11-01

In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. A given parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly before it can be further assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
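The top-down, inter-level constraint described above can be pictured with a toy routing routine: a sample descends from a parent node to its best-scoring child and never jumps across branches (the tree and scores below are hypothetical, not the paper's visual tree or learned classifiers):

```python
def classify(tree, scores, node="root"):
    """Descend the visual tree top-down: a sample must be assigned to a
    parent node before it can be assigned to one of that node's children."""
    children = tree.get(node)
    if not children:                      # leaf = predicted species
        return node
    best = max(children, key=lambda c: scores[c])
    return classify(tree, scores, best)

tree = {"root": ["flowering", "conifer"],
        "flowering": ["rose", "tulip"],
        "conifer": []}
scores = {"flowering": 0.8, "conifer": 0.2, "rose": 0.3, "tulip": 0.7}
prediction = classify(tree, scores)
```

Note that "rose" is never considered at the root level: only siblings under the node already reached compete, which is exactly the inter-level relationship constraint.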

12. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

Science.gov (United States)

Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

2013-01-01

Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
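A linear spline of the kind that best fit the probability data is simply a piecewise-linear predictor with a knot: one slope below the knot, another above it. A minimal sketch (the knot position and coefficients are illustrative, not the fitted values):

```python
def linear_spline(x, knot, b0, b1, b2):
    """Piecewise-linear model: slope b1 below the knot, b1 + b2 above."""
    return b0 + b1 * x + b2 * max(0.0, x - knot)

# e.g. a pattern where learning improves up to the knot, then declines
below = linear_spline(0.2, knot=0.25, b0=0.0, b1=1.0, b2=-2.0)
above = linear_spline(0.5, knot=0.25, b0=0.0, b1=1.0, b2=-2.0)
```

An ordinary linear model is the special case b2 = 0, which is why comparing the two fits can reveal whether a nonlinearity (as found for phonotactic probability) is present.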

13. Modeling flash floods in ungauged mountain catchments of China: A decision tree learning approach for parameter regionalization

Science.gov (United States)

Ragettli, S.; Zhou, J.; Wang, H.; Liu, C.

2017-12-01

Flash floods in small mountain catchments are one of the most frequent causes of loss of life and property from natural hazards in China. Hydrological models can be a useful tool for the anticipation of these events and the issuing of timely warnings. Since sub-daily streamflow information is unavailable for most small basins in China, one of the main challenges is finding appropriate parameter values for simulating flash floods in ungauged catchments. In this study, we use decision tree learning to explore parameter set transferability between different catchments. For this purpose, the physically based, semi-distributed rainfall-runoff model PRMS-OMS is set up for 35 catchments in ten Chinese provinces. Hourly data from more than 800 storm runoff events are used to calibrate the model and evaluate the performance of parameter set transfers between catchments. For each catchment, 58 catchment attributes are extracted from several data sets available for the whole of China. We then use a data mining technique (decision tree learning) to identify catchment similarities that can be related to good transfer performance. Finally, we use the splitting rules of decision trees to find suitable donor catchments for ungauged target catchments. We show that decision tree learning makes it possible to optimally utilize the information content of the available catchment descriptors and outperforms regionalization based on a conventional measure of physiographic-climatic similarity by 15%-20%. Similar performance can be achieved with a regionalization method based on spatial proximity, but decision trees offer flexible rules for selecting suitable donor catchments that do not rely on the vicinity of gauged catchments. This flexibility makes the method particularly suitable for implementation in sparsely gauged environments. We evaluate the probability of detecting flood events exceeding a given return period, considering measured discharge and PRMS-OMS simulated flows with regionalized parameters

14. Mutual learning in a tree parity machine and its application to cryptography

International Nuclear Information System (INIS)

Rosen-Zvi, Michal; Klein, Einat; Kanter, Ido; Kinzel, Wolfgang

2002-01-01

Mutual learning of a pair of tree parity machines with continuous and discrete weight vectors is studied analytically. The analysis is based on a mapping procedure that maps the mutual learning in tree parity machines onto mutual learning in noisy perceptrons. The stationary solution of the mutual learning in the case of continuous tree parity machines depends on the learning rate where a phase transition from partial to full synchronization is observed. In the discrete case the learning process is based on a finite increment and a full synchronized state is achieved in a finite number of steps. The synchronization of discrete parity machines is introduced in order to construct an ephemeral key-exchange protocol. The dynamic learning of a third tree parity machine (an attacker) that tries to imitate one of the two machines while the two still update their weight vectors is also analyzed. In particular, the synchronization times of the naive attacker and the flipping attacker recently introduced in Ref. 9 are analyzed. All analytical results are found to be in good agreement with simulation results
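The synchronization-by-mutual-learning idea behind the key-exchange protocol can be sketched with small discrete tree parity machines. The parameters K (hidden units), N (inputs per unit), and L (weight bound) below are illustrative, and the update is the standard Hebbian rule applied only to hidden units that agree with the machine's output, only when the two machines' outputs agree:

```python
import random

L, K, N = 3, 3, 4  # weight bound, hidden units, inputs per hidden unit

def tpm_output(w, x):
    """Tree parity machine: each hidden unit sees its own input branch;
    the output is the product of the hidden units' signs."""
    sigma = [1 if sum(wi * xi for wi, xi in zip(w[k], x[k])) >= 0 else -1
             for k in range(K)]
    tau = 1
    for s in sigma:
        tau *= s
    return tau, sigma

def hebbian_update(w, x, tau, sigma):
    """Move only the hidden units that agree with the machine's output,
    clipping weights to the interval [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            for i in range(N):
                w[k][i] = max(-L, min(L, w[k][i] + x[k][i] * tau))

rng = random.Random(1)
A = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
B = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
steps = 0
while A != B and steps < 100_000:
    x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    ta, sa = tpm_output(A, x)
    tb, sb = tpm_output(B, x)
    if ta == tb:                      # update only when outputs agree
        hebbian_update(A, x, ta, sa)
        hebbian_update(B, x, tb, sb)
    steps += 1
```

Once `A == B`, the shared weight vectors can serve as the ephemeral key; the attackers analyzed in the abstract try to reach the same state by imitating one of the parties.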

15. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

Science.gov (United States)

Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

2013-01-01

Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

16. Use of fault tree technique to determine the failure probability of electrical systems of IE class in nuclear installations

International Nuclear Information System (INIS)

Cruz S, W.D.

1988-01-01

This paper refers to emergency safety systems of the Angra INPP (Brazil, 1626 MW(e)), such as containment, heat removal, the emergency removal system, removal of radioactive elements from the containment environment, borated water injection, etc. For the electrical systems associated with these safety systems, the failure probability of the IE Class busbars is calculated; IE Class is a safety classification for electrical equipment essential to the systems mentioned above

17. Probability weighted ensemble transfer learning for predicting interactions between HIV-1 and human proteins.

Directory of Open Access Journals (Sweden)

Suyu Mei

Full Text Available Reconstruction of host-pathogen protein interaction networks is of great significance to reveal the underlying microbic pathogenesis. However, the current experimentally-derived networks are generally small and should be augmented by computational methods for less-biased biological inference. From the point of view of computational modelling, data scarcity, data unavailability and negative data sampling are the three major problems for host-pathogen protein interaction network reconstruction. In this work, we are motivated to address these three concerns and propose a probability weighted ensemble transfer learning model for HIV-human protein interaction prediction (PWEN-TLM), where the support vector machine (SVM) is adopted as the individual classifier of the ensemble model. In the model, data scarcity and data unavailability are tackled by homolog knowledge transfer. The importance of homolog knowledge is measured by the ROC-AUC metric of the individual classifiers, whose outputs are probability weighted to yield the final decision. In addition, we further validate the assumption that homolog knowledge alone is sufficient to train a satisfactory model for host-pathogen protein interaction prediction. Thus the model is more robust against data unavailability, with a less demanding data constraint. As regards negative data construction, experiments show that exclusion of subcellular co-localized proteins is unbiased and more reliable than random sampling. Lastly, we analyze the overlap between the predictions of our model and those of existing models, and apply the model to the recognition of novel host-pathogen PPIs for further biological research.
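The probability weighting by ROC-AUC described above reduces to a weighted average of classifier outputs. A one-function illustration (the classifier probabilities and AUC values below are made up):

```python
def weighted_vote(probs, aucs):
    """Probability-weighted ensemble decision: each classifier's
    probability output is weighted by its validation ROC-AUC."""
    total = sum(aucs)
    return sum(p * a for p, a in zip(probs, aucs)) / total

# three hypothetical classifiers scoring one candidate interaction
p = weighted_vote([0.9, 0.4, 0.6], [0.85, 0.55, 0.70])
label = 1 if p >= 0.5 else 0
```

Classifiers trained on more informative (e.g. homolog) knowledge earn higher AUCs and therefore a larger say in the final decision.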

18. Improved Membership Probability for Moving Groups: Bayesian and Machine Learning Approaches

Science.gov (United States)

Lee, Jinhee; Song, Inseok

2018-01-01

Gravitationally unbound loose stellar associations (i.e., young nearby moving groups: moving groups hereafter) have been intensively explored because they are important in planet and disk formation studies, exoplanet imaging, and age calibration. Among the many efforts devoted to the search for moving group members, a Bayesian approach (e.g., using the code BANYAN) has become popular recently because of the many advantages it offers. However, the resultant membership probability needs to be carefully adopted because of its sensitive dependence on input models. In this study, we have developed an improved membership calculation tool focusing on the beta-Pic moving group. We made three improvements for building models used in BANYAN II: (1) updating the list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZUVW. Our improved tool can change membership probability by up to 70%. Membership probability is critical and must be better defined. For example, our code identifies only one third of the candidate members in SIMBAD that are believed to be kinematically associated with the beta-Pic moving group. Additionally, we performed cluster analysis of young nearby stars using an unsupervised machine learning approach. As more moving groups and their members are identified, the complexity and ambiguity in moving group configuration have increased. To clarify this issue, we analyzed ~4,000 X-ray bright young stellar candidates. Here, we present the preliminary results. By re-identifying moving groups with the least human intervention, we expect to understand the composition of the solar neighborhood. Moreover, better-defined moving group membership will help us understand star formation and evolution in relatively low-density environments, especially for the low-mass stars that will be identified in the coming Gaia release.

19. A scenario tree model for the Canadian Notifiable Avian Influenza Surveillance System and its application to estimation of probability of freedom and sample size determination.

Science.gov (United States)

Christensen, Jette; Stryhn, Henrik; Vallières, André; El Allaki, Farouk

2011-05-01

In 2008, Canada designed and implemented the Canadian Notifiable Avian Influenza Surveillance System (CanNAISS) with six surveillance activities in a phased-in approach. CanNAISS was a surveillance system because it had more than one surveillance activity or component in 2008: passive surveillance; pre-slaughter surveillance; and voluntary enhanced notifiable avian influenza surveillance. Our objectives were to give a short overview of two active surveillance components in CanNAISS; describe the CanNAISS scenario tree model and its application to estimation of probability of populations being free of NAI virus infection and sample size determination. Our data from the pre-slaughter surveillance component included diagnostic test results from 6296 serum samples representing 601 commercial chicken and turkey farms collected from 25 August 2008 to 29 January 2009. In addition, we included data from a sub-population of farms with high biosecurity standards: 36,164 samples from 55 farms sampled repeatedly over the 24 months study period from January 2007 to December 2008. All submissions were negative for Notifiable Avian Influenza (NAI) virus infection. We developed the CanNAISS scenario tree model, so that it will estimate the surveillance component sensitivity and the probability of a population being free of NAI at the 0.01 farm-level and 0.3 within-farm-level prevalences. We propose that a general model, such as the CanNAISS scenario tree model, may have a broader application than more detailed models that require disease specific input parameters, such as relative risk estimates. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
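The scenario-tree estimation of surveillance sensitivity and probability of freedom follows standard "freedom from disease" algebra; a sketch of the general approach, not the CanNAISS model itself (the test sensitivity, prior, and samples-per-farm below are illustrative, while the 0.01/0.3 design prevalences and the 601 farms are quoted from the abstract):

```python
def herd_sensitivity(se, p_within, n_samples):
    """P(at least one positive test on an infected farm)."""
    return 1 - (1 - se * p_within) ** n_samples

def surveillance_sensitivity(she, p_farm, n_farms):
    """P(the surveillance component detects at least one infected farm)."""
    return 1 - (1 - she * p_farm) ** n_farms

def prob_freedom(prior_free, sse):
    """Posterior P(population free | all surveillance results negative)."""
    return prior_free / (prior_free + (1 - prior_free) * (1 - sse))

she = herd_sensitivity(se=0.9, p_within=0.3, n_samples=10)    # ~10 samples/farm
sse = surveillance_sensitivity(she, p_farm=0.01, n_farms=601)
pfree = prob_freedom(prior_free=0.5, sse=sse)
```

Solving the first two equations for `n_samples` or `n_farms` at a target sensitivity is how such a model supports sample size determination; the CanNAISS scenario tree adds structure (risk groups, repeated sampling over time) beyond this sketch.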

20. Imitation learning of car driving skills with decision trees and random forests

Directory of Open Access Journals (Sweden)

Cichosz Paweł

2014-09-01

Full Text Available Machine learning is an appealing and useful approach to creating vehicle control algorithms, both for simulated and real vehicles. One common learning scenario that is often possible to apply is learning by imitation, in which the behavior of an exemplary driver provides training instances for a supervised learning algorithm. This article follows this approach in the domain of simulated car racing, using the TORCS simulator. In contrast to most prior work on imitation learning, a symbolic decision tree knowledge representation is adopted, which combines potentially high accuracy with human readability, an advantage that can be important in many applications. Decision trees are demonstrated to be capable of representing high-quality control models, reaching the performance level of sophisticated pre-designed algorithms. This is achieved by enhancing the basic imitation learning scenario to include active retraining, automatically triggered on control failures. It is also demonstrated how better stability and generalization can be achieved by sacrificing human readability and using decision tree model ensembles. The methodology for learning control models contributed by this article can hopefully be applied to solve real-world control tasks, as well as to develop video game bots.

1. Automated Sleep Stage Scoring by Decision Tree Learning

National Research Council Canada - National Science Library

Hanaoka, Masaaki

2001-01-01

... practice regarded as one of the most successful machine learning methods. In our method, first characteristics of EEG, EOG and EMG are compared with characteristic features of alpha waves, delta waves, sleep spindles, K-complexes and REMs...

2. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia.

Science.gov (United States)

Gabryś, Hubert S; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark

2018-01-01

The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0-6 months (early), 6-15 months (late), 15-24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior-posterior (AUC = 0.72) and the right-left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right-left (AUCs > 0.78), and the anterior-posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose-volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods. We demonstrated that incorporation of organ- and dose-shape descriptors is beneficial for xerostomia prediction in highly conformal radiotherapy treatments. Due to strong reliance on patient-specific, dose-independent factors, our results underscore the need for development of personalized data-driven risk profiles for NTCP models of xerostomia. The facilitated

3. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia

Directory of Open Access Journals (Sweden)

Hubert S. Gabryś

2018-03-01

Full Text Available Purpose The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. Material and methods A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0–6 months (early), 6–15 months (late), 15–24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. Results NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior–posterior (AUC = 0.72) and the right–left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right–left (AUCs > 0.78), and the anterior–posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose–volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing
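The single/nested cross-validation scheme mentioned above separates hyperparameter tuning (inner loop, training data only) from generalization estimation (outer loop). A generic sketch with stdlib only; the `tune` and `score` callables stand in for real model fitting and are purely illustrative:

```python
import random

def kfold(n, k, rng):
    """Split indices 0..n-1 into k (train, test) folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sum(folds[:i] + folds[i + 1:], []), folds[i]) for i in range(k)]

def nested_cv(n, outer_k, params, tune, score, seed=0):
    """Outer folds estimate generalization; hyperparameters are chosen
    by `tune` using only the outer-training data (the inner loop)."""
    rng = random.Random(seed)
    outer_scores = []
    for train, test in kfold(n, outer_k, rng):
        best = max(params, key=lambda p: tune(train, p))
        outer_scores.append(score(train, test, best))
    return sum(outer_scores) / outer_k

# toy usage: "tuning" prefers the parameter closest to 1.0
params = [0.1, 1.0, 10.0]
result = nested_cv(30, 5, params,
                   tune=lambda train, p: -abs(p - 1.0),
                   score=lambda train, test, best: best)
```

Because the test fold never influences the choice of `best`, the outer scores are unbiased estimates of generalization, which is the point of nesting.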

4. Modeling flash floods in ungauged mountain catchments of China: A decision tree learning approach for parameter regionalization

Science.gov (United States)

Ragettli, S.; Zhou, J.; Wang, H.; Liu, C.; Guo, L.

2017-12-01

Flash floods in small mountain catchments are one of the most frequent causes of loss of life and property from natural hazards in China. Hydrological models can be a useful tool for the anticipation of these events and the issuing of timely warnings. One of the main challenges of setting up such a system is finding appropriate model parameter values for ungauged catchments. Previous studies have shown that the transfer of parameter sets from hydrologically similar gauged catchments is one of the best performing regionalization methods. However, a remaining key issue is the identification of suitable descriptors of similarity. In this study, we use decision tree learning to explore parameter set transferability in the full space of catchment descriptors. For this purpose, a semi-distributed rainfall-runoff model is set up for 35 catchments in ten Chinese provinces. Hourly runoff data from in total 858 storm events are used to calibrate the model and to evaluate the performance of parameter set transfers between catchments. We then present a novel technique that uses the splitting rules of classification and regression trees (CART) for finding suitable donor catchments for ungauged target catchments. The ability of the model to detect flood events in assumed ungauged catchments is evaluated in a series of leave-one-out tests. We show that CART analysis increases the probability of detection of 10-year flood events in comparison to a conventional measure of physiographic-climatic similarity by up to 20%. Decision tree learning can outperform other regionalization approaches because it generates rules that optimally consider spatial proximity and physical similarity. Spatial proximity can be used as a selection criterion but is skipped in the case where no similar gauged catchments are in the vicinity. We conclude that the CART regionalization concept is particularly suitable for implementation in sparsely gauged and topographically complex environments where a proximity
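Donor selection from CART splitting rules can be pictured as routing every catchment down the fitted splits and matching leaf membership: gauged catchments that land in the same leaf as the ungauged target are its donors. The attribute names, thresholds, and values below are hypothetical, not those of the study:

```python
# Hypothetical splitting rules extracted from a fitted CART
rules = [("mean_elevation", 2000.0), ("aridity_index", 0.8)]

def leaf_of(catchment):
    """Route a catchment down the binary splits; the tuple of
    left/right decisions identifies its leaf."""
    return tuple(catchment[attr] <= thr for attr, thr in rules)

gauged = {
    "A": {"mean_elevation": 1500, "aridity_index": 0.5},
    "B": {"mean_elevation": 2500, "aridity_index": 0.9},
    "C": {"mean_elevation": 1800, "aridity_index": 0.6},
}
target = {"mean_elevation": 1700, "aridity_index": 0.55}  # ungauged

# donors = gauged catchments in the same leaf as the target
donors = [name for name, c in gauged.items()
          if leaf_of(c) == leaf_of(target)]
```

Unlike a fixed spatial-proximity rule, nothing here depends on the distance between catchments, which is why the approach still works when no gauged catchment is nearby.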

5. Activity in inferior parietal and medial prefrontal cortex signals the accumulation of evidence in a probability learning task.

Directory of Open Access Journals (Sweden)

Mathieu d'Acremont

Full Text Available In an uncertain environment, probabilities are key to predicting future events and making adaptive choices. However, little is known about how humans learn such probabilities and where and how they are encoded in the brain, especially when they concern more than two outcomes. During functional magnetic resonance imaging (fMRI), young adults learned the probabilities of uncertain stimuli through repetitive sampling. Stimuli represented payoffs and participants had to predict their occurrence to maximize their earnings. Choices indicated loss and risk aversion but unbiased estimation of probabilities. BOLD response in medial prefrontal cortex and angular gyri increased linearly with the probability of the currently observed stimulus, untainted by its value. Connectivity analyses during rest and task revealed that these regions belonged to the default mode network. The activation of past outcomes in memory is evoked as a possible mechanism to explain the engagement of the default mode network in probability learning. A BOLD response relating to value was detected only at decision time, mainly in striatum. It is concluded that activity in inferior parietal and medial prefrontal cortex reflects the amount of evidence accumulated in favor of competing and uncertain outcomes.

6. What subject matter questions motivate the use of machine learning approaches compared to statistical models for probability prediction?

Science.gov (United States)

Binder, Harald

2014-07-01

This is a discussion of the following papers: "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

7. ANALYSIS OF EFFECTIVENESS OF METHODOLOGICAL SYSTEM FOR PROBABILITY AND STOCHASTIC PROCESSES COMPUTER-BASED LEARNING FOR PRE-SERVICE ENGINEERS

Directory of Open Access Journals (Sweden)

E. Chumak

2015-04-01

Full Text Available The author substantiates that only methodological training systems for mathematical disciplines that implement information and communication technologies (ICT) can meet the requirements of the modern educational paradigm and make it possible to increase educational efficiency. Due to this fact, the necessity of developing a methodology for computer-based learning of probability theory and stochastic processes for pre-service engineers is underlined in the paper. The results of the experimental study analyzing the efficiency of this methodological system are shown. The analysis includes three main stages: ascertaining, searching and forming. The key criteria of the efficiency of the designed methodological system are the level of probabilistic and stochastic skills of students and their learning motivation. The effect of implementing the methodological system on the level of students’ IT literacy is shown in the paper, and the expansion of the range of ICT applications by students is described by the author. The level of formation of students’ learning motivation at the ascertaining and forming stages of the experiment is analyzed, and the level of intrinsic learning motivation for pre-service engineers is defined at these stages. For this purpose, the methodology of testing the students’ learning motivation in the chosen specialty is presented in the paper. An increase in the intrinsic learning motivation of the experimental group students (E group) relative to the control group students (C group) is demonstrated.

8. A Decision-Tree-Oriented Guidance Mechanism for Conducting Nature Science Observation Activities in a Context-Aware Ubiquitous Learning

Science.gov (United States)

Hwang, Gwo-Jen; Chu, Hui-Chun; Shih, Ju-Ling; Huang, Shu-Hsien; Tsai, Chin-Chung

2010-01-01

A context-aware ubiquitous learning environment is an authentic learning environment with personalized digital supports. While showing the potential of applying such a learning environment, researchers have also indicated the challenges of providing adaptive and dynamic support to individual students. In this paper, a decision-tree-oriented…

9. Learning Binomial Probability Concepts with Simulation, Random Numbers and a Spreadsheet

Science.gov (United States)

Rochowicz, John A., Jr.

2005-01-01

This paper introduces the reader to the concepts of binomial probability and simulation. A spreadsheet is used to illustrate these concepts. Random number generators are great technological tools for demonstrating the concepts of probability. Ideas of approximation, estimation, and mathematical usefulness provide numerous ways of learning…
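The spreadsheet simulation translates directly into code: generate uniform random numbers, count successes per trial, and estimate the binomial probabilities from relative frequencies (a minimal sketch):

```python
import random

def binomial_sim(n, p, trials, seed=0):
    """Estimate the distribution of X ~ Binomial(n, p) by simulation,
    mirroring a spreadsheet column of uniform random numbers."""
    rng = random.Random(seed)
    counts = [0] * (n + 1)
    for _ in range(trials):
        successes = sum(rng.random() < p for _ in range(n))
        counts[successes] += 1
    return [c / trials for c in counts]

est = binomial_sim(n=10, p=0.5, trials=20_000)
# the exact value P(X = 5) is C(10,5)/2**10 = 252/1024, about 0.246
```

Comparing `est[5]` against the exact value illustrates the ideas of approximation and estimation the record describes: the simulated frequency converges to the true probability as the number of trials grows.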

10. Unification of field theory and maximum entropy methods for learning probability densities

OpenAIRE

Kinney, Justin B.

2014-01-01

The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

11. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

Science.gov (United States)

Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

2016-06-01

Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
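A minimal IPCW computation uses a Kaplan-Meier estimate of the censoring distribution G(t): subjects whose status at the prediction horizon is known get weight 1/G, while subjects censored before the horizon are dropped (weight 0). The toy data and conventions below are a sketch of the general idea, not the paper's EHD pipeline:

```python
def censoring_survival(data, t):
    """Kaplan-Meier survival of the *censoring* process at time t.
    data: (time, event) pairs; event == 0 marks a censored subject,
    so censorings play the role of 'events' in this estimate."""
    g = 1.0
    for u in sorted({time for time, _ in data}):
        if u > t:
            break
        at_risk = sum(time >= u for time, _ in data)
        censored = sum(time == u and not ev for time, ev in data)
        if at_risk:
            g *= 1 - censored / at_risk
    return g

def ipcw_weights(data, horizon):
    """Weight 1/G for subjects whose status at `horizon` is known;
    0 for subjects censored before `horizon` (status unknown)."""
    weights = []
    for time, ev in data:
        if time >= horizon or ev:       # event seen, or followed past horizon
            weights.append(1.0 / censoring_survival(data, min(time, horizon)))
        else:                           # censored early: excluded
            weights.append(0.0)
    return weights

data = [(1, 0), (2, 1), (5, 1)]         # (time, event); event=0 -> censored
w = ipcw_weights(data, horizon=4)
```

The reweighting makes the retained subjects representative of the full cohort, so any weighted learner (decision trees, k-NN, GAMs, and the other methods listed above) can then be trained as if there were no censoring.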

12. Value and probability coding in a feedback-based learning task utilizing food rewards.

Science.gov (United States)

Tricomi, Elizabeth; Lempert, Karolina M

2015-01-01

For the consequences of our actions to guide behavior, the brain must represent different types of outcome-related information. For example, an outcome can be construed as negative because an expected reward was not delivered or because an outcome of low value was delivered. Thus behavioral consequences can differ in terms of the information they provide about outcome probability and value. We investigated the role of the striatum in processing probability-based and value-based negative feedback by training participants to associate cues with food rewards and then employing a selective satiety procedure to devalue one food outcome. Using functional magnetic resonance imaging, we examined brain activity related to receipt of expected rewards, receipt of devalued outcomes, omission of expected rewards, omission of devalued outcomes, and expected omissions of an outcome. Nucleus accumbens activation was greater for rewarding outcomes than devalued outcomes, but activity in this region did not correlate with the probability of reward receipt. Activation of the right caudate and putamen, however, was largest in response to rewarding outcomes relative to expected omissions of reward. The dorsal striatum (caudate and putamen) at the time of feedback also showed a parametric increase correlating with the trialwise probability of reward receipt. Our results suggest that the ventral striatum is sensitive to the motivational relevance, or subjective value, of the outcome, while the dorsal striatum codes for a more complex signal that incorporates reward probability. Value and probability information may be integrated in the dorsal striatum, to facilitate action planning and allocation of effort. Copyright © 2015 the American Physiological Society.

13. Supervised Learning of Two-Layer Perceptron under the Existence of External Noise — Learning Curve of Boolean Functions of Two Variables in Tree-Like Architecture —

Science.gov (United States)

Uezu, Tatsuya; Kiyokawa, Shuji

2016-06-01

We investigate the supervised batch learning of Boolean functions expressed by a two-layer perceptron with a tree-like structure. We adopt continuous weights (spherical model) and the Gibbs algorithm. We study the Parity and And machines and two types of noise, input and output noise, together with the noiseless case. We assume that only the teacher suffers from noise. By using the replica method, we derive the saddle point equations for order parameters under the replica symmetric (RS) ansatz. We study the critical value αC of the loading rate α above which the learning phase exists, for cases with and without noise. We find that αC is nonzero for the Parity machine, while it is zero for the And machine. We derive the exponents β̄ of the order parameters, which behave as (α − αC)^β̄ when α is near αC. Furthermore, in the Parity machine, when noise exists, we find a spin glass solution, in which the overlap between the teacher and student vectors is zero but that between student vectors is nonzero. We perform Markov chain Monte Carlo simulations by simulated annealing and also by exchange Monte Carlo simulations in both machines. In the Parity machine, we study the de Almeida-Thouless stability, and by comparing theoretical and numerical results, we find that there exist parameter regions where the RS solution is unstable, and that the spin glass solution is metastable or unstable. We also study the asymptotic learning behavior for large α and derive the exponents β̂ of the order parameters, which behave as α^(−β̂) when α is large, in both machines. By simulated annealing simulations, we confirm these results and conclude that learning takes place for the input noise case with any noise amplitude and for the output noise case when the probability that the teacher's output is reversed is less than one-half.

14. Promoting Active Learning When Teaching Introductory Statistics and Probability Using a Portfolio Curriculum Approach

Science.gov (United States)

Adair, Desmond; Jaeger, Martin; Price, Owen M.

2018-01-01

The use of a portfolio curriculum approach, when teaching a university introductory statistics and probability course to engineering students, is developed and evaluated. The portfolio curriculum approach, so called, as the students need to keep extensive records both as hard copies and digitally of reading materials, interactions with faculty,…

15. Active learning strategies for the deduplication of electronic patient data using classification trees.

Science.gov (United States)

Sariyar, M; Borg, A; Pommerening, K

2012-10-01

Supervised record linkage methods often require a clerical review to gain informative training data. Active learning means actively prompting the user to label data with special characteristics in order to minimise the review costs. We conducted an empirical evaluation to investigate whether a simple active learning strategy using binary comparison patterns is sufficient or if string metrics together with a more sophisticated algorithm are necessary to achieve high accuracies with a small training set. Based on medical registry data with different numbers of attributes, we used active learning to acquire training sets for classification trees, which were then used to classify the remaining data. Active learning for binary patterns means that every distinct comparison pattern represents a stratum from which one item is sampled. Active learning for patterns consisting of the Levenshtein string metric values uses an iterative process where the most informative and representative examples are added to the training set. In this context, we extended the active learning strategy by Sarawagi and Bhamidipaty (2002). On the original data set, active learning based on binary comparison patterns leads to the best results. When dropping four or six attributes, using string metrics leads to better results. In both cases, not more than 200 manually reviewed training examples are necessary. In record linkage applications where only forename, name and birthday are available as attributes, we suggest the sophisticated active learning strategy based on string metrics in order to achieve highly accurate results. We recommend the simple strategy if more attributes are available, as in our study. In both cases, active learning significantly reduces the amount of manual involvement in training data selection compared to usual record linkage settings. Copyright © 2012 Elsevier Inc. All rights reserved.
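The binary-pattern strategy described above, where every distinct comparison pattern forms a stratum and one pair per stratum is sent for clerical review, can be sketched as follows; the record layout and function names are invented for illustration.

```python
from collections import defaultdict


def binary_pattern(rec_a, rec_b, fields):
    """1 where the two records agree exactly on a field, 0 otherwise."""
    return tuple(int(rec_a[f] == rec_b[f]) for f in fields)


def select_for_review(pairs, fields):
    """Group candidate record pairs by their binary comparison pattern and
    pick one pair from each stratum for manual labeling."""
    strata = defaultdict(list)
    for a, b in pairs:
        strata[binary_pattern(a, b, fields)].append((a, b))
    return {pattern: items[0] for pattern, items in strata.items()}
```

With k compared attributes there are at most 2^k strata, so the number of pairs a reviewer must label is bounded regardless of the data set size.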

16. Decision tree-based learning to predict patient controlled analgesia consumption and readjustment

Directory of Open Access Journals (Sweden)

Hu Yuh-Jyh

2012-11-01

Full Text Available Abstract Background Appropriate postoperative pain management contributes to earlier mobilization, shorter hospitalization, and reduced cost. The undertreatment of pain may impede short-term recovery and have a detrimental long-term effect on health. This study focuses on Patient Controlled Analgesia (PCA), which is a delivery system for pain medication. This study proposes and demonstrates how to use machine learning and data mining techniques to predict analgesic requirements and PCA readjustment. Methods The sample in this study included 1099 patients. Every patient was described by 280 attributes, including the class attribute. In addition to commonly studied demographic and physiological factors, this study emphasizes attributes related to PCA. We used decision tree-based learning algorithms to predict analgesic consumption and PCA control readjustment based on the first few hours of PCA medications. We also developed a nearest neighbor-based data cleaning method to alleviate the class-imbalance problem in PCA setting readjustment prediction. Results The prediction accuracies of total analgesic consumption (continuous dose and PCA dose) and PCA analgesic requirement (PCA dose only) by an ensemble of decision trees were 80.9% and 73.1%, respectively. Decision tree-based learning outperformed Artificial Neural Network, Support Vector Machine, Random Forest, Rotation Forest, and Naïve Bayesian classifiers in analgesic consumption prediction. The proposed data cleaning method improved the performance of every learning method in this study of PCA setting readjustment prediction. Comparative analysis identified the informative attributes from the data mining models and compared them with the correlates of analgesic requirement reported in previous works. Conclusion This study presents a real-world application of data mining to anesthesiology. Unlike previous research, this study considers a wider variety of predictive factors, including PCA
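The abstract does not spell out the nearest neighbor-based cleaning rule; the sketch below shows one common variant of such cleaning (dropping majority-class examples whose nearest neighbour lies in the minority class, in the spirit of Tomek-link removal), purely as an illustration of the idea and not as the paper's method.

```python
def clean_majority(points, labels, majority):
    """Return indices of examples to keep: minority examples always,
    majority examples only when their nearest neighbour is also
    majority-class (borderline majority examples are dropped)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    keep = []
    for i, (p, label) in enumerate(zip(points, labels)):
        if label != majority:
            keep.append(i)
            continue
        # 1-nearest neighbour among all other examples
        nn = min((j for j in range(len(points)) if j != i),
                 key=lambda j: sqdist(points[j], p))
        if labels[nn] == majority:
            keep.append(i)
    return keep
```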

17. A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees

Science.gov (United States)

Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.

2018-04-01

The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership to void/wall or filament/cluster with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature and then applying support vector machine (SVM) to different feature sets. To this end, we have discovered that the shape and depth of the merger tree, formation time, and density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement in the original classification algorithm by performing LU decomposition of the distance matrix computed by the feature vectors and then using the output of the decomposition as input vectors for SVM.

18. Probability tales

CERN Document Server

Grinstead, Charles M; Snell, J Laurie

2011-01-01

This book explores four real-world topics through the lens of probability theory. It can be used to supplement a standard text in probability or statistics. Most elementary textbooks present the basic theory and then illustrate the ideas with some neatly packaged examples. Here the authors assume that the reader has seen, or is learning, the basic theory from another book and concentrate in some depth on the following topics: streaks, the stock market, lotteries, and fingerprints. This extended format allows the authors to present multiple approaches to problems and to pursue promising side discussions in ways that would not be possible in a book constrained to cover a fixed set of topics. To keep the main narrative accessible, the authors have placed the more technical mathematical details in appendices. The appendices can be understood by someone who has taken one or two semesters of calculus.

19. Unification of field theory and maximum entropy methods for learning probability densities

Science.gov (United States)

Kinney, Justin B.

2015-09-01

The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
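As a concrete, finite-dimensional analogue of the maximum entropy estimates discussed above: on a discrete grid, the maximum entropy distribution matching a mean constraint has the exponential form p_i ∝ exp(λ x_i), and λ can be found by bisection because the mean is monotone in λ. The function below is an illustrative sketch, not the paper's software.

```python
from math import exp


def maxent_on_grid(grid, target_mean, iters=80):
    """Maximum entropy distribution p_i ∝ exp(lam * x_i) on a finite grid,
    with lam chosen by bisection so the mean matches target_mean."""
    def dist(lam):
        w = [exp(lam * x) for x in grid]
        z = sum(w)                      # partition function
        return [wi / z for wi in w]

    lo, hi = -50.0, 50.0                # bracket for the Lagrange multiplier
    for _ in range(iters):
        lam = (lo + hi) / 2
        mean = sum(x * p for x, p in zip(grid, dist(lam)))
        if mean < target_mean:
            lo = lam
        else:
            hi = lam
    return dist((lo + hi) / 2)
```

The Bayesian field theory estimates of the paper reduce to distributions of exactly this exponential-family form in the infinite smoothness limit.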

20. Unification of field theory and maximum entropy methods for learning probability densities.

Science.gov (United States)

Kinney, Justin B

2015-09-01

The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

1. Talking Trees

Science.gov (United States)

Tolman, Marvin

2005-01-01

Students love outdoor activities and will love them even more when they build confidence in their tree identification and measurement skills. Through these activities, students will learn to identify the major characteristics of trees and discover how the pace--a nonstandard measuring unit--can be used to estimate not only distances but also the…

2. The effect of the fragmentation problem in decision tree learning applied to the search for single top quark production

International Nuclear Information System (INIS)

Vilalta, R; Ocegueda-Hernandez, F; Valerio, R; Watts, G

2010-01-01

Decision tree learning constitutes a suitable approach to classification due to its ability to partition the variable space into regions of class-uniform events, while providing a structure amenable to interpretation, in contrast to other methods such as neural networks. But an inherent limitation of decision tree learning is the progressive lessening of the statistical support of the final classifier as clusters of single-class events are split on every partition, a problem known as the fragmentation problem. We describe a software system called DTFE, for Decision Tree Fragmentation Evaluator, that measures the degree of fragmentation caused by a decision tree learner on every event cluster. Clusters are found through a decomposition of the data using a technique known as Spectral Clustering. Each cluster is analyzed in terms of the number and type of partitions induced by the decision tree. Our domain of application lies in the search for single top quark production, a challenging problem due to large and similar backgrounds, low energetic signals, and low number of jets. The output of the machine-learning software tool consists of a series of statistics describing the degree of data fragmentation.
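In its simplest form, the per-cluster fragmentation measurement reduces to counting how many distinct leaves of the learned tree each cluster's events are scattered across. The sketch below illustrates that counting step only; the clustering itself (spectral clustering in the paper) and the full DTFE statistics are not reproduced.

```python
from collections import defaultdict


def fragmentation(cluster_of, leaf_of):
    """For each cluster, count the distinct decision-tree leaves its
    events fall into (1 means the cluster was not fragmented).

    cluster_of: event id -> cluster id
    leaf_of:    event id -> leaf reached in the trained tree
    """
    leaves = defaultdict(set)
    for event, cluster in cluster_of.items():
        leaves[cluster].add(leaf_of[event])
    return {cluster: len(s) for cluster, s in leaves.items()}
```

A cluster split across many leaves has little statistical support in each, which is exactly the degradation the fragmentation problem describes.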

3. The Hybrid of Classification Tree and Extreme Learning Machine for Permeability Prediction in Oil Reservoir

KAUST Repository

Prasetyo Utomo, Chandra

2011-06-01

Permeability is an important parameter connected with oil reservoirs. Predicting the permeability could save millions of dollars. Unfortunately, petroleum engineers have faced numerous challenges arriving at cost-efficient predictions. Much work has been carried out to solve this problem. The main challenge is to handle the high range of permeability in each reservoir. For about a hundred years, mathematicians and engineers have tried to deliver the best prediction models. However, none of them have produced satisfying results. In the last two decades, artificial intelligence models have been used. The current best prediction model in permeability prediction is the extreme learning machine (ELM). It produces fairly good results but a clear explanation of the model is hard to come by because it is so complex. The aim of this research is to propose a way out of this complexity through the design of a hybrid intelligent model. In this proposal, the system combines classification and regression models to predict the permeability value. These are based on the well logs data. In order to handle the high range of the permeability value, a classification tree is utilized. A benefit of this innovation is that the tree represents knowledge in a clear and succinct fashion and thereby avoids the complexity of all previous models. Finally, it is important to note that the ELM is used as a final predictor. Results demonstrate that this proposed hybrid model performs better when compared with support vector machines (SVM) and ELM in terms of the correlation coefficient. Moreover, the classification tree model potentially leads to better communication among petroleum engineers concerning this important process and has wider implications for oil reservoir management efficiency.

4. Preventing KPI Violations in Business Processes based on Decision Tree Learning and Proactive Runtime Adaptation

Directory of Open Access Journals (Sweden)

Dimka Karastoyanova

2012-01-01

Full Text Available The performance of business processes is measured and monitored in terms of Key Performance Indicators (KPIs). If the monitoring results show that the KPI targets are violated, the underlying reasons have to be identified and the process should be adapted accordingly to address the violations. In this paper we propose an integrated monitoring, prediction and adaptation approach for preventing KPI violations of business process instances. KPIs are monitored continuously while the process is executed. Additionally, based on KPI measurements of historical process instances we use decision tree learning to construct classification models which are then used to predict the KPI value of an instance while it is still running. If a KPI violation is predicted, we identify adaptation requirements and adaptation strategies in order to prevent the violation.

5. Alignment-free genome tree inference by learning group-specific distance metrics.

Science.gov (United States)

Patil, Kaustubh R; McHardy, Alice C

2013-01-01

Understanding the evolutionary relationships between organisms is vital for their in-depth study. Gene-based methods are often used to infer such relationships, which are not without drawbacks. One can now attempt to use genome-scale information, because of the ever-increasing number of genomes available. This opportunity also presents a challenge in terms of computational efficiency. Two fundamentally different methods are often employed for sequence comparisons, namely alignment-based and alignment-free methods. Alignment-free methods rely on the genome signature concept and provide a computationally efficient way that is also applicable to nonhomologous sequences. The genome signature contains evolutionary signal as it is more similar for closely related organisms than for distantly related ones. We used genome-scale sequence information to infer taxonomic distances between organisms without additional information such as gene annotations. We propose a method to improve genome tree inference by learning specific distance metrics over the genome signature for groups of organisms with similar phylogenetic, genomic, or ecological properties. Specifically, our method learns a Mahalanobis metric for a set of genomes and a reference taxonomy to guide the learning process. By applying this method to more than a thousand prokaryotic genomes, we showed that, indeed, better distance metrics could be learned for most of the 18 groups of organisms tested here. Once a group-specific metric is available, it can be used to estimate the taxonomic distances for other sequenced organisms from the group. This study also presents a large-scale comparison of 10 methods: 9 alignment-free and 1 alignment-based.
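For reference, the Mahalanobis distance at the heart of the method has the form d_M(x, y) = sqrt((x − y)^T M (x − y)) for a positive-definite matrix M; below is a minimal illustrative implementation over (here two-dimensional) signature vectors, with M simply given rather than learned against a reference taxonomy as in the paper.

```python
def mahalanobis(x, y, M):
    """sqrt((x - y)^T M (x - y)) for a symmetric positive-definite M,
    given as a nested list; x and y are equal-length vectors."""
    d = [xi - yi for xi, yi in zip(x, y)]
    # matrix-vector product M d
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return sum(di * mdi for di, mdi in zip(d, Md)) ** 0.5
```

With M equal to the identity this reduces to the ordinary Euclidean distance; the learned M reweights and correlates signature dimensions so that distances better track taxonomic distance within a group.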

6. A Study of Students' Learning Styles, Discipline Attitudes and Knowledge Acquisition in Technology-Enhanced Probability and Statistics Education

Science.gov (United States)

Christou, Nicolas; Dinov, Ivo D.

2011-01-01

Many modern technological advances have direct impact on the format, style and efficacy of delivery and consumption of educational content. For example, various novel communication and information technology tools and resources enable efficient, timely, interactive and graphical demonstrations of diverse scientific concepts. In this manuscript, we report on a meta-study of 3 controlled experiments of using the Statistics Online Computational Resources in probability and statistics courses. Web-accessible SOCR applets, demonstrations, simulations and virtual experiments were used in different courses as treatment and compared to matched control classes utilizing traditional pedagogical approaches. Qualitative and quantitative data we collected for all courses included Felder-Silverman-Soloman index of learning styles, background assessment, pre and post surveys of attitude towards the subject, end-point satisfaction survey, and varieties of quiz, laboratory and test scores. Our findings indicate that students' learning styles and attitudes towards a discipline may be important confounds of their final quantitative performance. The observed positive effects of integrating information technology with established pedagogical techniques may be valid across disciplines within the broader spectrum courses in the science education curriculum. The two critical components of improving science education via blended instruction include instructor training, and development of appropriate activities, simulations and interactive resources. PMID:21603097

7. A Study of Students' Learning Styles, Discipline Attitudes and Knowledge Acquisition in Technology-Enhanced Probability and Statistics Education.

Science.gov (United States)

Christou, Nicolas; Dinov, Ivo D

2010-09-01

Many modern technological advances have direct impact on the format, style and efficacy of delivery and consumption of educational content. For example, various novel communication and information technology tools and resources enable efficient, timely, interactive and graphical demonstrations of diverse scientific concepts. In this manuscript, we report on a meta-study of 3 controlled experiments of using the Statistics Online Computational Resources in probability and statistics courses. Web-accessible SOCR applets, demonstrations, simulations and virtual experiments were used in different courses as treatment and compared to matched control classes utilizing traditional pedagogical approaches. Qualitative and quantitative data we collected for all courses included Felder-Silverman-Soloman index of learning styles, background assessment, pre and post surveys of attitude towards the subject, end-point satisfaction survey, and varieties of quiz, laboratory and test scores. Our findings indicate that students' learning styles and attitudes towards a discipline may be important confounds of their final quantitative performance. The observed positive effects of integrating information technology with established pedagogical techniques may be valid across disciplines within the broader spectrum courses in the science education curriculum. The two critical components of improving science education via blended instruction include instructor training, and development of appropriate activities, simulations and interactive resources.

8. Plant MicroRNA Prediction by Supervised Machine Learning Using C5.0 Decision Trees

Directory of Open Access Journals (Sweden)

Philip H. Williams

2012-01-01

Full Text Available MicroRNAs (miRNAs) are nonprotein coding RNAs between 20 and 22 nucleotides long that attenuate protein production. Different types of sequence data are being investigated for novel miRNAs, including genomic and transcriptomic sequences. A variety of machine learning methods have successfully predicted miRNA precursors, mature miRNAs, and other nonprotein coding sequences. MirTools, mirDeep2, and miRanalyzer require “read count” to be included with the input sequences, which restricts their use to deep-sequencing data. Our aim was to train a predictor using a cross-section of different species to accurately predict miRNAs outside the training set. We wanted a system that did not require read-count for prediction and could therefore be applied to short sequences extracted from genomic, EST, or RNA-seq sources. A miRNA-predictive decision-tree model has been developed by supervised machine learning. It only requires that the corresponding genome or transcriptome is available within a sequence window that includes the precursor candidate so that the required sequence features can be collected. Some of the most critical features for training the predictor are the miRNA:miRNA∗ duplex energy and the number of mismatches in the duplex. We present a cross-species plant miRNA predictor with 84.08% sensitivity and 98.53% specificity based on rigorous testing by leave-one-out validation.

9. Plant MicroRNA Prediction by Supervised Machine Learning Using C5.0 Decision Trees.

Science.gov (United States)

Williams, Philip H; Eyles, Rod; Weiller, Georg

2012-01-01

MicroRNAs (miRNAs) are nonprotein coding RNAs between 20 and 22 nucleotides long that attenuate protein production. Different types of sequence data are being investigated for novel miRNAs, including genomic and transcriptomic sequences. A variety of machine learning methods have successfully predicted miRNA precursors, mature miRNAs, and other nonprotein coding sequences. MirTools, mirDeep2, and miRanalyzer require "read count" to be included with the input sequences, which restricts their use to deep-sequencing data. Our aim was to train a predictor using a cross-section of different species to accurately predict miRNAs outside the training set. We wanted a system that did not require read-count for prediction and could therefore be applied to short sequences extracted from genomic, EST, or RNA-seq sources. A miRNA-predictive decision-tree model has been developed by supervised machine learning. It only requires that the corresponding genome or transcriptome is available within a sequence window that includes the precursor candidate so that the required sequence features can be collected. Some of the most critical features for training the predictor are the miRNA:miRNA(∗) duplex energy and the number of mismatches in the duplex. We present a cross-species plant miRNA predictor with 84.08% sensitivity and 98.53% specificity based on rigorous testing by leave-one-out validation.

10. Learning to Detect Traffic Incidents from Data Based on Tree Augmented Naive Bayesian Classifiers

Directory of Open Access Journals (Sweden)

Dawei Li

2017-01-01

Full Text Available This study develops a tree augmented naive Bayesian (TAN) classifier based incident detection algorithm. Compared with the Bayesian networks based detection algorithms developed in the previous studies, this algorithm has less dependency on experts’ knowledge. The structure of the TAN classifier for incident detection is learned from data. The discretization of continuous attributes is processed using an entropy-based method automatically. A simulation dataset on the section of the Ayer Rajah Expressway (AYE) in Singapore is used to demonstrate the development of the proposed algorithm, including wavelet denoising, normalization, entropy-based discretization, and structure learning. The performance of the TAN based algorithm is evaluated compared with the previously developed Bayesian network (BN) based and multilayer feed forward (MLF) neural networks based algorithms with the same AYE data. The experiment results show that the TAN based algorithms perform better than the BN classifiers and have a similar performance to the MLF based algorithm. However, the TAN based algorithm may have a wider range of applications because the theory of TAN classifiers is much less complicated than MLF. The experiments also show that the TAN classifier based algorithm is significantly faster in model training and calibration than MLF.
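The entropy-based discretization step mentioned above can be illustrated with a minimal binary-split version: choose the threshold that minimises the weighted class entropy of the two resulting intervals. This is a sketch of the idea only; the paper's automatic procedure (typically recursive, with a stopping criterion) is not reproduced.

```python
from math import log2


def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)


def best_cut(values, labels):
    """Return the threshold minimising the weighted class entropy of a
    binary split of a continuous attribute."""
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        score = (len(left) * entropy(left)
                 + len(right) * entropy(right)) / len(pairs)
        if score < best_score:
            # cut midway between the two neighbouring values
            best, best_score = (pairs[i - 1][0] + pairs[i][0]) / 2, score
    return best
```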

11. Learning Dispatching Rules for Scheduling: A Synergistic View Comprising Decision Trees, Tabu Search and Simulation

Directory of Open Access Journals (Sweden)

Atif Shahzad

2016-02-01

Full Text Available A promising approach for effective shop scheduling that synergizes the benefits of combinatorial optimization, supervised learning and discrete-event simulation is presented. Though dispatching rules are widely used by shop scheduling practitioners, only ordinary performance rules are known; hence, dynamic generation of dispatching rules is desired to make them more effective in changing shop conditions. Meta-heuristics are able to perform quite well and carry more knowledge of the problem domain, though at the cost of prohibitive computational effort in real-time. The primary purpose of this research lies in an offline extraction of this domain knowledge using decision trees to generate simple if-then rules that subsequently act as dispatching rules for scheduling in an online manner. We use a similarity index to identify parametric and structural similarity in problem instances in order to implicitly support the learning algorithm for effective rule generation, and a quality index for relative ranking of the dispatching decisions. Maximum lateness is used as the scheduling objective in a job shop scheduling environment.

12. Bayesian and Classical Machine Learning Methods: A Comparison for Tree Species Classification with LiDAR Waveform Signatures

Directory of Open Access Journals (Sweden)

Tan Zhou

2017-12-01

Full Text Available A plethora of information contained in full-waveform (FW) Light Detection and Ranging (LiDAR) data offers prospects for characterizing vegetation structures. This study aims to investigate the capacity of FW LiDAR data alone for tree species identification through the integration of waveform metrics with machine learning methods and Bayesian inference. Specifically, we first conducted automatic tree segmentation based on the waveform-based canopy height model (CHM) using three approaches: TreeVaW, watershed algorithms, and the combination of TreeVaW and watershed (TW) algorithms. Subsequently, the Random forests (RF) and Conditional inference forests (CF) models were employed to identify important tree-level waveform metrics derived from three distinct sources: raw waveforms, composite waveforms, the waveform-based point cloud, and the combined variables from these three sources. Further, we discriminated tree (gray pine, blue oak, interior live oak) and shrub species through the RF, CF and Bayesian multinomial logistic regression (BMLR) using important waveform metrics identified in this study. Results of the tree segmentation demonstrated that the TW algorithms outperformed the other algorithms for delineating individual tree crowns. The CF model overcomes the waveform metrics selection bias caused by the RF model, which favors correlated metrics, and enhances the accuracy of subsequent classification. We also found that composite waveforms are more informative than raw waveforms and the waveform-based point cloud for characterizing tree species in our study area. Both classical machine learning methods (the RF and CF) and the BMLR generated satisfactory average overall accuracy (74% for the RF, 77% for the CF and 81% for the BMLR), and the BMLR slightly outperformed the other two methods. However, these three methods suffered from low individual classification accuracy for the blue oak, which is prone to being misclassified as the interior live oak due

13. Psychomotor development and learning difficulties in preschool children with probable attention deficit hyperactivity disorder: An epidemiological study in Navarre and La Rioja.

Science.gov (United States)

Marín-Méndez, J J; Borra-Ruiz, M C; Álvarez-Gómez, M J; Soutullo Esperón, C

2017-10-01

ADHD symptoms begin to appear at preschool age. ADHD may have a significant negative impact on academic performance. In Spain, there are no standardized tools for detecting ADHD at preschool age, nor is there data about the incidence of this disorder. To evaluate developmental factors and learning difficulties associated with probable ADHD and to assess the impact of ADHD in school performance. We conducted a population-based study with a stratified multistage proportional cluster sample design. We found significant differences between probable ADHD and parents' perception of difficulties in expressive language, comprehension, and fine motor skills, as well as in emotions, concentration, behaviour, and relationships. Around 34% of preschool children with probable ADHD showed global learning difficulties, mainly in patients with the inattentive type. According to the multivariate analysis, learning difficulties were significantly associated with both delayed psychomotor development during the first 3 years of life (OR: 5.57) as assessed by parents, and probable ADHD (OR: 2.34). CONCLUSIONS: There is a connection between probable ADHD in preschool children and parents' perception of difficulties in several dimensions of development and learning. Early detection of ADHD at preschool ages is necessary to start prompt and effective clinical and educational interventions. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

14. A Machine Learning Method for Co-Registration and Individual Tree Matching of Forest Inventory and Airborne Laser Scanning Data

Directory of Open Access Journals (Sweden)

Sebastian Lamprecht

2017-05-01

Full Text Available Determining the exact position of a forest inventory plot—and hence the position of the sampled trees—is often hampered by poor Global Navigation Satellite System (GNSS) signal quality beneath the forest canopy. Inaccurate geo-references hamper the performance of models that aim to retrieve useful information from remote sensing data of high spatial resolution (e.g., species classification or timber volume estimation). This restriction is even more severe on the level of individual trees. The objective of this study was to develop a post-processing strategy to improve the positional accuracy of GNSS-measured sample-plot centers and to develop a method to automatically match trees within a terrestrial sample plot to aerially detected trees. We propose a new method which uses a random forest classifier to estimate the matching probability of each terrestrial-reference and aerially detected tree pair, which gives the opportunity to assess the reliability of the results. We investigated 133 sample plots of the Third German National Forest Inventory (BWI, 2011–2012) within the German federal state of Rhineland-Palatinate. For training and objective validation, synthetic forest stands were modeled using the Waldplaner 2.0 software. Our method has achieved an overall accuracy of 82.7% for co-registration and 89.1% for tree matching. With our method, 60% of the investigated plots could be successfully relocated. The probabilities provided by the algorithm are an objective indicator of the reliability of a specific result which could be incorporated into quantitative models to increase the performance of forest attribute estimations.
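The pair-scoring idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pairwise features below (size difference, height difference, horizontal distance) and the synthetic labels are invented, and scikit-learn's random forest stands in for whatever configuration the study used. The matching probability is simply the forest's class-1 probability for a candidate (terrestrial tree, aerial tree) pair.

```python
# Hypothetical sketch: score candidate (terrestrial, aerial) tree pairs with a
# random forest and read off the class probability as a matching probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invented pairwise features: |stem size diff|, |height diff|, horizontal distance.
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Synthetic ground truth: a pair "matches" when all differences are small.
y = (X.sum(axis=1) < 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
match_prob = clf.predict_proba(X[:5])[:, 1]  # matching probability per pair
```

A probability output (rather than a hard label) is what lets downstream models weight each matched pair by its reliability, as the abstract suggests.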

15. Learning from examples - Generation and evaluation of decision trees for software resource analysis

Science.gov (United States)

Selby, Richard W.; Porter, Adam A.

1988-01-01

A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.

16. Effects of phentermine and pentobarbital on choice processes during multiple probability learning (MPL) and decision processes manipulated by pay-off conditions

NARCIS (Netherlands)

Volkerts, ER; VanLaar, MW; Verbaten, MN; Mulder, G; Maes, RAA

1997-01-01

The primary research question in this investigation concerned whether arousal manipulation by a stimulant (phentermine 20 mg) and a depressant (pentobarbital 100 mg) will oppositely affect choice behaviour in a probability learning task and decision processes manipulated by pay-off. A 3-source

17. The Hybrid of Classification Tree and Extreme Learning Machine for Permeability Prediction in Oil Reservoir

KAUST Repository

Prasetyo Utomo, Chandra

2011-01-01

the permeability value. These are based on the well logs data. In order to handle the high range of the permeability value, a classification tree is utilized. A benefit of this innovation is that the tree represents knowledge in a clear and succinct fashion

18. Assessment of Student Learning Associated with Tree Thinking in an Undergraduate Introductory Organismal Biology Course

Science.gov (United States)

Smith, James J.; Cheruvelil, Kendra Spence; Auvenshine, Stacie

2013-01-01

Phylogenetic trees provide visual representations of ancestor-descendant relationships, a core concept of evolutionary theory. We introduced "tree thinking" into our introductory organismal biology course (freshman/sophomore majors) to help teach organismal diversity within an evolutionary framework. Our instructional strategy consisted…

19. Collaborative multi-agent reinforcement learning based on a novel coordination tree frame with dynamic partition

NARCIS (Netherlands)

Fang, M.; Groen, F.C.A.; Li, H.; Zhang, J.

2014-01-01

In the research of team Markov games, computing the coordinate team dynamically and determining the joint action policy are the main problems. To deal with the first problem, a dynamic team partitioning method is proposed based on a novel coordinate tree frame. We build a coordinate tree with

20. Learning in data-limited multimodal scenarios: Scandent decision forests and tree-based features.

Science.gov (United States)

Hor, Soheil; Moradi, Mehdi

2016-12-01

Incomplete and inconsistent datasets often pose difficulties in multimodal studies. We introduce the concept of scandent decision trees to tackle these difficulties. Scandent trees are decision trees that optimally mimic the partitioning of the data determined by another decision tree, and crucially, use only a subset of the feature set. We show how scandent trees can be used to enhance the performance of decision forests trained on a small number of multimodal samples when we have access to larger datasets with vastly incomplete feature sets. Additionally, we introduce the concept of tree-based feature transforms in the decision forest paradigm. When combined with scandent trees, the tree-based feature transforms enable us to train a classifier on a rich multimodal dataset, and use it to classify samples with only a subset of features of the training data. Using this methodology, we build a model trained on MRI and PET images of the ADNI dataset, and then test it on cases with only MRI data. We show that this is significantly more effective in staging of cognitive impairments compared to a similar decision forest model trained and tested on MRI only, or one that uses other kinds of feature transform applied to the MRI data. Copyright © 2016. Published by Elsevier B.V.

1. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery Using a Probabilistic Learning Framework

Science.gov (United States)

Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna

2015-01-01

Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

2. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery using a Probabilistic Learning Framework

Science.gov (United States)

Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.

2015-12-01

Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

3. AnswerTree – a hyperplace-based game for collaborative mobile learning

OpenAIRE

Moore, Adam; Goulding, James; Brown, Elizabeth; Swan, Jerry

2009-01-01

In this paper we present AnswerTree, a collaborative mobile location-based educational game designed to teach 8-12 year olds about trees and wildlife within the University of Nottingham campus. The activity is designed around collecting virtual cards (similar in nature to the popular Top Trumps™ games) containing graphics and information about notable trees. Each player begins by collecting one card from a game location, but then he or she can only collect further cards by answering question...

4. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

Science.gov (United States)

Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

2016-07-07

Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
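The Dice similarity coefficient (DSC) used above to score each contour against the true one is a standard overlap measure, 2|A∩B| / (|A| + |B|). A generic sketch on boolean masks (not the authors' code; the toy masks are invented):

```python
# Dice similarity coefficient of two binary segmentation masks.
import numpy as np

def dice(a, b):
    """2 * |A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16 "voxels"
seg   = np.zeros((8, 8), bool); seg[3:7, 2:6] = True    # 16 "voxels", 12 overlap
d = dice(truth, seg)  # 2 * 12 / (16 + 16) = 0.75
```

A DSC of 1.0 means perfect agreement with the reference contour; 0.0 means no overlap, which is why thresholds such as the 0.650 floor reported above are meaningful.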

5. Decision-Tree Program

Science.gov (United States)

Buntine, Wray

1994-01-01

IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.

6. Better Diffusion Segmentation in Acute Ischemic Stroke Through Automatic Tree Learning Anomaly Segmentation

Directory of Open Access Journals (Sweden)

Jens K. Boldsen

2018-04-01

Full Text Available Stroke is the second most common cause of death worldwide, responsible for 6.24 million deaths in 2015 (about 11% of all deaths). Three out of four stroke survivors suffer long term disability, as many cannot return to their prior employment or live independently. Eighty-seven percent of strokes are ischemic. As an increasing volume of ischemic brain tissue proceeds to permanent infarction in the hours following the onset, immediate treatment is pivotal to increase the likelihood of good clinical outcome for the patient. Triaging stroke patients for active therapy requires assessment of the volumes of salvageable and irreversibly damaged tissue, respectively. With Magnetic Resonance Imaging (MRI), diffusion-weighted imaging is commonly used to assess the extent of permanently damaged tissue, the core lesion. To speed up and standardize decision-making in acute stroke management we present a fully automated algorithm, ATLAS, for delineating the core lesion. We compare performance to the widely used threshold-based methodology, as well as a recently proposed state-of-the-art algorithm: COMBAT Stroke. ATLAS is a machine learning algorithm trained to match the lesion delineation by human experts. The algorithm utilizes decision trees along with spatial pre- and post-regularization to outline the lesion. As input data the algorithm takes images from 108 patients with acute anterior circulation stroke from the I-Know multicenter study. We divided the data into training and test data using leave-one-out cross validation to assess performance in independent patients. Performance was quantified by the Dice index. The median Dice coefficient of the ATLAS algorithm was 0.6122, which was significantly higher than that of COMBAT Stroke, with a median Dice coefficient of 0.5636 (p < 0.0001), and the best possible performing methods based on thresholding of the diffusion-weighted images (median Dice coefficient: 0.3951) or the apparent diffusion coefficient (median Dice coefficient

7. Ruin probabilities

DEFF Research Database (Denmark)

Asmussen, Søren; Albrecher, Hansjörg

The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities......, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...... updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence....
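For orientation, Lundberg's inequality and the Cramér-Lundberg approximation listed among the book's topics take the following standard form in the classical compound Poisson model (initial reserve $u$, premium rate $c$, claim arrival rate $\lambda$, i.i.d. claim sizes $X$); this is textbook material, not a quotation from the book:

```latex
\psi(u) \le e^{-\gamma u},
\qquad
\psi(u) \sim C\, e^{-\gamma u} \quad (u \to \infty),
```

where $\psi(u)$ is the ruin probability and the adjustment coefficient $\gamma > 0$ is the positive root of $\lambda\left(\mathbb{E}\!\left[e^{\gamma X}\right] - 1\right) = c\gamma$.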

8. Generalized Probability-Probability Plots

NARCIS (Netherlands)

Mushkudiani, N.A.; Einmahl, J.H.J.

2004-01-01

We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real valued data. These plots, that are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

9. Learning about Probability from Text and Tables: Do Color Coding and Labeling through an Interactive-User Interface Help?

Science.gov (United States)

Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.

2016-01-01

Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…

10. Fault tree handbook

International Nuclear Information System (INIS)

Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

1981-01-01

This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
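The quantitative evaluation step described above combines basic-event probabilities through the tree's gates. A minimal sketch, assuming statistically independent basic events (the handbook's simplest case; the two-level example tree is invented): an AND gate multiplies its inputs' probabilities, while an OR gate uses the complement rule.

```python
# Quantitative fault tree evaluation under the independence assumption.
from math import prod

def p_and(probs):
    """AND gate: all inputs must fail, so probabilities multiply."""
    return prod(probs)

def p_or(probs):
    """OR gate: any input suffices, so P = 1 - prod(1 - p_i)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical tree: TOP = OR( AND(A, B), C )
p_a, p_b, p_c = 0.01, 0.02, 0.005
p_top = p_or([p_and([p_a, p_b]), p_c])  # 1 - (1 - 0.0002)(1 - 0.005)
```

Real fault trees with shared basic events require minimal cut set analysis rather than this naive gate-by-gate roll-up, which is exactly the kind of qualitative evaluation technique the handbook covers.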

11. Probability-1

CERN Document Server

Shiryaev, Albert N

2016-01-01

This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

12. Ignition Probability

Data.gov (United States)

Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

13. Quantum Probabilities as Behavioral Probabilities

Directory of Open Access Journals (Sweden)

Vyacheslav I. Yukalov

2017-03-01

Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

14. Climate signal age effects in boreal tree-rings: Lessons to be learned for paleoclimatic reconstructions

Czech Academy of Sciences Publication Activity Database

Konter, O.; Büntgen, Ulf; Carrer, M.; Timonen, M.; Esper, J.

2016-01-01

Vol. 142, JUN (2016), pp. 164-172 ISSN 0277-3791 Institutional support: RVO:67179843 Keywords: temperature variability * Age trends * Dendroclimatology * Growth-climate relationships * Maximum latewood density * Northern Scandinavia * Tree-ring width Subject RIV: EH - Ecology, Behaviour Impact factor: 4.797, year: 2016

15. Risk Probabilities

DEFF Research Database (Denmark)

Rojas-Nandayapa, Leonardo

Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...

16. ONE PROBABLE MECHANISM OF THE LEARNING-MEMORY DAMAGE BY LEAD: THE CHANGES OF NOS IN HIPPOCAMPUS

Institute of Scientific and Technical Information of China (English)

王静; 赵义; 杨章民; 张进; 李积胜; 司履生; 王一理

2003-01-01

Objective To study the effects of lead on the activity and expression of nitric oxide synthase (NOS) and relationship between the effects of lead on learning-memory and changes of NOS in subfields of hippocampus. Methods Y-maze test was used to study the effects of lead on ability of learning-memory; NADPH-d histochemistry and immunohistochemistry methods were used to investigate the changes of NOS in subfields of hippocampus. Results Compared with the control group, the ability of learning-memory in lead-exposed rats was significantly decreased (P < 0.05); the number of NOS positive neurons in CA1 region and dentate gyrus of lead-exposed rats was significantly decreased (P < 0.05), but no marked changes in CA3 region; the number of nNOS positive neurons in CA1 of lead-exposed rats was also significantly decreased (P < 0.05), but no obvious changes in CA3. Conclusion Lead could damage the ability of learning-memory in rats. Lead could decrease the activity and expression of NOS in hippocampus and had different effects on NOS in different subfields of hippocampus. The changes of NOS in hippocampus induced by lead may be the mechanism of the learning-memory damage by lead.

17. Counterexamples in probability

CERN Document Server

Stoyanov, Jordan M

2013-01-01

While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

18. Probability theory

CERN Document Server

Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V

1997-01-01

This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.

19. Application of the Classification Tree Model in Predicting Learner Dropout Behaviour in Open and Distance Learning

Science.gov (United States)

Yasmin, Dr.

2013-01-01

This paper demonstrates the meaningful application of learning analytics for determining dropout predictors in the context of open and distance learning in a large developing country. The study was conducted at the Directorate of Distance Education at the University of North Bengal, West Bengal, India. This study employed a quantitative research…

20. Spatial prediction of landslides using a hybrid machine learning approach based on Random Subspace and Classification and Regression Trees

Science.gov (United States)

Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu

2018-02-01

A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and the CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, ten important landslide-affecting factors related to geomorphology, geology and geo-environment were considered, namely slope angle, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) is the best compared with other popular landslide models namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART model is a promising method for spatial landslide prediction.
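The RSS + CART combination can be approximated with off-the-shelf components: in the random subspace method each tree is fit on all samples but only a random subset of the features. A hedged sketch (the conditioning-factor data and ensemble settings are invented, and scikit-learn's bagging machinery stands in for the authors' implementation):

```python
# Random Subspace ensemble of CART trees: bootstrap=False keeps all samples,
# max_features=0.5 gives each tree a random half of the feature set.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))            # ten hypothetical conditioning factors
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # synthetic landslide / no-landslide labels

rsscart = BaggingClassifier(
    DecisionTreeClassifier(),             # CART base learner
    n_estimators=50,
    max_features=0.5,
    bootstrap=False,
    random_state=1,
).fit(X, y)

susceptibility = rsscart.predict_proba(X)[:, 1]  # landslide susceptibility scores
```

The per-pixel class-1 probabilities are what get mapped as landslide susceptibility and fed to the ROC/AUC evaluation reported above.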

1. Equilibrium point control of a monkey arm simulator by a fast learning tree structured artificial neural network.

Science.gov (United States)

Dornay, M; Sanger, T D

1993-01-01

A planar 17 muscle model of the monkey's arm based on realistic biomechanical measurements was simulated on a Symbolics Lisp Machine. The simulator implements the equilibrium point hypothesis for the control of arm movements. Given initial and final desired positions, it generates a minimum-jerk desired trajectory of the hand and uses the backdriving algorithm to determine an appropriate sequence of motor commands to the muscles (Flash 1987; Mussa-Ivaldi et al. 1991; Dornay 1991b). These motor commands specify a temporal sequence of stable (attractive) equilibrium positions which lead to the desired hand movement. A strong disadvantage of the simulator is that it has no memory of previous computations. Determining the desired trajectory using the minimum-jerk model is instantaneous, but the laborious backdriving algorithm is slow, and can take up to one hour for some trajectories. The complexity of the required computations makes it a poor model for biological motor control. We propose a computationally simpler and more biologically plausible method for control which achieves the benefits of the backdriving algorithm. A fast learning, tree-structured network (Sanger 1991c) was trained to remember the knowledge obtained by the backdriving algorithm. The neural network learned the nonlinear mapping from a 2-dimensional cartesian planar hand position (x, y) to a 17-dimensional motor command space (u1, ..., u17). Learning 20 training trajectories, each composed of 26 sample points [[x, y], [u1, ..., u17]], took only 20 min on a Sun-4 Sparc workstation. After the learning stage, new, untrained test trajectories as well as the original trajectories of the hand were given to the neural network as input. The network calculated the required motor commands for these movements. The resulting movements were close to the desired ones for both the training and test cases.

2. Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning in Web 3.0

OpenAIRE

Padma, S.; Seshasaayee, Ananthi

2012-01-01

Web 3.0 is an evolving extension of the current web environment. Information in web 3.0 can be collaborated and communicated when queried. Web 3.0 architecture provides an excellent learning experience to the students. Web 3.0 is 3D, media centric and semantic. Web-based learning has been on the rise in recent years. Web 3.0 has intelligent agents as tutors to collect and disseminate the answers to the queries by the students. The completely interactive learner's queries determine the customization of...

3. Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised

Science.gov (United States)

In this article, we propose several new approaches for post processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by post processing the rules with ...

4. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

International Nuclear Information System (INIS)

Ekonomou, L; Karampelas, P; Vita, V; Chatzarakis, G E

2011-01-01

One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can prevent or even reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is addressed for evaluating the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application in operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reducing operational costs and better continuity of service.
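The Q-learning component named above follows the standard temporal-difference update; a tabular analogue (the paper embeds this in an ANN, and the toy two-state problem below is invented) looks like:

```python
# Tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step toward the bootstrapped target."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))                         # 2 states x 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)   # Q[0, 1] moves from 0 toward 1.0
```

Replacing the table with a neural network, as the authors do, lets the same update generalize across continuous line and arrester parameters instead of enumerating discrete states.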

5. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

Science.gov (United States)

Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.

2011-04-01

One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can prevent or even reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is addressed for evaluating the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application in operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reducing operational costs and better continuity of service.

6. Learning from history, predicting the future: the UK Dutch elm disease outbreak in relation to contemporary tree disease threats

Science.gov (United States)

Potter, Clive; Harwood, Tom; Knight, Jon; Tomlinson, Isobel

2011-01-01

Expanding international trade and increased transportation are heavily implicated in the growing threat posed by invasive pathogens to biodiversity and landscapes. With trees and woodland in the UK now facing threats from a number of disease systems, this paper looks to historical experience with the Dutch elm disease (DED) epidemic of the 1970s to see what can be learned about an outbreak and attempts to prevent, manage and control it. The paper draws on an interdisciplinary investigation into the history, biology and policy of the epidemic. It presents a reconstruction based on a spatial modelling exercise underpinned by archival research and interviews with individuals involved in the attempted management of the epidemic at the time. The paper explores what, if anything, might have been done to contain the outbreak and discusses the wider lessons for plant protection. Reading across to present-day biosecurity concerns, the paper looks at the current outbreak of ramorum blight in the UK and presents an analysis of the unfolding epidemiology and policy of this more recent, and potentially very serious, disease outbreak. The paper concludes by reflecting on the continuing contemporary relevance of the DED experience at an important juncture in the evolution of plant protection policy. PMID:21624917

7. Two Trees: Migrating Fault Trees to Decision Trees for Real Time Fault Detection on International Space Station

Science.gov (United States)

Lee, Charles; Alena, Richard L.; Robinson, Peter

2004-01-01

Starting from an ISS fault-tree example, we present a method for migrating fault trees to decision trees. The method shows that visualizing the root cause of a fault becomes easier and that manipulating the tree becomes more programmatic via available decision-tree programs. The decision-tree visualization presents the diagnosis in a straightforward, easily understood format. For real-time fault diagnosis on the ISS, the status of the systems can be shown by mining the signals through the trees and observing where traversal stops. A further advantage of decision trees is that they can learn fault patterns and predict future faults from historic data. This learning is not limited to static data sets: by accumulating real-time data, the decision trees can acquire and store fault patterns and recognize them when they recur.
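The signal-mining idea in this abstract can be sketched as a toy diagnostic tree: a learned decision tree, encoded here as nested tuples, routes a vector of sensor readings to a leaf diagnosis much as a fault-tree path maps symptoms to a root cause. The sensor names, thresholds, and fault labels below are hypothetical, not actual ISS telemetry or the authors' code.

```python
# A decision tree as nested tuples: (feature, threshold, below_branch, above_branch).
# Leaves are strings naming a diagnosis.
fault_tree = (
    "pressure", 30.0,
    ("temperature", 80.0, "nominal", "coolant fault"),   # low pressure side
    ("flow", 5.0, "pump fault", "valve fault"),          # high pressure side
)

def diagnose(tree, reading):
    """Walk the tree with a dict of sensor readings until a leaf is reached."""
    if isinstance(tree, str):            # leaf: a diagnosis label
        return tree
    feature, threshold, below, above = tree
    branch = below if reading[feature] < threshold else above
    return diagnose(branch, reading)

print(diagnose(fault_tree, {"pressure": 20.0, "temperature": 90.0, "flow": 0.0}))
# -> coolant fault
```

In a learned tree the thresholds and structure would come from historic fault data; the traversal itself is what the abstract calls "mining the signals through the trees".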

8. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

Science.gov (United States)

Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

2014-12-01

Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps in capturing the higher-order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

9. System Learning via Exploratory Data Analysis: Seeing Both the Forest and the Trees

Science.gov (United States)

Habash Krause, L.

2014-12-01

As the amount of observational Earth and Space Science data grows, so does the need for learning and employing data analysis techniques that can extract meaningful information from those data. Space-based and ground-based data sources from all over the world are used to inform Earth and Space environment models. However, with such a large amount of data comes a need to organize those data in a way such that trends within the data are easily discernible. This can be tricky due to the interaction between physical processes that leads to partial correlation of variables or multiple interacting sources of causality. With the suite of Exploratory Data Analysis (EDA) data mining codes available at MSFC, we have the capability to analyze large, complex data sets and quantitatively distinguish fundamentally independent effects from consequential or derived effects. We have used these techniques to examine the accuracy of ionospheric climate models with respect to trends in ionospheric parameters and space weather effects. In particular, these codes have been used to 1) provide summary "at-a-glance" surveys of large data sets through categorization and/or evolution over time to identify trends, distribution shapes, and outliers; 2) discern the underlying "latent" variables which share common sources of causality; and 3) establish a new set of basis vectors by computing Empirical Orthogonal Functions (EOFs) which represent the maximum amount of variance for each principal component. Some of these techniques are easily implemented in the classroom using standard MATLAB functions; some of the more advanced applications require the statistical toolbox; and applications to unique situations require more sophisticated levels of programming. This paper will present an overview of the range of tools available and how they might be used for a variety of time series Earth and Space Science data sets. Examples of feature recognition from both 1D and 2D (e.g. imagery) time series data

10. Probability Aggregates in Probability Answer Set Programming

OpenAIRE

Saad, Emad

2013-01-01

Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. {\em expected values}, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

11. Scaling Qualitative Probability

OpenAIRE

Burgin, Mark

2017-01-01

There are different approaches to qualitative probability, including subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

12. Tree Transduction Tools for Cdec

Directory of Open Access Journals (Sweden)

Austin Matthews

2014-09-01

Full Text Available We describe a collection of open source tools for learning tree-to-string and tree-to-tree transducers and the extensions to the cdec decoder that enable translation with these. Our modular, easy-to-extend tools extract rules from trees or forests aligned to strings and trees subject to different structural constraints. A fast, multithreaded implementation of the Cohn and Blunsom (2009) model for extracting compact tree-to-string rules is also included. The implementation of the tree composition algorithm used by cdec is described, and translation quality and decoding time results are presented. Our experimental results add to the body of evidence suggesting that tree transducers are a compelling option for translation, particularly when decoding speed and translation model size are important.

13. On Probability Leakage

OpenAIRE

Briggs, William M.

2012-01-01

The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events $y$, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.
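A concrete way to see leakage, sketched under illustrative assumptions (the data and model below are hypothetical, not from the paper): fit a normal regression-style model M to outcomes that the evidence E says are nonnegative (e.g. waiting times). The fitted normal assigns positive probability to the impossible event y < 0, and that mass is the leakage.

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """CDF of a normal distribution, built from the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Nonnegative outcomes; the evidence E says y >= 0 is certain.
data = [0.2, 0.5, 0.7, 1.1, 1.3, 2.0, 2.4, 3.1]
mu = sum(data) / len(data)
sigma = math.sqrt(sum((y - mu) ** 2 for y in data) / len(data))

# Leakage: probability the fitted normal model M places on y < 0,
# an event that is impossible given E.
leakage = normal_cdf(0.0, mu, sigma)
print(f"leakage = {leakage:.4f}")  # strictly positive, so M leaks probability
```

Because the leaked mass can never be observed, empirical calibration of M fails on exactly that region, which is the point the abstract makes.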

14. DeepSAT: A Deep Learning Approach to Tree-cover Delineation in 1-m NAIP Imagery for the Continental United States

Science.gov (United States)

Ganguly, S.; Basu, S.; Nemani, R. R.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

2016-12-01

High resolution tree cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state-of-the-art in deriving very high resolution (VHR) tree cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g. continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1-m. This data comes as image tiles (a quarter million image scenes with 60 million pixels) and has a total size of 65 terabytes for a single acquisition. Features extracted from the entire dataset would amount to 8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. Using the NASA Earth Exchange (NEX) initiative, we have developed an end-to-end architecture by integrating a segmentation module based on Statistical Region Merging, a classification algorithm using a Deep Belief Network, and a structured prediction algorithm using Conditional Random Fields to integrate the results from the segmentation and classification modules to create per-pixel class labels. The training process is scaled up using the power of GPUs and the prediction is scaled to a quarter million NAIP tiles spanning the whole of the Continental United States using the NEX HPC supercomputing cluster. An initial pilot over the

15. DeepSAT: A Deep Learning Approach to Tree-Cover Delineation in 1-m NAIP Imagery for the Continental United States

Science.gov (United States)

Ganguly, Sangram; Basu, Saikat; Nemani, Ramakrishna R.; Mukhopadhyay, Supratik; Michaelis, Andrew; Votava, Petr

2016-01-01

High resolution tree cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state-of-the-art in deriving very high resolution (VHR) tree cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g. continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1-m. This data comes as image tiles (a quarter million image scenes with 60 million pixels) and has a total size of 65 terabytes for a single acquisition. Features extracted from the entire dataset would amount to 8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. Using the NASA Earth Exchange (NEX) initiative, we have developed an end-to-end architecture by integrating a segmentation module based on Statistical Region Merging, a classification algorithm using a Deep Belief Network, and a structured prediction algorithm using Conditional Random Fields to integrate the results from the segmentation and classification modules to create per-pixel class labels. The training process is scaled up using the power of GPUs and the prediction is scaled to a quarter million NAIP tiles spanning the whole of the Continental United States using the NEX HPC supercomputing cluster. An initial pilot over the

16. Identifying the rooted species tree from the distribution of unrooted gene trees under the coalescent.

Science.gov (United States)

Allman, Elizabeth S; Degnan, James H; Rhodes, John A

2011-06-01

Gene trees are evolutionary trees representing the ancestry of genes sampled from multiple populations. Species trees represent populations of individuals (each with many genes) splitting into new populations or species. The coalescent process, which models the ancestry of gene copies within populations, is often used to model the probability distribution of gene trees given a fixed species tree. This multispecies coalescent model provides a framework for phylogeneticists to infer species trees from gene trees using maximum likelihood or Bayesian approaches. Because the coalescent models a branching process over time, all trees are typically assumed to be rooted in this setting. Often, however, gene trees inferred by traditional phylogenetic methods are unrooted. We investigate probabilities of unrooted gene trees under the multispecies coalescent model. We show that when there are four species with one gene sampled per species, the distribution of unrooted gene tree topologies identifies the unrooted species tree topology and some, but not all, information in the species tree edges (branch lengths). The location of the root on the species tree is not identifiable in this situation. However, for five or more species with one gene sampled per species, we show that the distribution of unrooted gene tree topologies identifies the rooted species tree topology and all its internal branch lengths. The length of any pendant branch leading to a leaf of the species tree is also identifiable for any species from which more than one gene is sampled.
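For intuition about gene-tree probabilities under the coalescent, the three-taxon rooted case has a well-known closed form (a standard multispecies-coalescent result, not code from this paper): for a species tree ((A,B),C) with internal branch length T in coalescent units, the concordant gene-tree topology has probability 1 - (2/3)e^(-T) and each of the two discordant topologies has probability (1/3)e^(-T).

```python
import math

def gene_tree_topology_probs(T: float):
    """Probabilities of the three rooted gene-tree topologies for a
    3-taxon species tree ((A,B),C) with internal branch length T
    (in coalescent units): (concordant, discordant_1, discordant_2)."""
    mismatch = math.exp(-T) / 3.0        # each discordant topology
    match = 1.0 - 2.0 * mismatch         # topology matching the species tree
    return match, mismatch, mismatch

for T in (0.1, 1.0, 3.0):
    match, d1, _ = gene_tree_topology_probs(T)
    print(f"T={T}: concordant={match:.3f}, each discordant={d1:.3f}")
# longer internal branches make the concordant topology dominate
```

As T approaches 0 all three topologies approach probability 1/3, which is why short internal branches make species-tree inference from gene trees hard.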

17. Flowering Trees

Indian Academy of Sciences (India)

Flowering Trees. Boswellia serrata Roxb. ex Colebr. (Indian Frankincense tree) of Burseraceae is a large-sized deciduous tree that is native to India. Bark is thin, greenish-ash-coloured that exfoliates into smooth papery flakes. Stem exudes pinkish resin ... Fruit is a three-valved capsule. A green gum-resin exudes from the ...

18. Flowering Trees

Indian Academy of Sciences (India)

IAS Admin

Flowering Trees. Ailanthus excelsa Roxb. (INDIAN TREE OF. HEAVEN) of Simaroubaceae is a lofty tree with large pinnately compound alternate leaves, which are ... inflorescences, unisexual and greenish-yellow. Fruits are winged, wings many-nerved. Wood is used in making match sticks. 1. Male flower; 2. Female flower.

19. Flowering Trees

Indian Academy of Sciences (India)

Flowering Trees. Gyrocarpus americanus Jacq. (Helicopter Tree) of Hernandiaceae is a moderate size deciduous tree that grows to about 12 m in height with a smooth, shining, greenish-white bark. The leaves are ovate, rarely irregularly ... flowers which are unpleasant smelling. Fruit is a woody nut with two long thin wings.

20. Flowering Trees

Indian Academy of Sciences (India)

Volume 8 Issue 8 August 2003 pp 112-112 Flowering Trees: Zizyphus jujuba Lam. of Rhamnaceae. Volume 8 Issue 9 September 2003 pp 97-97 Flowering Trees: Moringa oleifera. Volume 8 Issue 10 October 2003 pp 100-100 Flowering Trees.

1. Probability 1/e

Science.gov (United States)

Koo, Reginald; Jones, Martin L.

2011-01-01

Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
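The abstract does not name its three problems, but a classic example of an event with probability 1/e is the hat-check (derangement) problem: the probability that a random permutation of n items leaves no item in its original position tends to 1/e as n grows. A quick simulation (an illustrative sketch, not taken from the article):

```python
import math
import random

def derangement_fraction(n: int, trials: int) -> float:
    """Estimate the probability that a random permutation of n items
    has no fixed points (i.e. is a derangement)."""
    hits = 0
    items = list(range(n))
    for _ in range(trials):
        perm = items[:]
        random.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            hits += 1
    return hits / trials

random.seed(0)
print(derangement_fraction(10, 100_000), 1 / math.e)
# the estimate lands close to 1/e ≈ 0.3679
```

Exactly, the derangement probability is sum_{k=0..n} (-1)^k / k!, the truncated series for e^(-1), which is why 1/e appears.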

2. Probability an introduction

CERN Document Server

Goldberg, Samuel

1960-01-01

Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

3. Exploiting machine learning algorithms for tree species classification in a semiarid woodland using RapidEye image

CSIR Research Space (South Africa)

Adelabu, S

2013-11-01

Full Text Available in semiarid environments. In this study, we examined the suitability of 5-band RapidEye satellite data for the classification of five tree species in mopane woodland of Botswana using machine learning algorithms with limited training samples. We performed...

4. Mathematical foundations of event trees

International Nuclear Information System (INIS)

Papazoglou, Ioannis A.

1998-01-01

A mathematical foundation from first principles of event trees is presented. The main objective of this formulation is to offer a formal basis for developing automated computer-assisted construction techniques for event trees. The mathematical theory of event trees is based on the correspondence between the paths of the tree and the elements of the outcome space of a joint event. The concept of a basic cylinder set is introduced to describe joint event outcomes conditional on specific outcomes of basic events or unconditional on the outcome of basic events. The concept of outcome space partition is used to describe the minimum amount of information intended to be preserved by the event tree representation. These concepts form the basis for an algorithm for systematic search for and generation of the most compact (reduced) form of an event tree consistent with the minimum amount of information the tree should preserve. This mathematical foundation allows for the development of techniques for automated generation of event trees corresponding to joint events which are formally described through other types of graphical models. Such a technique has been developed for complex systems described by functional blocks and is reported elsewhere. On the quantification of event trees, a formal definition of a probability space corresponding to the event tree outcomes is provided. Finally, a short discussion is offered on the relationship of the presented mathematical theory with the more general use of event trees in the reliability analysis of dynamic systems.

5. Quantum probability measures and tomographic probability densities

NARCIS (Netherlands)

Amosov, GG; Man'ko, [No Value

2004-01-01

Using a simple relation of the Dirac delta-function to generalized the theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach with the description of quantum states is discussed. The quantum state tomogram expressed in terms of the

6. Boosted decision trees and applications

International Nuclear Information System (INIS)

Coadou, Y.

2013-01-01

Decision trees are a machine learning technique used more and more commonly in high energy physics, having long been widely used in the social sciences. After introducing the concepts of decision trees, this article focuses on their application in particle physics. (authors)
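The core boosting recipe behind boosted decision trees can be shown from scratch with the simplest possible trees, decision stumps, on a toy one-dimensional signal/background split. This is a minimal AdaBoost sketch with made-up data, not the authors' implementation or a production HEP tool:

```python
import math

def stump_predict(threshold: float, polarity: int, x: float) -> int:
    """A depth-1 tree: label +polarity at or above the threshold, else -polarity."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost: repeatedly fit the best-weighted stump and re-weight mistakes."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # exhaustively pick the stump with the lowest weighted error
        best = None
        for threshold in xs:
            for polarity in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(threshold, polarity, x) != y)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = max(err, 1e-10)                       # avoid log(1/0)
        alpha = 0.5 * math.log((1 - err) / err)     # stump weight
        ensemble.append((alpha, threshold, polarity))
        # boost the weights of misclassified points, then renormalize
        w = [wi * math.exp(-alpha * y * stump_predict(threshold, polarity, x))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x: float) -> int:
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# toy "event selection": background (-1) below 0.5, signal (+1) above
xs = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])  # reproduces the labels
```

Real analyses use deeper trees and many more features, but the weighted-vote-of-weak-trees structure is the same.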

7. Tree compression with top trees

DEFF Research Database (Denmark)

Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

2013-01-01

We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

8. Tree compression with top trees

DEFF Research Database (Denmark)

Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

2015-01-01

We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

9. Risk estimation using probability machines

Science.gov (United States)

2014-01-01

Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306

10. A Universal Phylogenetic Tree.

Science.gov (United States)

Offner, Susan

2001-01-01

Presents a universal phylogenetic tree suitable for use in high school and college-level biology classrooms. Illustrates the antiquity of life and that all life is related, even if it dates back 3.5 billion years. Reflects important evolutionary relationships and provides an exciting way to learn about the history of life. (SAH)

11. Integrating cyber attacks within fault trees

International Nuclear Information System (INIS)

Nai Fovino, Igor; Masera, Marcelo; De Cian, Alessio

2009-01-01

In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
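The gate arithmetic behind such system fault-probability calculations can be sketched briefly. Assuming independent basic events (an illustrative simplification, not the paper's formal model), an AND gate multiplies child probabilities and an OR gate fails unless every child survives:

```python
# A fault/attack tree as nested tuples: ("AND" | "OR", child, child, ...),
# with string leaves naming basic events (random failures or attack steps).
def fault_probability(tree, basic):
    if isinstance(tree, str):                      # leaf: look up P(basic event)
        return basic[tree]
    gate, *children = tree
    probs = [fault_probability(c, basic) for c in children]
    if gate == "AND":
        out = 1.0
        for p in probs:
            out *= p                               # all children must occur
        return out
    if gate == "OR":
        out = 1.0
        for p in probs:
            out *= (1.0 - p)                       # fails unless all children absent
        return 1.0 - out
    raise ValueError(f"unknown gate {gate!r}")

# System fails if the deliberate attack succeeds AND either of two
# random component failures occurs (hypothetical numbers).
tree = ("AND", "attack", ("OR", "pump_fails", "valve_fails"))
basic = {"attack": 0.1, "pump_fails": 0.05, "valve_fails": 0.02}
print(fault_probability(tree, basic))  # ≈ 0.0069
```

Mixing an "attack" leaf with random-failure leaves in one tree mirrors the paper's point: malicious acts and random failures can be combined in a single quantitative model.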

12. Integrating cyber attacks within fault trees

Energy Technology Data Exchange (ETDEWEB)

Nai Fovino, Igor [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy)], E-mail: igor.nai@jrc.it; Masera, Marcelo [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy); De Cian, Alessio [Department of Electrical Engineering, University di Genova, Genoa (Italy)

2009-09-15

In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.

13. Toward a generalized probability theory: conditional probabilities

International Nuclear Information System (INIS)

Cassinelli, G.

1979-01-01

The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

14. Dead Wolves, Dead Birds, and Dead Trees: Catalysts for Transformative Learning in the Making of Scientist-Environmentalists

Science.gov (United States)

Walter, Pierre

2013-01-01

This historical study identifies catalysts for transformative learning in the lives of three scientist-environmentalists important to the 20th-century environmental movement: Aldo Leopold, Rachel Carson, and David Suzuki. Following a brief review of theoretical perspectives on transformative learning, the article argues that transformative…

15. Probably not future prediction using probability and statistical inference

CERN Document Server

Dworsky, Lawrence N

2008-01-01

An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...

16. Understanding the challenges of municipal tree planting

Science.gov (United States)

E.G. McPherson; R. Young

2010-01-01

Nine of the twelve largest cities in the U.S. have mayoral tree planting initiatives (TPIs), with pledges to plant nearly 20 million trees. Although executive-level support for trees has never been this widespread, many wonder if this support will endure as administrations change and budgets tighten. In an effort to share lessons learned from successes and setbacks, a...

17. Probability theory a foundational course

CERN Document Server

Pakshirajan, R P

2013-01-01

This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the ground work to later claim the existence of stochastic processes with prescribed finite dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, Erdos-Kac invariance principle, functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a text book for students pursuing graduate programs in Mathematics and or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

18. Interactive design of probability density functions for shape grammars

KAUST Repository

Dang, Minh; Lienhard, Stefan; Ceylan, Duygu; Neubert, Boris; Wonka, Peter; Pauly, Mark

2015-01-01

A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density

19. Philosophical theories of probability

CERN Document Server

Gillies, Donald

2000-01-01

The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.

20. Non-Archimedean Probability

NARCIS (Netherlands)

Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned

1. Interpretations of probability

CERN Document Server

Khrennikov, Andrei

2009-01-01

This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

2. Flowering Trees

Indian Academy of Sciences (India)

medium-sized handsome tree with a straight bole that branches at the top. Leaves are once pinnate, with two to three pairs of leaflets. Young parts of the tree are velvety. Inflorescence is a branched raceme borne at the branch ends. Flowers are large, white, attractive, and fragrant. Corolla is funnel-shaped. Fruit is an ...

3. Flowering Trees

Indian Academy of Sciences (India)

Cassia siamia Lamk. (Siamese tree senna) of Caesalpiniaceae is a small to medium-sized handsome tree. Leaves are alternate, pinnately compound and glandular, up to 18 cm long with 8–12 pairs of leaflets. Inflorescence is axillary or terminal and branched. Flowering lasts for a long period from March to February. Fruit is ...

4. Flowering Trees

Indian Academy of Sciences (India)

Flowering Trees. Cerbera manghas L. (SEA MANGO) of Apocynaceae is a medium-sized evergreen coastal tree with milky latex. The bark is grey-brown, thick and ... Fruit is large (5–10 cm long), oval, containing two flattened seeds, and resembles a mango, hence the name Mangas or Manghas. Leaves and fruits contain ...

5. Flowering Trees

Indian Academy of Sciences (India)

Flowering Trees. Gliricidia sepium (Jacq.) Kunth ex Walp. (Quickstick) of Fabaceae is a small deciduous tree with pinnately compound leaves. Flowers are produced in large numbers in early summer on terminal racemes. They are attractive, pinkish-white and typically like bean flowers. Fruit is a few-seeded flat pod.

6. Flowering Trees

Indian Academy of Sciences (India)

Flowering Trees. Acrocarpus fraxinifolius Wight & Arn. (PINK CEDAR, AUSTRALIAN ASH) of Caesalpiniaceae is a lofty unarmed deciduous native tree that attains a height of 30–60 m with buttresses. Bark is thin and light grey. Leaves are compound and bright red when young. Flowers in dense, erect, axillary racemes.

7. Drawing Trees

DEFF Research Database (Denmark)

Halkjær From, Andreas; Schlichtkrull, Anders; Villadsen, Jørgen

2018-01-01

We formally prove in Isabelle/HOL two properties of an algorithm for laying out trees visually. The first property states that removing layout annotations recovers the original tree. The second property states that nodes are placed at least a unit of distance apart. We have yet to formalize three...

8. Flowering Trees

Indian Academy of Sciences (India)

Srimath

Grevillea robusta A. Cunn. ex R. Br. (Silver Oak) of Proteaceae is a daintily lacy ornamental tree while young, growing into a mighty tree (45 m). Young shoots are silvery grey and the leaves are fern-like. Flowers are golden-yellow in one-sided racemes (10 cm). Fruit is a boat-shaped, woody follicle.

9. The Role of Cooperative Learning Type Team Assisted Individualization to Improve the Students' Mathematics Communication Ability in the Subject of Probability Theory

Science.gov (United States)

Tinungki, Georgina Maria

2015-01-01

The importance of learning mathematics cannot be separated from its role in all aspects of life. Communicating ideas by using mathematical language is even more practical, systematic, and efficient. In order to overcome the difficulties of students who have an insufficient understanding of mathematics material, good communication should be built in…

10. Tree manipulation experiment

Science.gov (United States)

Nishina, K.; Takenaka, C.; Ishizuka, S.; Hashimoto, S.; Yagai, Y.

2012-12-01

Some forest operations, such as thinning and harvesting, could cause changes in N cycling and N2O emission from soils, since thinning and harvesting are accompanied by changes in aboveground environments such as an increase of slash falling and solar radiation on the forest floor. However, considerable uncertainty exists in the effects of thinning and harvesting on N2O fluxes regarding changes in belowground environments caused by cutting trees. To focus on the effect of changes in belowground environments on the N2O emissions from soils, we conducted a tree manipulation experiment in a Japanese cedar (Cryptomeria japonica) stand without soil compaction and slash falling near the chambers, and measured N2O flux at 50 cm and 150 cm distances from the tree trunk (stump) before and after cutting. We targeted 5 trees for the manipulation and established the measurement chambers in the 4 directions around each targeted tree relative to the upper slope (upper, left, right, lower positions). We evaluated the effect of logging on the emission by using a hierarchical Bayesian (HB) model. The HB model can evaluate the variability in observed data and the uncertainties in the estimation with various probability distributions. Moreover, the HB model can easily accommodate non-linear relationships among the N2O emissions and the environmental factors, and explicitly take non-independent data (nested structure of data) into account in the estimation by using random effects in the model. Our results showed that tree cutting stimulated N2O emission from soils, and also that the increase of N2O flux depended on the distance from the trunk (stump): the increase of N2O flux at 50 cm from the trunk (stump) was greater than that at 150 cm from the trunk. The posterior simulation of the HB model indicated that the stimulation of N2O emission by tree cutting could reach up to 200 cm in our experimental plot. By tree cutting, the estimated N2O emission at 0-40 cm from the trunk doubled
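
The partial pooling that a hierarchical model performs on nested data like this can be illustrated with a minimal sketch. This is not the authors' model: it is a toy normal-normal setup with known variances and invented numbers (5 hypothetical trees, 4 chambers each), where each tree-level mean is shrunk toward the grand mean in proportion to the relative precisions.

```python
import random
random.seed(0)

# Hypothetical setup: 5 trees, 4 chambers each. True per-tree N2O means are
# drawn around a grand mean; chamber measurements add observation noise.
GRAND_MEAN, TREE_SD, OBS_SD, N_CHAMBERS = 10.0, 2.0, 3.0, 4

tree_means = [random.gauss(GRAND_MEAN, TREE_SD) for _ in range(5)]
data = [[random.gauss(m, OBS_SD) for _ in range(N_CHAMBERS)] for m in tree_means]

# Normal-normal model with known variances: the posterior mean of each
# tree's effect shrinks that tree's sample mean toward the grand mean,
# weighted by the two precisions (1/variance).
sample_means = [sum(obs) / len(obs) for obs in data]
prec_prior = 1.0 / TREE_SD ** 2             # precision of the tree-level prior
prec_data = N_CHAMBERS / OBS_SD ** 2        # precision of a tree's sample mean
w = prec_data / (prec_data + prec_prior)    # shrinkage weight in (0, 1)
posterior_means = [w * sm + (1 - w) * GRAND_MEAN for sm in sample_means]

for sm, pm in zip(sample_means, posterior_means):
    # Each posterior mean lies between the raw sample mean and the grand mean.
    assert min(sm, GRAND_MEAN) <= pm <= max(sm, GRAND_MEAN)
print([round(pm, 2) for pm in posterior_means])
```

The same shrinkage idea is what lets an HB model borrow strength across chambers nested within trees instead of estimating each unit in isolation.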

11. Learning machines and sleeping brains: Automatic sleep stage classification using decision-tree multi-class support vector machines.

Science.gov (United States)

Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

2015-07-30

Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is a need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique, and a wide range of time and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection, and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88 respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92 respectively. The DSVM framework outperforms classification with more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
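
The dendrogram idea above (binary decisions over groups of classes, arranged in a tree) can be sketched without any SVM machinery. This toy version is not the paper's DSVM: the class groups, the two-feature points, and the nearest-group-centroid rule standing in for each per-node SVM are all invented for illustration.

```python
# Toy dendrogram classifier: each internal node separates two groups of
# classes; a nearest-group-centroid rule stands in for the per-node SVM.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical 2-D features for four "sleep stages" A-D.
train = {"A": [(0.0, 0.1), (0.2, 0.0)], "B": [(1.0, 1.1), (1.2, 0.9)],
         "C": [(4.0, 4.2), (4.1, 3.9)], "D": [(6.0, 6.1), (5.9, 6.2)]}

# Dendrogram: first separate {A, B} from {C, D}, then split each pair.
tree = (("A", "B"), ("C", "D"))

def node_classify(x, left_classes, right_classes):
    left_pts = [p for c in left_classes for p in train[c]]
    right_pts = [p for c in right_classes for p in train[c]]
    if dist2(x, centroid(left_pts)) < dist2(x, centroid(right_pts)):
        return left_classes
    return right_classes

def predict(x):
    # Root node: choose one pair, then split that pair into single classes.
    pair = node_classify(x, [tree[0][0], tree[0][1]], [tree[1][0], tree[1][1]])
    return node_classify(x, [pair[0]], [pair[1]])[0]

print(predict((1.1, 1.0)))  # a point near B's training examples
```

Replacing each node's centroid rule with a trained binary SVM recovers the structure of a dendrogram-based multi-class classifier.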

12. Dysphonic Voice Pattern Analysis of Patients in Parkinson’s Disease Using Minimum Interclass Probability Risk Feature Selection and Bagging Ensemble Learning Methods

Directory of Open Access Journals (Sweden)

Yunfeng Wu

2017-01-01

Analysis of quantified voice patterns is useful in the detection and assessment of dysphonia and related phonation disorders. In this paper, we first study the linear correlations between 22 voice parameters of fundamental frequency variability, amplitude variations, and nonlinear measures. The highly correlated vocal parameters are combined by using the linear discriminant analysis method. Based on the probability density functions estimated by the Parzen-window technique, we propose an interclass probability risk (ICPR) method to select the vocal parameters with small ICPR values as dominant features, and compare it with the modified Kullback-Leibler divergence (MKLD) feature selection approach. The experimental results show that the generalized logistic regression analysis (GLRA), support vector machine (SVM), and Bagging ensemble algorithms, when input with the ICPR features, can provide better classification results than the same classifiers with the MKLD-selected features. The SVM is much better at distinguishing normal vocal patterns, with a specificity of 0.8542. Among the three classification methods, the Bagging ensemble algorithm with ICPR features can identify 90.77% of vocal patterns, with the highest sensitivity of 0.9796 and the largest area under the receiver operating characteristic curve, 0.9558. The classification results demonstrate the effectiveness of our feature selection and pattern analysis methods for dysphonic voice detection and measurement.
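
The Parzen-window step can be sketched in a few lines. The code below is an illustration, not the paper's method: the feature values are invented, and the "risk" shown is a simplified class-density overlap, a crude stand-in for the paper's ICPR criterion (smaller overlap means the feature separates the classes better).

```python
import math

def parzen_density(x, samples, h):
    """Gaussian-kernel Parzen-window estimate of a 1-D density at x."""
    n = len(samples)
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - s) / h) for s in samples) / (n * h)

# Hypothetical values of one voice feature for two classes.
normal = [0.9, 1.0, 1.1, 1.05, 0.95]
dysphonic = [1.9, 2.0, 2.1, 2.05, 1.95]

def overlap_risk(a, b, h=0.2, lo=0.0, hi=3.0, steps=300):
    """Numerically integrate min of the two class densities on a grid."""
    dx = (hi - lo) / steps
    return sum(min(parzen_density(lo + i * dx, a, h),
                   parzen_density(lo + i * dx, b, h)) * dx for i in range(steps))

risk = overlap_risk(normal, dysphonic)
print(round(risk, 3))
```

Features would then be ranked by such a risk value and the smallest-risk ones kept as dominant features.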

13. Phylogenetic trees

OpenAIRE

Baños, Hector; Bushek, Nathaniel; Davidson, Ruth; Gross, Elizabeth; Harris, Pamela E.; Krone, Robert; Long, Colby; Stewart, Allen; Walker, Robert

2016-01-01

We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.

14. Spatial Probability Cuing and Right Hemisphere Damage

Science.gov (United States)

Shaqiri, Albulena; Anderson, Britt

2012-01-01

In this experiment we studied statistical learning, inter-trial priming, and visual attention. We assessed healthy controls and right brain damaged (RBD) patients with and without neglect, on a simple visual discrimination task designed to measure priming effects and probability learning. All participants showed a preserved priming effect for item…

15. Multivariate analysis of flow cytometric data using decision trees.

Science.gov (United States)

Simon, Svenja; Guthke, Reinhard; Kamradt, Thomas; Frey, Oliver

2012-01-01

Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host side, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents now allow the simultaneous assessment of multiple markers on a single-cell level, generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called "induction of decision trees" on flow cytometric data. In order to examine whether the production of a certain cytokine is dependent on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees' quality, we used stratified fivefold cross-validation and chose the "best" tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was dependent not only on the co-expression of others per se, but also on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high-dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets.
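
Induction of decision trees itself can be shown in miniature. The sketch below is a generic ID3-style learner (entropy-based split selection on binary features), not the exact algorithm or data of the study; the cytokine names and the co-expression rule are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def build(rows, labels, features):
    """Recursively induce a tree: stop when pure or out of features."""
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    def gain(f):
        split = {}
        for r, y in zip(rows, labels):
            split.setdefault(r[f], []).append(y)
        return entropy(labels) - sum(len(ys) / len(labels) * entropy(ys)
                                     for ys in split.values())
    best = max(features, key=gain)  # split on the highest-information-gain feature
    node = {"feature": best, "children": {}}
    for v in {r[best] for r in rows}:
        sub = [(r, y) for r, y in zip(rows, labels) if r[best] == v]
        node["children"][v] = build([r for r, _ in sub], [y for _, y in sub],
                                    [f for f in features if f != best])
    return node

def predict(node, row):
    while isinstance(node, dict):
        node = node["children"][row[node["feature"]]]
    return node

# Hypothetical rule: "IFNg" is produced only when IL2 and TNF are both on.
rows = [{"IL2": 1, "TNF": 1}, {"IL2": 1, "TNF": 0},
        {"IL2": 0, "TNF": 1}, {"IL2": 0, "TNF": 0}]
labels = [1, 0, 0, 0]
tree = build(rows, labels, ["IL2", "TNF"])
print(predict(tree, {"IL2": 1, "TNF": 1}))
```

The induced tree makes the dependency explicit: one split per cytokine on the path to the positive leaf.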

16. Foundations of probability

International Nuclear Information System (INIS)

Fraassen, B.C. van

1979-01-01

The interpretation of probabilities in physical theories is considered, whether quantum or classical. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are classical (Kolmogoroff) probabilities, formally speaking; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of an observable Q calculated with probability measures P(μ, Q) for states μ; hence, of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)

17. The quantum probability calculus

International Nuclear Information System (INIS)

Jauch, J.M.

1976-01-01

The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)

18. Choice Probability Generating Functions

DEFF Research Database (Denmark)

Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
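
The "gradient gives the choice probabilities" property can be checked concretely in the best-known special case. Assuming the multinomial logit ARUM (not a construction from this paper), a CPGF is the log-sum-exp of the utilities, and its partial derivatives are the logit choice probabilities; the sketch verifies this by finite differences.

```python
import math

def G(u):
    """Log-sum-exp: a CPGF for the multinomial logit model."""
    return math.log(sum(math.exp(x) for x in u))

def choice_probs(u):
    """Logit choice probabilities exp(u_i) / sum_j exp(u_j)."""
    z = sum(math.exp(x) for x in u)
    return [math.exp(x) / z for x in u]

u = [1.0, 0.5, -0.2]      # hypothetical systematic utilities of 3 alternatives
probs = choice_probs(u)
eps = 1e-6
for i in range(len(u)):
    bumped = u[:]
    bumped[i] += eps
    numeric = (G(bumped) - G(u)) / eps      # finite-difference dG/du_i
    assert abs(numeric - probs[i]) < 1e-4   # gradient matches choice probability
print([round(p, 4) for p in probs])
```

Other CPGFs (e.g. nested or cross-nested logit generators) yield their choice probabilities by the same gradient rule.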

19. Electron Tree

DEFF Research Database (Denmark)

Appelt, Ane L; Rønde, Heidi S

2013-01-01

The photo shows a close-up of a Lichtenberg figure – popularly called an “electron tree” – produced in a cylinder of polymethyl methacrylate (PMMA). Electron trees are created by irradiating a suitable insulating material, in this case PMMA, with an intense high-energy electron beam. Upon discharge, during dielectric breakdown in the material, the electrons generate branching chains of fractures on leaving the PMMA, producing the tree pattern seen. To be able to create electron trees with a clinical linear accelerator, one needs to access the primary electron beam used for photon treatments. We appropriated a linac that was being decommissioned in our department and dismantled the head to circumvent the target and ion chambers. This is one of 24 electron trees produced before we had to stop the fun and allow the rest of the accelerator to be disassembled.

20. Flowering Trees

Indian Academy of Sciences (India)

Srimath

shaped corolla. Fruit is large, ellipsoidal, green with a hard and smooth shell containing numerous flattened seeds, which are embedded in fleshy pulp. Calabash tree is commonly grown in the tropical gardens of the world as a botanical oddity.

1. Probability of satellite collision

Science.gov (United States)

Mccarter, J. W.

1972-01-01

A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

2. Choice probability generating functions

DEFF Research Database (Denmark)

Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

2013-01-01

This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

3. Handbook of probability

CERN Document Server

Florescu, Ionut

2013-01-01

THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio

4. Real analysis and probability

CERN Document Server

Ash, Robert B; Lukacs, E

1972-01-01

Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

5. Helping HSE Team in Learning from Accident by Using the Management Oversight and Risk Tree Analysis Method

Directory of Open Access Journals (Sweden)

Iraj Mohammadfam

2016-09-01

Conclusion: The analysis using the MORT method helped the organization learn lessons from the accident, especially at the management level. To prevent similar and dissimilar accidents, special attention should be paid to the inadequate informational network within the organization, inadequate operational readiness, improper implementation of work permits, inadequate and outdated technical information systems for equipment and work processes, and inadequate barriers.

6. Introduction to probability

CERN Document Server

Freund, John E

1993-01-01

Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

7. Probability, Nondeterminism and Concurrency

DEFF Research Database (Denmark)

Varacca, Daniele

Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...

8. Janus-faced probability

CERN Document Server

Rocchi, Paolo

2014-01-01

The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

9. Probability, statistics, and computational science.

Science.gov (United States)

Beerenwinkel, Niko; Siebourg, Juliane

2012-01-01

In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
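
Of the models named above, the Markov chain is the simplest to make concrete. The sketch below is a generic illustration (not an example from the chapter): a two-state chain whose distribution is iterated until it reaches the stationary distribution solving pi = pi P.

```python
# Two-state Markov chain: row i of P gives P(next state | current state i).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One step of the chain: multiply the distribution by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]          # start deterministically in state 0
for _ in range(200):
    dist = step(dist, P)

# The stationary distribution solves pi = pi * P; here pi = (5/6, 1/6),
# since 0.1 * pi_0 = 0.5 * pi_1 at stationarity.
print([round(x, 4) for x in dist])
```

Hidden Markov models layer an emission distribution on top of exactly this transition structure, and the inference algorithms the chapter discusses (e.g. the forward algorithm) are sums over such chain steps.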

10. Relating phylogenetic trees to transmission trees of infectious disease outbreaks.

Science.gov (United States)

Ypma, Rolf J F; van Ballegooijen, W Marijn; Wallinga, Jacco

2013-11-01

Transmission events are the fundamental building blocks of the dynamics of any infectious disease. Much about the epidemiology of a disease can be learned when these individual transmission events are known or can be estimated. Such estimations are difficult and generally feasible only when detailed epidemiological data are available. The genealogy estimated from genetic sequences of sampled pathogens is another rich source of information on transmission history. Optimal inference of transmission events calls for the combination of genetic data and epidemiological data into one joint analysis. A key difficulty is that the transmission tree, which describes the transmission events between infected hosts, differs from the phylogenetic tree, which describes the ancestral relationships between pathogens sampled from these hosts. The trees differ both in timing of the internal nodes and in topology. These differences become more pronounced when a higher fraction of infected hosts is sampled. We show how the phylogenetic tree of sampled pathogens is related to the transmission tree of an outbreak of an infectious disease, by the within-host dynamics of pathogens. We provide a statistical framework to infer key epidemiological and mutational parameters by simultaneously estimating the phylogenetic tree and the transmission tree. We test the approach using simulations and illustrate its use on an outbreak of foot-and-mouth disease. The approach unifies existing methods in the emerging field of phylodynamics with transmission tree reconstruction methods that are used in infectious disease epidemiology.

11. Probability and Measure

CERN Document Server

Billingsley, Patrick

2012-01-01

Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

12. The concept of probability

International Nuclear Information System (INIS)

Bitsakis, E.I.; Nicolaides, C.A.

1989-01-01

The concept of probability is now, and always has been, central to the debate on the interpretation of quantum mechanics. Furthermore, probability permeates all of science, as well as our everyday life. The papers included in this volume, written by leading proponents of the ideas expressed, embrace a broad spectrum of thought and results: mathematical, physical, epistemological, and experimental, both specific and general. The contributions are arranged in parts under the following headings: Following Schroedinger's thoughts; Probability and quantum mechanics; Aspects of the arguments on nonlocality; Bell's theorem and EPR correlations; Real or Gedanken experiments and their interpretation; Questions about irreversibility and stochasticity; and Epistemology, interpretation and culture. (author). refs.; figs.; tabs

13. Predicting student satisfaction with courses based on log data from a virtual learning environment – a neural network and classification tree model

Directory of Open Access Journals (Sweden)

Ivana Đurđević Babić

2015-03-01

Student satisfaction with courses in academic institutions is an important issue and is recognized as a form of support in ensuring effective and quality education, as well as enhancing the student course experience. This paper investigates whether there is a connection between student satisfaction with courses and log data on student courses in a virtual learning environment. Furthermore, it explores whether a successful classification model for predicting student satisfaction with courses can be developed based on course log data, and compares the results obtained from the implemented methods. The research was conducted at the Faculty of Education in Osijek and included analysis of log data and course satisfaction on a sample of third- and fourth-year students. Multilayer Perceptron (MLP) networks with different activation functions and Radial Basis Function (RBF) neural networks, as well as classification tree models, were developed, trained and tested in order to classify students into one of two categories of course satisfaction. Type I and type II errors and input variable importance were used for model comparison, along with classification accuracy. The results indicate that a successful classification model can be created using the tested methods. The MLP model provides the highest average classification accuracy and the lowest preference for misclassification of students with a low level of course satisfaction, although a t-test for the difference in proportions showed that the difference in performance between the compared models is not statistically significant. Student involvement in forum discussions is recognized as a valuable predictor of student satisfaction with courses in all observed models.
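
The classification task described above, predicting a binary satisfaction label from log-data features, can be sketched with a model far simpler than the paper's MLP or RBF networks: a single logistic unit trained by gradient descent on invented, already-scaled features (the feature names and data are hypothetical).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical (forum_posts, logins) features scaled to [0, 1]; label 1 = satisfied.
X = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
y = [1, 1, 0, 0]

w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(500):                       # stochastic gradient descent on log-loss
    for (x1, x2), t in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - t                        # gradient of log-loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

preds = [1 if sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5 else 0 for x1, x2 in X]
print(preds)
```

An MLP generalizes this by stacking such units with nonlinear hidden layers; the training loop (forward pass, error, gradient step) keeps the same shape.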

14. Machine-Learning Classifier for Patients with Major Depressive Disorder: Multifeature Approach Based on a High-Order Minimum Spanning Tree Functional Brain Network.

Science.gov (United States)

Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie

2017-01-01

High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
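
The minimum-spanning-tree step that gives the method its name can be shown generically. This sketch is not the paper's pipeline: it uses an invented 4-region |correlation| matrix, converts it to distances as 1 - |r| (one common convention), and extracts the n - 1 MST edges with Prim's algorithm.

```python
def mst_prim(weights):
    """weights: symmetric distance matrix; returns the list of MST edges (i, j)."""
    n = len(weights)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # Pick the cheapest edge leaving the current tree.
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: weights[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))
    return edges

# Hypothetical |correlation| matrix for four brain regions.
corr = [[1.0, 0.9, 0.2, 0.1],
        [0.9, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.8],
        [0.1, 0.2, 0.8, 1.0]]
w = [[1 - corr[i][j] for j in range(4)] for i in range(4)]
edges = mst_prim(w)
print(sorted(edges))
```

The resulting backbone has exactly n - 1 edges regardless of network size, which is what keeps the subsequent subgraph mining and feature extraction tractable.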

15. Rooted triple consensus and anomalous gene trees

Directory of Open Access Journals (Sweden)

Schmidt Heiko A

2008-04-01

Background: Anomalous gene trees (AGTs) are gene trees with a topology different from a species tree that are more likely to be observed than congruent gene trees. In this paper we propose a rooted triple approach to finding the correct species tree in the presence of AGTs. Results: Based on simulated data, we show that our method outperforms the extended majority rule consensus strategy, while still resolving the species tree. Applying both methods to a metazoan data set of 216 genes, we tested whether AGTs substantially interfere with the reconstruction of the metazoan phylogeny. Conclusion: Evidence of AGTs was not found in this data set, suggesting that erroneously reconstructed gene trees are the most significant challenge in the reconstruction of phylogenetic relationships among species with current data. The new method does, however, rule out the erroneous reconstruction of deep or poorly resolved splits in the presence of lineage sorting.

16. Probability for statisticians

CERN Document Server

Shorack, Galen R

2017-01-01

This 2nd edition textbook offers a rigorous introduction to measure-theoretic probability with particular attention to topics of interest to mathematical statisticians: a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For teaching probability theory to postgraduate statistics students, this is one of the most attractive books available. Of particular interest is the presentation of the major central limit theorems via Stein's method, either prior to or as an alternative to a characteristic function presentation. Additionally, considerable emphasis is placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes censored data martingales. The text includes measure theoretic...

17. Concepts of probability theory

CERN Document Server

Pfeiffer, Paul E

1979-01-01

Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

18. Probability and Bayesian statistics

CERN Document Server

1987-01-01

This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to his memory and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability, across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, and contributions to theoretical statistics an...

19. Probability and Statistical Inference

OpenAIRE

Prosper, Harrison B.

2006-01-01

These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

20. Probabilities in physics

CERN Document Server

Hartmann, Stephan

2011-01-01

Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

1. Probability an introduction

CERN Document Server

Grimmett, Geoffrey

2014-01-01

Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

2. Probability in physics

CERN Document Server

Hemmo, Meir

2012-01-01

What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

3. Probability in quantum mechanics

Directory of Open Access Journals (Sweden)

J. G. Gilson

1982-01-01

Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

4. Quantum computing and probability.

Science.gov (United States)

Ferry, David K

2009-11-25

Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

5. Quantum computing and probability

International Nuclear Information System (INIS)

Ferry, David K

2009-01-01

Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction. (viewpoint)

6. A Comprehensive Probability Project for the Upper Division One-Semester Probability Course Using Yahtzee

Science.gov (United States)

Wilson, Jason; Lawman, Joshua; Murphy, Rachael; Nelson, Marissa

2011-01-01

This article describes a probability project used in an upper division, one-semester probability course with third-semester calculus and linear algebra prerequisites. The student learning outcome focused on developing the skills necessary for approaching project-sized math/stat application problems. These skills include appropriately defining…

7. Algorithms for Decision Tree Construction

KAUST Repository

Chikalov, Igor

2011-01-01

The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such an algorithm may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved the NP-hardness of the decision tree problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and a uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.

8. Coalescent methods for estimating phylogenetic trees.

Science.gov (United States)

Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

2009-10-01

We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

9. Univariate decision tree induction using maximum margin classification

OpenAIRE

Yıldız, Olcay Taner

2012-01-01

In many pattern recognition applications, decision trees are used first due to their simplicity and easily interpretable nature. In this paper, we propose a new decision tree learning algorithm called the univariate margin tree where, for each continuous attribute, the best split is found using convex optimization. Our simulation results on 47 data sets show that the novel margin tree classifier performs at least as well as C4.5 and the linear discriminant tree (LDT) with a similar time complexity. F...

10. Plant a tree in cyberspace: metaphor and analogy as design elements in Web-based learning environments.

Science.gov (United States)

Wolfe, C R

2001-02-01

Analogy and metaphor are figurative forms of communication that help people integrate new information with prior knowledge to facilitate comprehension and appropriate inferences. The novelty and versatility of the Web place cognitive burdens on learners that can be overcome through the use of analogies and metaphors. This paper explores three uses of figurative communication as design elements in Web-based learning environments, and provides empirical illustrations of each. First, extended analogies can be used as the basis of cover stories that create an analogy between the learner's position and a hypothetical situation. The Dragonfly Web pages make extensive use of analogous cover stories in the design of interactive decision-making games. Feedback from visitors, patterns of usage, and external reviews provide evidence of effectiveness. A second approach is visual analogies based on the principles of ecological psychology. An empirical example suggests that visual analogies are most effective when there is a one-to-one correspondence between the base and visual target analogs. The use of learner-generated analogies is a third approach. Data from an offline study with undergraduate science students are presented indicating that generating analogies is associated with significant improvements in the ability to place events in natural history on a time line. It is concluded that cyberspace itself might form the basis of the next guiding metaphor of mind.

11. Flowering Trees

Indian Academy of Sciences (India)

deciduous tree with irregularly-shaped trunk, greyish-white scaly bark and milky latex. Leaves in opposite pairs are simple, oblong and whitish beneath. Flowers that occur in branched inflorescences are white, 2–3 cm across and fragrant. Calyx is glandular inside. Petals bear numerous linear white scales, forming the corona.

12. Flowering Trees

Indian Academy of Sciences (India)

Berrya cordifolia (Willd.) Burret (Syn. B. ammonilla Roxb.) – Trincomali Wood of Tiliaceae is a tall evergreen tree with straight trunk, smooth brownish-grey bark and simple broad leaves. Inflorescence is much branched with white flowers. Stamens are many with golden yellow anthers. Fruit is a capsule with six spreading ...

13. Flowering Trees

Indian Academy of Sciences (India)

Canthium parviflorum Lam. of Rubiaceae is a large shrub that often grows into a small tree with conspicuous spines. Leaves are simple, in pairs at each node and are shiny. Inflorescence is an axillary few-flowered cymose fascicle. Flowers are small (less than 1 cm across), 4-merous and greenish-white. Fruit is ellipsoid ...

14. Flowering Trees

Indian Academy of Sciences (India)

Hook.f. ex Brandis (Yellow Cadamba) of Rubiaceae is a large and handsome deciduous tree. Leaves are simple, large, orbicular, and drawn abruptly at the apex. Flowers are small, yellowish and aggregate into small spherical heads. The corolla is funnel-shaped with five stamens inserted at its mouth. Fruit is a capsule.

15. Flowering Trees

Indian Academy of Sciences (India)

Celtis tetrandra Roxb. of Ulmaceae is a moderately large handsome deciduous tree with green branchlets and grayish-brown bark. Leaves are simple with three to four secondary veins running parallel to the mid vein. Flowers are solitary, male, female and bisexual and inconspicuous. Fruit is berry-like, small and globose ...

16. Flowering Trees

Indian Academy of Sciences (India)

Aglaia elaeagnoidea (A.Juss.) Benth. of Meliaceae is a small-sized evergreen tree of both moist and dry deciduous forests. The leaves are alternate and pinnately compound, terminating in a single leaflet. Leaflets are more or less elliptic with entire margin. Flowers are small on branched inflorescence. Fruit is a globose ...

17. Flowering Trees

Indian Academy of Sciences (India)

Flowers are borne on stiff bunches terminally on short shoots. They are 2-3 cm across, white, sweet-scented with light-brown hairy sepals and many stamens. Loquat fruits are round or pear-shaped, 3-5 cm long and are edible. A native of China, Loquat tree is grown in parks as an ornamental and also for its fruits.

18. Flowering Trees

Indian Academy of Sciences (India)

mid-sized slow-growing evergreen tree with spreading branches that form a dense crown. The bark is smooth, thick, dark and flakes off in large shreds. Leaves are thick, oblong, leathery and bright red when young. The female flowers are drooping and are larger than male flowers. Fruit is large, red in color and velvety.

19. Flowering Trees

Indian Academy of Sciences (India)

Andira inermis (Wright) DC., Dog Almond of Fabaceae is a handsome lofty evergreen tree. Leaves are alternate and pinnately compound with 4–7 pairs of leaflets. Flowers are fragrant and are borne on compact branched inflorescences. Fruit is an ellipsoidal one-seeded drupe that is peculiar to members of this family.

20. Flowering Trees

Indian Academy of Sciences (India)

narrow towards base. Flowers are large and attractive, but emit an unpleasant foetid smell. They appear in small numbers on erect terminal clusters and open at night. Stamens are numerous, pink or white. Style is slender and long, terminating in a small stigma. Fruit is green, ovoid and indistinctly lobed.

1. Flowering Trees

Indian Academy of Sciences (India)

Muntingia calabura L. (Singapore cherry) of Elaeocarpaceae is a medium-sized handsome evergreen tree. Leaves are simple and alternate with sticky hairs. Flowers are bisexual, bear numerous stamens, are white in colour and arise in the leaf axils. Fruit is a berry, edible, with several small seeds embedded in a fleshy pulp ...

2. Flowering Trees

Indian Academy of Sciences (India)

Stamens are fused into a purple staminal tube that is toothed. Fruit is about 0.5 in. across, nearly globose, generally 5-seeded, green but yellow when ripe, quite smooth at first but wrinkled in drying, remaining long on the tree after ripening.

3. Tree Mortality

Science.gov (United States)

Mark J. Ambrose

2012-01-01

Tree mortality is a natural process in all forest ecosystems. However, extremely high mortality also can be an indicator of forest health issues. On a regional scale, high mortality levels may indicate widespread insect or disease problems. High mortality may also occur if a large proportion of the forest in a particular region is made up of older, senescent stands....

4. Flowering Trees

Indian Academy of Sciences (India)

Guaiacum officinale L. (LIGNUM-VITAE) of Zygophyllaceae is a dense-crowned, squat, knobbly, rough and twisted medium-sized evergreen tree with mottled bark. The wood is very hard and resinous. Leaves are compound. The leaflets are smooth, leathery, ovate-elliptical and appear in two pairs. Flowers (about 1.5.

5. The perception of probability.

Science.gov (United States)

Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

2014-01-01

We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

6. Irreversibility and conditional probability

International Nuclear Information System (INIS)

Stuart, C.I.J.M.

1989-01-01

The mathematical entropy - unlike physical entropy - is simply a measure of uniformity for probability distributions in general. So understood, conditional entropies have the same logical structure as conditional probabilities. If, as is sometimes supposed, conditional probabilities are time-reversible, then so are conditional entropies and, paradoxically, both then share this symmetry with physical equations of motion. The paradox is, of course, that probabilities yield a direction to time both in statistical mechanics and quantum mechanics, while the equations of motion do not. The supposed time-reversibility of both conditionals seems also to involve a form of retrocausality that is related to, but possibly not the same as, that described by Costa de Beauregard. The retrocausality is paradoxically at odds with the generally presumed irreversibility of the quantum mechanical measurement process. Further paradox emerges if the supposed time-reversibility of the conditionals is linked with the idea that the thermodynamic entropy is the same thing as 'missing information', since this confounds the thermodynamic and mathematical entropies. However, it is shown that irreversibility is a formal consequence of conditional entropies and, hence, of conditional probabilities also. 8 refs. (Author)

7. The pleasures of probability

CERN Document Server

Isaac, Richard

1995-01-01

The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science, which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

8. Experimental Probability in Elementary School

Science.gov (United States)

Andrew, Lane

2009-01-01

Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
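The "experiment first" approach can be made concrete with a short simulation, e.g. estimating the chance that two dice sum to 7 and comparing it with the exact value 6/36 = 1/6. This is an illustrative sketch, not material from the article:

```python
# Illustrative probability experiment: roll two dice many times and
# estimate P(sum == 7) empirically, then compare with the exact 6/36.
import random

random.seed(1)                       # fixed seed for reproducibility
trials = 100_000
hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7
           for _ in range(trials))
print(f"estimated {hits / trials:.3f} vs exact {1/6:.3f}")
```

Running the experiment before deriving 6/36 grounds the formula in observed frequencies, which is exactly the pedagogical point the article makes.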

9. Improving Ranking Using Quantum Probability

OpenAIRE

Melucci, Massimo

2011-01-01

The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

10. Choice probability generating functions

DEFF Research Database (Denmark)

Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

2010-01-01

This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models.
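As a standard textbook illustration of the gradient property (a well-known special case, not a result specific to this paper): for the multinomial logit model the CPGF is the log-sum-exp function, and differentiating it recovers the familiar logit choice probabilities:

```latex
G(v_1,\dots,v_J) = \log \sum_{j=1}^{J} e^{v_j},
\qquad
P_i = \frac{\partial G}{\partial v_i}
    = \frac{e^{v_i}}{\sum_{j=1}^{J} e^{v_j}} .
```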

11. Probability and stochastic modeling

CERN Document Server

Rotar, Vladimir I

2012-01-01

Contents: Basic Notions (Sample Space and Events; Probabilities; Counting Techniques). Independence and Conditional Probability (Independence; Conditioning; The Borel-Cantelli Theorem). Discrete Random Variables (Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions). Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation. Generating Functions; Branching Processes; Random Walk Revisited (Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk). Markov Chains (Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity). Continuous Random Variables (Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation). Distribution F...

12. Collision Probability Analysis

DEFF Research Database (Denmark)

Hansen, Peter Friis; Pedersen, Preben Terndrup

1998-01-01

It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out, etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds ... probability, i.e. a study of the navigator's role in resolving critical situations; a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

13. Estimating Subjective Probabilities

DEFF Research Database (Denmark)

Andersen, Steffen; Fountain, John; Harrison, Glenn W.

2014-01-01

either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

14. Introduction to imprecise probabilities

CERN Document Server

Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

2014-01-01

In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

15. Classic Problems of Probability

CERN Document Server

Gorroochurn, Prakash

2012-01-01

"A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

16. Usefulness of problem tree, objective tree and logical framework ...

African Journals Online (AJOL)

The discussion has led to the conclusion that higher learning institutions are not adequately preparing graduates to face the increasing labor market demands in terms of skills and competitiveness. Having outlined the roots of the problem through the problem tree, the researchers proposed potential strategies to handle the ...

17. Decision tree modeling using R.

Science.gov (United States)

Zhang, Zhongheng

2016-08-01

In the machine learning field, decision tree learners are powerful and easy to interpret. They employ a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because growing a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables available for selection. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
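The recursive binary partitioning idea can be sketched in plain Python. This toy version uses a greedy Gini-impurity split rather than the conditional inference tests of ctree, and all function names are made up for the illustration:

```python
# Toy recursive binary partitioning: at each node, pick the (feature,
# threshold) split minimizing weighted child impurity, and recurse until
# the node is pure or a depth limit (a simple stopping criterion) is hit.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(X, y):
    """Return (score, feature, threshold) with lowest weighted impurity."""
    best, n = None, len(y)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if best is None or score < best[0]:
                best = (score, j, t)
    return best

def grow(X, y, depth=0, max_depth=3):
    """Recursively partition until pure or a stopping criterion is met."""
    if depth >= max_depth or gini(y) == 0.0 \
            or (split := best_split(X, y)) is None:
        return max(set(y), key=y.count)          # leaf: majority class
    _, j, t = split
    li = [i for i in range(len(y)) if X[i][j] <= t]
    ri = [i for i in range(len(y)) if X[i][j] > t]
    return (j, t,
            grow([X[i] for i in li], [y[i] for i in li], depth + 1, max_depth),
            grow([X[i] for i in ri], [y[i] for i in ri], depth + 1, max_depth))

def predict(tree, x):
    while isinstance(tree, tuple):               # internal node: descend
        j, t, left, right = tree
        tree = left if x[j] <= t else right
    return tree                                  # leaf label

# Tiny demo: one feature separates the classes at 0.3
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = ["a", "a", "a", "b", "b", "b"]
tree = grow(X, y)
print(predict(tree, [0.25]), predict(tree, [0.75]))  # a b
```

Real implementations (R's rpart, partykit::ctree, scikit-learn) add significance testing, pruning and multiway handling on top of this same recursive skeleton.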

18. Epistemology and Probability

CERN Document Server

Plotnitsky, Arkady

2010-01-01

Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

19. Transition probabilities for atoms

International Nuclear Information System (INIS)

Kim, Y.K.

1980-01-01

Current status of advanced theoretical methods for transition probabilities for atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test for the theoretical methods

20. (Almost) practical tree codes

KAUST Repository

Khina, Anatoly

2016-08-15

We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

1. Spatial probability aids visual stimulus discrimination

Directory of Open Access Journals (Sweden)

Michael Druker

2010-08-01

Full Text Available We investigated whether the statistical predictability of a target's location would influence how quickly and accurately it was classified. Recent results have suggested that spatial probability can be a cue for the allocation of attention in visual search. One explanation for probability cuing is spatial repetition priming. In our two experiments we used probability distributions that were continuous across the display rather than relying on a few arbitrary screen locations. This produced fewer spatial repeats and allowed us to dissociate the effect of a high probability location from that of short-term spatial repetition. The task required participants to quickly judge the color of a single dot presented on a computer screen. In Experiment 1, targets were more probable in an off-center hotspot of high probability that gradually declined to a background rate. Targets garnered faster responses if they were near earlier target locations (priming) and if they were near the high probability hotspot (probability cuing). In Experiment 2, target locations were chosen on three concentric circles around fixation. One circle contained 80% of targets. The value of this ring distribution is that it allowed for a spatially restricted high probability zone in which sequentially repeated trials were not likely to be physically close. Participant performance was sensitive to the high-probability circle in addition to the expected effects of eccentricity and the distance to recent targets. These two experiments suggest that inhomogeneities in spatial probability can be learned and used by participants on-line and without prompting as an aid for visual stimulus discrimination, and that spatial repetition priming is not a sufficient explanation for this effect. Future models of attention should consider explicitly incorporating the probabilities of target locations and features.

2. Predicting incomplete gene microarray data with the use of supervised learning algorithms

CSIR Research Space (South Africa)

Twala, B

2010-10-01

Full Text Available that prediction using supervised learning can be improved in probabilistic terms given incomplete microarray data. This imputation approach is based on the a priori probability of each value determined from the instances at that node of a decision tree (PDT...
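The abstract describes imputing missing microarray values from the a priori probability of each value at a decision-tree node. A minimal sketch of that idea, reduced to a single-split "tree" (one node per value of a grouping attribute) with hypothetical data; the paper's actual PDT procedure uses full decision trees:

```python
from collections import Counter

def impute_by_node_prior(rows, target_col, split_col):
    """Impute missing values in target_col using the most probable value
    among instances sharing the same split_col value (a one-level
    'decision tree' node), falling back to the global mode."""
    global_mode = Counter(
        r[target_col] for r in rows if r[target_col] is not None
    ).most_common(1)[0][0]
    # A priori value distribution per node (one node per split_col value).
    node_counts = {}
    for r in rows:
        if r[target_col] is not None:
            node_counts.setdefault(r[split_col], Counter())[r[target_col]] += 1
    for r in rows:
        if r[target_col] is None:
            c = node_counts.get(r[split_col])
            r[target_col] = c.most_common(1)[0][0] if c else global_mode
    return rows

# Hypothetical expression records with one missing value.
rows = [
    {"gene": "g1", "expr": "high"}, {"gene": "g1", "expr": "high"},
    {"gene": "g1", "expr": None},   {"gene": "g2", "expr": "low"},
]
impute_by_node_prior(rows, "expr", "gene")
print(rows[2]["expr"])  # high
```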

3. Negative probability in the framework of combined probability

OpenAIRE

Burgin, Mark

2013-01-01

Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

4. Induction of Ordinal Decision Trees

NARCIS (Netherlands)

J.C. Bioch (Cor); V. Popova (Viara)

2003-01-01

textabstractThis paper focuses on the problem of monotone decision trees from the point of view of the multicriteria decision aid methodology (MCDA). By taking into account the preferences of the decision maker, an attempt is made to bring closer similar research within machine learning and MCDA.

5. Can Children Read Evolutionary Trees?

Science.gov (United States)

Ainsworth, Shaaron; Saffer, Jessica

2013-01-01

Representations of the "tree of life" such as cladograms show the history of lineages and their relationships. They are increasingly found in formal and informal learning settings. Unfortunately, there is evidence that these representations can be challenging to interpret correctly. This study explored the question of whether children…

6. Surface tree languages and parallel derivation trees

NARCIS (Netherlands)

Engelfriet, Joost

1976-01-01

The surface tree languages obtained by top-down finite state transformation of monadic trees are exactly the frontier-preserving homomorphic images of sets of derivation trees of ETOL systems. The corresponding class of tree transformation languages is therefore equal to the class of ETOL languages.

7. Automated Tree Crown Delineation and Biomass Estimation from Airborne LiDAR data: A Comparison of Statistical and Machine Learning Methods

Science.gov (United States)

Gleason, C. J.; Im, J.

2011-12-01

Airborne LiDAR remote sensing has been used effectively in assessing forest biomass because of its canopy penetrating effects and its ability to accurately describe the canopy surface. Current research in assessing biomass using airborne LiDAR focuses on either the individual tree as a base unit of study or statistical representations of a small aggregation of trees (i.e., plot level), and both methods usually rely on regression against field data to model the relationship between the LiDAR-derived data (e.g., volume) and biomass. This study estimates biomass for mixed forests and coniferous plantations (Picea abies) within Heiberg Memorial Forest, Tully, NY, at both the plot and individual tree level. Plots are regularly spaced with a radius of 13m, and field data include diameter at breast height (dbh), tree height, and tree species. Field data collection and LiDAR data acquisition were seasonally coincident and both obtained in August of 2010. Resulting point cloud density was >5pts/m2. LiDAR data were processed to provide a canopy height surface, and a combination of watershed segmentation, active contouring, and genetic algorithm optimization was applied to delineate individual trees from the surface. This updated delineation method was shown to be more accurate than traditional watershed segmentation. Once trees had been delineated, four biomass estimation models were applied and compared: support vector regression (SVR), linear mixed effects regression (LME), random forest (RF), and Cubist regression. Candidate variables to be used in modeling were derived from the LiDAR surface, and include metrics of height, width, and volume per delineated tree footprint. Previously published allometric equations provided field estimates of biomass to inform the regressions and calculate their accuracy via leave-one-out cross validation. This study found that for forests such as those found in the study area, aggregation of individual trees to form a plot-based estimate of
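The accuracy assessment above uses leave-one-out cross validation: the model is refit n times, each time predicting the single held-out sample. A minimal sketch with a simple volume-to-biomass linear regression and hypothetical values (the study's actual models are SVR, LME, RF, and Cubist):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def loocv_rmse(xs, ys):
    """Leave-one-out cross validation: hold out each sample in turn,
    refit on the rest, and accumulate the squared prediction error."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(tx, ty)
        errs.append((a * xs[i] + b - ys[i]) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

# Hypothetical LiDAR-derived tree volumes (m^3) vs field biomass (kg).
vol = [0.8, 1.1, 1.9, 2.4, 3.0, 3.6]
bio = [410, 530, 980, 1190, 1520, 1800]
print(loocv_rmse(vol, bio))
```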

8. Contributions to quantum probability

International Nuclear Information System (INIS)

Fritz, Tobias

2010-01-01

Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic approaches. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome

9. Bayesian Probability Theory

Science.gov (United States)

von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

2014-06-01

Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

10. Contributions to quantum probability

Energy Technology Data Exchange (ETDEWEB)

Fritz, Tobias

2010-06-25

Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic approaches. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

11. Waste Package Misload Probability

International Nuclear Information System (INIS)

Knudsen, J.K.

2001-01-01

The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the events. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a
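The counting step described above reduces to dividing categorized event counts by the total number of FA movements. A minimal sketch with hypothetical counts (the report's actual event data are not reproduced here):

```python
def event_probability(event_count, total_moves):
    """Point estimate of the per-movement probability of a fuel-assembly
    handling event (misload or damage): events / total movements."""
    if total_moves <= 0:
        raise ValueError("total_moves must be positive")
    return event_count / total_moves

# Hypothetical counts: 3 misload events over 100000 FA movements.
p_misload = event_probability(3, 100_000)
print(p_misload)  # 3e-05
```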

12. Probability theory and applications

CERN Document Server

Hsu, Elton P

1999-01-01

This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.

13. Paradoxes in probability theory

CERN Document Server

Eckhardt, William

2013-01-01

Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved; however, the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

14. Measurement uncertainty and probability

CERN Document Server

Willink, Robin

2013-01-01

A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

15. Probability & Perception: The Representativeness Heuristic in Action

Science.gov (United States)

Lu, Yun; Vasko, Francis J.; Drummond, Trevor J.; Vasko, Lisa E.

2014-01-01

If the prospective students of probability lack a background in mathematical proofs, hands-on classroom activities may work well to help them to learn to analyze problems correctly. For example, students may physically roll a die twice to count and compare the frequency of the sequences. Tools such as graphing calculators or Microsoft Excel®…
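The die-rolling activity above targets the representativeness heuristic: students often judge a "regular-looking" sequence such as (6, 6) to be rarer than a "mixed" one such as (3, 5), although every ordered two-roll sequence has probability 1/36. A minimal simulation sketch of that comparison (values and seed are illustrative):

```python
import random

def sequence_frequency(seq, trials=100_000, rng=None):
    """Fraction of simulated two-die rolls matching the exact ordered
    sequence seq, e.g. (6, 6)."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    hits = sum(
        (rng.randint(1, 6), rng.randint(1, 6)) == seq for _ in range(trials)
    )
    return hits / trials

# Both ordered sequences have theoretical probability 1/36 ~ 0.0278.
print(sequence_frequency((6, 6)))
print(sequence_frequency((3, 5)))
```

Both empirical frequencies land near 1/36, countering the intuition that doubles are special.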

16. Model uncertainty and probability

International Nuclear Information System (INIS)

Parry, G.W.

1994-01-01

This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example

17. Retrocausality and conditional probability

International Nuclear Information System (INIS)

Stuart, C.I.J.M.

1989-01-01

Costa de Beauregard has proposed that physical causality be identified with conditional probability. The proposal is shown to be vulnerable on two accounts. The first, though mathematically trivial, seems to be decisive so far as the current formulation of the proposal is concerned. The second lies in a physical inconsistency which seems to have its source in a Copenhagen-like disavowal of realism in quantum mechanics. 6 refs. (Author)

18. Probability via expectation

CERN Document Server

Whittle, Peter

1992-01-01

This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

19. Trees are good, but…

Science.gov (United States)

E.G. McPherson; F. Ferrini

2010-01-01

We know that "trees are good," and most people believe this to be true. But if this is so, why are so many trees neglected, and so many tree wells empty? An individual's attitude toward trees may result from their firsthand encounters with specific trees. Understanding how attitudes about trees are shaped, particularly aversion to trees, is critical to the business of...

20. How to Identify and Interpret Evolutionary Tree Diagrams

Science.gov (United States)

Kong, Yi; Anderson, Trevor; Pelaez, Nancy

2016-01-01

Evolutionary trees are key tools for modern biology and are commonly portrayed in textbooks to promote learning about biological evolution. However, many people have difficulty in understanding what evolutionary trees are meant to portray. In fact, some ideas that current professional biologists depict with evolutionary trees are neither clearly…

1. Discrimination aware decision tree learning

NARCIS (Netherlands)

Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.

2010-01-01

Recently, the following problem of discrimination aware classification was introduced: given a labeled dataset and an attribute B, find a classifier with high predictive accuracy that at the same time does not discriminate on the basis of the given attribute B. This problem is motivated by the fact

2. Discrimination aware decision tree learning

NARCIS (Netherlands)

Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.

2010-01-01

Recently, the following discrimination aware classification problem was introduced: given a labeled dataset and an attribute B, find a classifier with high predictive accuracy that at the same time does not discriminate on the basis of the given attribute B. This problem is motivated by the fact

3. Probability mapping of contaminants

Energy Technology Data Exchange (ETDEWEB)

Rautman, C.A.; Kaplan, P.G. [Sandia National Labs., Albuquerque, NM (United States); McGraw, M.A. [Univ. of California, Berkeley, CA (United States); Istok, J.D. [Oregon State Univ., Corvallis, OR (United States); Sigda, J.M. [New Mexico Inst. of Mining and Technology, Socorro, NM (United States)

1994-04-01

Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds).
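The post-processing step described above has a simple core: across many equally likely simulated images, the probability of exceeding a contamination threshold at each cell is the fraction of realizations in which that cell exceeds it. A minimal sketch with hypothetical values (real applications use hundreds of geostatistical realizations over a 2-D grid):

```python
def exceedance_probability(realizations, threshold):
    """For each cell, the fraction of equally likely simulated images
    whose value exceeds the contamination threshold."""
    n = len(realizations)
    cells = len(realizations[0])
    return [
        sum(r[c] > threshold for r in realizations) / n for c in range(cells)
    ]

# Three toy realizations of a 4-cell transect (hypothetical ppm values).
sims = [
    [12.0, 48.0, 95.0, 20.0],
    [10.0, 55.0, 80.0, 35.0],
    [15.0, 40.0, 99.0, 33.0],
]
print(exceedance_probability(sims, 30.0))  # [0.0, 1.0, 1.0, 0.6666666666666666]
```

Mapping these per-cell probabilities back onto the grid yields the probability map used for risk-adjusted remediation decisions.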

4. Probability mapping of contaminants

International Nuclear Information System (INIS)

Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.

1994-01-01

Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds)

5. Probability of causation approach

International Nuclear Information System (INIS)

Jose, D.E.

1988-01-01

Probability of causation (PC) is sometimes viewed as a great improvement by those persons who are not happy with the present rulings of courts in radiation cases. The author does not share that hope and expects that PC will not play a significant role in these issues for at least the next decade. If it is ever adopted in a legislative compensation scheme, it will be used in a way that is unlikely to please most scientists. Consequently, PC is a false hope for radiation scientists, and its best contribution may well lie in some of the spin-off effects, such as an influence on medical practice

6. Generalized Probability Functions

Directory of Open Access Journals (Sweden)

Alexandre Souto Martinez

2009-01-01

Full Text Available From the integration of nonsymmetrical hyperboles, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable to generalize some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn the attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, the generalized Gaussian and Laplace pdf. Their cumulative functions and moments were also obtained analytically.
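A sketch of one common convention for such a one-parameter pair: the generalized logarithm (x**q - 1)/q and its inverse (1 + q*x)**(1/q), both recovering the ordinary log and exp as q -> 0. This convention is an assumption for illustration; the paper's exact parametrization may differ:

```python
import math

def gen_log(x, q):
    """Generalized logarithm (x**q - 1)/q; reduces to math.log(x) at q = 0."""
    if q == 0:
        return math.log(x)
    return (x ** q - 1) / q

def gen_exp(x, q):
    """Inverse of gen_log: (1 + q*x)**(1/q); reduces to math.exp(x) at q = 0."""
    if q == 0:
        return math.exp(x)
    return (1 + q * x) ** (1 / q)

# The pair inverts exactly for any admissible q:
print(round(gen_exp(gen_log(5.0, 0.3), 0.3), 6))  # 5.0
```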

7. Probability in High Dimension

Science.gov (United States)

2014-06-30


8. Modular tree automata

DEFF Research Database (Denmark)

Bahr, Patrick

2012-01-01

Tree automata are traditionally used to study properties of tree languages and tree transformations. In this paper, we consider tree automata as the basis for modular and extensible recursion schemes. We show, using well-known techniques, how to derive from standard tree automata highly modular...

9. Simple street tree sampling

Science.gov (United States)

David J. Nowak; Jeffrey T. Walton; James Baldwin; Jerry. Bond

2015-01-01

Information on street trees is critical for management of this important resource. Sampling of street tree populations provides an efficient means to obtain street tree population information. Long-term repeat measures of street tree samples supply additional information on street tree changes and can be used to report damages from catastrophic events. Analyses of...

10. Interpreting CNNs via Decision Trees

OpenAIRE

Zhang, Quanshi; Yang, Yu; Wu, Ying Nian; Zhu, Song-Chun

2018-01-01

This paper presents a method to learn a decision tree to quantitatively explain the logic of each prediction of a pre-trained convolutional neural network (CNN). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e....

11. Comprehensive decision tree models in bioinformatics.

Directory of Open Access Journals (Sweden)

Gregor Stiglic

Full Text Available PURPOSE: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. METHODS: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. RESULTS: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. CONCLUSIONS: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets

12. Comprehensive decision tree models in bioinformatics.

Science.gov (United States)

Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

2012-01-01

Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly

13. On the number of vertices of each rank in phylogenetic trees and their generalizations

OpenAIRE

Bóna, Miklós

2015-01-01

We find surprisingly simple formulas for the limiting probability that the rank of a randomly selected vertex in a randomly selected phylogenetic tree or generalized phylogenetic tree is a given integer.

14. Probable maximum flood control

International Nuclear Information System (INIS)

DeGabriele, C.E.; Wu, C.L.

1991-11-01

This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

15. The Reliability and Stability of an Inferred Phylogenetic Tree from Empirical Data.

Science.gov (United States)

Katsura, Yukako; Stanley, Craig E; Kumar, Sudhir; Nei, Masatoshi

2017-03-01

The reliability of a phylogenetic tree obtained from empirical data is usually measured by the bootstrap probability (Pb) of the interior branches of the tree. If the bootstrap probability is high for most branches, the tree is considered to be reliable. If some interior branches show relatively low bootstrap probabilities, we are not sure that the inferred tree is really reliable. Here, we propose another quantity measuring the reliability of the tree, called the stability of a subtree. This quantity refers to the probability (Ps) of obtaining a given subtree of the inferred tree. We then show that if the tree is to be reliable, both Pb and Ps must be high. We also show that Ps is given by the bootstrap probability of the subtree with the closest outgroup sequence, and a computer program, RESTA, for computing the Pb and Ps values is presented. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
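
At its core, the bootstrap probability of a branch is the frequency of the corresponding clade among replicate trees. A minimal sketch (illustrative only, not the RESTA program mentioned above), in which each replicate tree is summarized by its set of clades:

```python
# Hypothetical helper: bootstrap support for one clade, given replicate
# trees each summarized as a set of clades (frozensets of taxon names).

def bootstrap_support(clade, replicate_trees):
    """Fraction of replicate trees whose clade set contains `clade`."""
    clade = frozenset(clade)
    hits = sum(1 for clades in replicate_trees if clade in clades)
    return hits / len(replicate_trees)

# Three toy replicates over taxa A-D
reps = [
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABD")},
    {frozenset("AC"), frozenset("ABC")},
]
print(bootstrap_support("AB", reps))   # clade AB appears in 2 of 3 replicates
```

In real analyses the replicates come from re-inferring the tree on resampled alignment columns; the Ps of the paper is the same frequency computed for a subtree together with its closest outgroup sequence.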

16. City of Pittsburgh Trees

Data.gov (United States)

Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Trees cared for and managed by the City of Pittsburgh Department of Public Works Forestry Division. Tree Benefits are calculated using the National Tree Benefit...

17. Modular representation and analysis of fault trees

Energy Technology Data Exchange (ETDEWEB)

Olmos, J; Wolf, L [Massachusetts Inst. of Tech., Cambridge (USA). Dept. of Nuclear Engineering

1978-08-01

An analytical method to describe fault tree diagrams in terms of their modular compositions is developed. Fault tree structures are characterized by recursively relating the top tree event to all its basic component inputs through a set of equations defining each of the modules of the fault tree. It is shown that such a modular description is an extremely valuable tool for making a quantitative analysis of fault trees. The modularization methodology has been implemented in the PL-MOD computer code, written in the PL/1 language, which is capable of modularizing fault trees containing replicated components and replicated modular gates. PL-MOD can in addition handle mutually exclusive inputs and explicit higher-order symmetric (k-out-of-n) gates. The step-by-step modularization of fault trees performed by PL-MOD is demonstrated, and it is shown how this procedure is only made possible through an extensive use of the list processing tools available in PL/1. A number of nuclear reactor safety system fault trees were analyzed. PL-MOD performed the modularization and evaluation of the modular occurrence probabilities and Vesely-Fussell importance measures for these systems very efficiently. In particular, its execution time for the modularization of a PWR High Pressure Injection System reduced fault tree was 25 times faster than that necessary to generate its equivalent minimal cut-set description using MOCUS, a code considered to be fast by present standards.
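
The gate-by-gate evaluation that such a modular description enables can be sketched minimally as follows. This is not PL-MOD: it assumes independent, unreplicated basic events (the replicated-component case is precisely what PL-MOD's modules are designed to handle), and the tree encoding is our own:

```python
# A fault tree node is ("basic", p) or ("and", children) / ("or", children).

def event_probability(node):
    """Top-event probability, assuming independent basic events."""
    kind = node[0]
    if kind == "basic":
        return node[1]
    probs = [event_probability(child) for child in node[1]]
    if kind == "and":                 # all inputs must occur
        result = 1.0
        for p in probs:
            result *= p
        return result
    if kind == "or":                  # at least one input occurs
        none = 1.0
        for p in probs:
            none *= (1.0 - p)
        return 1.0 - none
    raise ValueError(f"unknown gate {kind!r}")

top = ("or", [("and", [("basic", 0.1), ("basic", 0.2)]),
              ("basic", 0.05)])
print(event_probability(top))   # 1 - (1 - 0.02)*(1 - 0.05) ≈ 0.069
```

Once a tree is modularized, each module can be evaluated this way in isolation and its probability substituted upward, which is what makes the modular approach efficient.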

18. Tritium concentrations in tree ring cellulose

International Nuclear Information System (INIS)

Kaji, Toshio; Momoshima, Noriyuki; Takashima, Yoshimasa.

1989-01-01

Measurements of tritium (tissue bound tritium; TBT) concentration in tree rings are presented and discussed. Such measurement is expected to provide a useful means of estimating past tritium levels in the environment. The concentration of tritium bound in the tissue (TBT) of a tree ring is considered to reflect the environmental tritium level in the area at the time of the formation of the ring, while the concentration of tritium in the free water of the tissue represents the current environmental tritium level. First, the tritium concentration in tree ring cellulose sampled from a cedar tree grown in a typical environment in Fukuoka Prefecture is compared with the tritium concentration in precipitation in Tokyo. Results show that the year-to-year variations in the tritium concentration in the tree rings agree well with those in precipitation. The maximum concentration, which occurred in 1963, is attributed to atmospheric nuclear testing, which was performed frequently during the 1961 - 1963 period. Measurement is also made of the tritium concentration in tree ring cellulose sampled from a pine tree grown near the Isotope Center of Kyushu University (Fukuoka). Results indicate that the background level is higher, probably due to the release of tritium from the facilities around the pine tree. Thus, measurement of tritium in tree ring cellulose clearly shows the year-to-year variation in the tritium concentration in the atmosphere. (N.K.)

19. Success tree method of resources evaluation

International Nuclear Information System (INIS)

Chen Qinglan; Sun Wenpeng

1994-01-01

By applying reliability theory from systems engineering, the success tree method is used to transfer the expert's recognition of metallogenetic regularities into the form of a success tree. The aim of resources evaluation is achieved by calculating the metallogenetic probability or favorability of the top event of the success tree. This article introduces in detail the source and principle of the success tree method and three kinds of calculation methods, and expounds concretely how to establish the success tree of comprehensive uranium metallogenesis as well as the procedure by which the resources evaluation is performed. Because this method has no restrictions on the number of known deposits or the calculated area, it is applicable to resources evaluation for different mineral species, types and scales and possesses good prospects of development.

20. Probability and rational choice

Directory of Open Access Journals (Sweden)

David Botting

2014-05-01

Full Text Available http://dx.doi.org/10.5007/1808-1711.2014v18n1p1 In this paper I will discuss the rationality of reasoning about the future. There are two things that we might like to know about the future: which hypotheses are true and what will happen next. To put it in philosophical language, I aim to show that there are methods by which inferring to a generalization (selecting a hypothesis) and inferring to the next instance (singular predictive inference) can be shown to be normative and the method itself shown to be rational, where this is due in part to being based on evidence (although not in the same way) and in part on a prior rational choice. I will also argue that these two inferences have been confused, being distinct not only conceptually (as nobody disputes) but also in their results (the value given to the probability of the hypothesis being not in general that given to the next instance), and that methods that are adequate for one are not by themselves adequate for the other. A number of debates over method founder on this confusion and do not show what the debaters think they show.

1. Undergraduate Students’ Difficulties in Reading and Constructing Phylogenetic Tree

Science.gov (United States)

Sa'adah, S.; Tapilouw, F. S.; Hidayat, T.

2017-02-01

Representation is a very important tool for communicating scientific concepts. Biologists produce phylogenetic representations to express their understanding of evolutionary relationships. The phylogenetic tree is a visual representation depicting a hypothesis about evolutionary relationships and is widely used in the biological sciences. Use of phylogenetic trees is currently growing across many disciplines in biology. Consequently, learning about phylogenetic trees has become an important part of biological education and an interesting area for biology education research. However, research has shown that many students struggle with interpreting the information that phylogenetic trees depict. The purpose of this study was to investigate undergraduate students' difficulties in reading and constructing a phylogenetic tree. The method of this study is a descriptive method. In this study, we used questionnaires, interviews, multiple choice and open-ended questions, reflective journals and observations. The findings showed students experiencing difficulties, especially in constructing a phylogenetic tree. The students' responses indicated that the main reason for difficulties in constructing a phylogenetic tree is difficulty placing taxa in a phylogenetic tree based on the data provided, so that the constructed phylogenetic tree does not describe the actual evolutionary relationship (incorrect relatedness). Students also have difficulties in determining the sister group, character synapomorphy and autapomorphy from the data provided (character table) and in comparing phylogenetic trees. According to them, building a phylogenetic tree is more difficult than reading one. The findings of this study provide information to help undergraduate instructors and students overcome learning difficulties in reading and constructing phylogenetic trees.

2. Unrealistic phylogenetic trees may improve phylogenetic footprinting.

Science.gov (United States)

Nettling, Martin; Treutler, Hendrik; Cerquides, Jesus; Grosse, Ivo

2017-06-01

The computational investigation of DNA binding motifs from binding sites is one of the classic tasks in bioinformatics and a prerequisite for understanding gene regulation as a whole. Due to the development of sequencing technologies and the increasing number of available genomes, approaches based on phylogenetic footprinting have become increasingly attractive. Phylogenetic footprinting requires phylogenetic trees with attached substitution probabilities for quantifying the evolution of binding sites, but these trees and substitution probabilities are typically not known and cannot be estimated easily. Here, we investigate the influence of phylogenetic trees with different substitution probabilities on the classification performance of phylogenetic footprinting using synthetic and real data. For synthetic data we find that the classification performance is highest when the substitution probability used for phylogenetic footprinting is similar to that used for data generation. For real data, however, we typically find that the classification performance of phylogenetic footprinting surprisingly increases with increasing substitution probabilities and is often highest for unrealistically high substitution probabilities close to one. This finding suggests that choosing realistic model assumptions might not always yield optimal predictions in general and that choosing unrealistically high substitution probabilities close to one might actually improve the classification performance of phylogenetic footprinting. The proposed PF is implemented in Java and can be downloaded from https://github.com/mgledi/PhyFoo. Contact: martin.nettling@informatik.uni-halle.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

3. Implementation of Project-Based Learning (PjBL) through One Man One Tree to Improve Students' Attitude and Behavior to Support "Sekolah Adiwiyata"

Science.gov (United States)

Risnani; Sumarmi; Astina, I. Komang

2017-01-01

The attitude and behavior of the students of class XI-6 in relation to environmental awareness is very low. It proves that there is no student involvement in environmental conservation. The purpose of this study is to increase students' attitude and behavior related to environmental conservation using "One Man One Tree" Project Based…

4. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

Science.gov (United States)

O'Reilly, Joseph E; Donoghue, Philip C J

2018-03-01

Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
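
In essence, the majority-rule consensus the authors advocate keeps exactly those clades that occur in more than half of the sampled trees. A minimal sketch under the simplifying assumption that each sampled tree is represented by its set of clades:

```python
from collections import Counter

def majority_rule_clades(trees, threshold=0.5):
    """Clades occurring in more than `threshold` of the sampled trees."""
    counts = Counter(clade for clades in trees for clade in clades)
    n = len(trees)
    return {clade for clade, c in counts.items() if c / n > threshold}

# Toy posterior sample of three trees over taxa A-D
sample = [
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABD")},
    {frozenset("AB"), frozenset("ABC")},
]
consensus = majority_rule_clades(sample)
print(sorted(len(c) for c in consensus))   # [2, 3]: AB and ABC survive, ABD drops
```

An MCC tree, by contrast, picks one fully resolved sampled topology, which is why it can commit to clades that appear in only a small fraction of a diffuse sample.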

5. Multivariate analysis of flow cytometric data using decision trees

Directory of Open Access Journals (Sweden)

Svenja Simon

2012-04-01

Full Text Available Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host side, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents now allow the simultaneous assessment of multiple markers on a single-cell level, generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called 'induction of decision trees' on flow cytometric data. In order to examine whether the production of a certain cytokine depends on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees' quality, we used stratified 5-fold cross-validation and chose the 'best' tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was not only dependent on the co-expression of others per se, but also on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high-dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets.
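
One preprocessing step the abstract mentions, weighting the data according to class probabilities, can be sketched as inverse-frequency weights. The helper name and the particular balancing scheme are our illustration, not necessarily the authors' exact formula:

```python
from collections import Counter

def class_balance_weights(labels):
    """Weight each sample inversely to its class frequency, so that every
    class contributes equally in total; weights sum to len(labels)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return [n / (k * counts[y]) for y in labels]

labels = ["pos", "neg", "neg", "neg"]
weights = class_balance_weights(labels)
print(weights)   # the rare "pos" sample gets weight 2.0, each "neg" gets 2/3
```

With such weights, a tree-induction criterion (e.g. weighted Gini impurity) is no longer dominated by the majority class, which matters for cytokine-positive populations that are often rare.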

6. COVAL, Compound Probability Distribution for Function of Probability Distribution

International Nuclear Information System (INIS)

Astolfi, M.; Elbaz, J.

1979-01-01

1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
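
For discrete inputs, the 'numerical transformation' idea can be illustrated by enumerating the distribution of a function of independent variables. This sketch only illustrates the concept and does not reproduce COVAL's code; the function and toy distributions are our own:

```python
from itertools import product
from collections import defaultdict

def pushforward(f, *pmfs):
    """Distribution of f(X1, ..., Xn) for independent discrete inputs,
    each given as a dict {value: probability}."""
    out = defaultdict(float)
    for combo in product(*(pmf.items() for pmf in pmfs)):
        values = [v for v, _ in combo]
        prob = 1.0
        for _, p in combo:
            prob *= p                     # independence: multiply marginals
        out[f(*values)] += prob           # accumulate onto the output value
    return dict(out)

# Toy reliability-style example: distribution of a total load X + Y
load = {1: 0.5, 2: 0.5}
extra = {1: 0.25, 2: 0.75}
print(pushforward(lambda x, y: x + y, load, extra))
# {2: 0.125, 3: 0.5, 4: 0.375}
```

For continuous inputs, the same pushforward is done by discretizing or sampling the input distributions, which is the setting the code summary above describes.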

7. Lognormal Approximations of Fault Tree Uncertainty Distributions.

Science.gov (United States)

El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

2018-01-26

Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
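
As a hedged illustration of the Monte Carlo baseline the closed-form method is compared against, the following sketch propagates lognormally distributed basic-event probabilities through a hypothetical top event. The 2-out-of-3 gate, the rare-event independence approximation, and all parameter values are made up for illustration and are not from the article:

```python
import math
import random

def top_event(p1, p2, p3):
    # Approximate 2-out-of-3: OR of the pairwise ANDs, treating the
    # pairwise terms as independent (a common rare-event approximation).
    return 1 - (1 - p1 * p2) * (1 - p1 * p3) * (1 - p2 * p3)

random.seed(1)
samples = []
for _ in range(20000):
    # Each basic-event probability is lognormal (median 1e-3), capped at 1
    ps = [min(1.0, random.lognormvariate(math.log(1e-3), 0.5))
          for _ in range(3)]
    samples.append(top_event(*ps))
samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median ~ {median:.2e}, 95th percentile ~ {p95:.2e}")
```

The closed-form lognormal approximation described above aims to recover such percentiles without drawing any samples, which is where the computational savings come from.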

8. Pointwise probability reinforcements for robust statistical inference.

Science.gov (United States)

Frénay, Benoît; Verleysen, Michel

2014-02-01

Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include e.g. outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement, which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually, since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.

9. Heart sounds analysis using probability assessment

Czech Academy of Sciences Publication Activity Database

Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel

2017-01-01

Roč. 38, č. 8 (2017), s. 1685-1700 ISSN 0967-3334 R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : heart sounds * FFT * machine learning * signal averaging * probability assessment Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical engineering Impact factor: 2.058, year: 2016

10. From exemplar to grammar: a probabilistic analogy-based model of language learning.

Science.gov (United States)

Bod, Rens

2009-07-01

While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase-structure trees should be assigned to initial sentences, s/he allows (implicitly) for all possible trees and lets linguistic experience decide which is the "best" tree for each sentence. The best tree is obtained by maximizing "structural analogy" between a sentence and previous sentences, which is formalized by the most probable shortest combination of subtrees from all trees of previous sentences. Corpus-based experiments with this model on the Penn Treebank and the Childes database indicate that it can learn both exemplar-based and rule-based aspects of language, ranging from phrasal verbs to auxiliary fronting. By having learned the syntactic structures of sentences, we have also learned the grammar implicit in these structures, which can in turn be used to produce new sentences. We show that our model mimics children's language development from item-based constructions to abstract constructions, and that the model can simulate some of the errors made by children in producing complex questions. Copyright © 2009 Cognitive Science Society, Inc.

11. Categorizing ideas about trees: a tree of trees.

Science.gov (United States)

Fisler, Marie; Lecointre, Guillaume

2013-01-01

The aim of this study is to explore whether matrices and MP trees used to produce systematic categories of organisms could be useful for producing categories of ideas in the history of science. We study the history of the use of trees in systematics to represent the diversity of life from 1766 to 1991. We apply to those ideas a method inspired by the coding of homologous parts of organisms. We discretize conceptual parts of ideas, writings and drawings about trees contained in 41 main writings; we detect shared parts among authors, code them into a 91-character matrix and use a tree representation to show who shares what with whom. In other words, we propose a hierarchical representation of the shared ideas about trees among authors: this produces a "tree of trees." Then, we categorize schools of tree-representations. Classical schools like "cladists" and "pheneticists" are recovered, but others are not: "gradists" are separated into two blocks, one of them being called here "grade theoreticians." We propose new interesting categories like the "buffonian school," the "metaphoricians," and those using "strictly genealogical classifications." We consider that networks are not useful for representing shared ideas at the present step of the study. A cladogram is made to show who shares what with whom, but also the heterobathmy and homoplasy of characters. The present cladogram does not model processes of transmission of ideas about trees; here it is mostly used to test for proximity of ideas of the same age and for categorization.

12. A Tale of Two Probabilities

Science.gov (United States)

Falk, Ruma; Kendig, Keith

2013-01-01

Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.
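
The two scenarios usually at issue in this problem can be separated by enumerating the equally likely ordered families:

```python
from itertools import product

# All ordered two-child families, each equally likely
families = list(product("BG", repeat=2))   # ('B','B'), ('B','G'), ('G','B'), ('G','G')

# Scenario 1: we learn only that at least one child is a girl
with_girl = [f for f in families if "G" in f]
p_both_1 = sum(1 for f in with_girl if f == ("G", "G")) / len(with_girl)

# Scenario 2: we learn that a specific child (say, the first-born) is a girl
first_girl = [f for f in families if f[0] == "G"]
p_both_2 = sum(1 for f in first_girl if f == ("G", "G")) / len(first_girl)

print(p_both_1, p_both_2)   # 1/3 versus 1/2
```

The notorious disagreement dissolves once the conditioning event is made explicit: the two pieces of information define different subsets of the sample space.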

13. Urban tree growth modeling

Science.gov (United States)

E. Gregory McPherson; Paula J. Peper

2012-01-01

This paper describes three long-term tree growth studies conducted to evaluate tree performance because repeated measurements of the same trees produce critical data for growth model calibration and validation. Several empirical and process-based approaches to modeling tree growth are reviewed. Modeling is more advanced in the fields of forestry and...

14. Keeping trees as assets

Science.gov (United States)

Kevin T. Smith

2009-01-01

Landscape trees have real value and contribute to making livable communities. Making the most of that value requires providing trees with the proper care and attention. As potentially large and long-lived organisms, trees benefit from commitment to regular care that respects the natural tree system. This system captures, transforms, and uses energy to survive, grow,...

15. Introduction to probability with R

CERN Document Server

Baclawski, Kenneth

2008-01-01

FOREWORD PREFACE Sets, Events, and Probability The Algebra of Sets The Bernoulli Sample Space The Algebra of Multisets The Concept of Probability Properties of Probability Measures Independent Events The Bernoulli Process The R Language Finite Processes The Basic Models Counting Rules Computing Factorials The Second Rule of Counting Computing Probabilities Discrete Random Variables The Bernoulli Process: Tossing a Coin The Bernoulli Process: Random Walk Independence and Joint Distributions Expectations The Inclusion-Exclusion Principle General Random Variable

16. A first course in probability

CERN Document Server

Ross, Sheldon

2014-01-01

A First Course in Probability, Ninth Edition, features clear and intuitive explanations of the mathematics of probability theory, outstanding problem sets, and a variety of diverse examples and applications. This book is ideal for an upper-level undergraduate or graduate level introduction to probability for math, science, engineering and business students. It assumes a background in elementary calculus.

17. Visualizing and Understanding Probability and Statistics: Graphical Simulations Using Excel

Science.gov (United States)

Gordon, Sheldon P.; Gordon, Florence S.

2009-01-01

The authors describe a collection of dynamic interactive simulations for teaching and learning most of the important ideas and techniques of introductory statistics and probability. The modules cover such topics as randomness, simulations of probability experiments such as coin flipping, dice rolling and general binomial experiments, a simulation…

18. Classification and regression trees

CERN Document Server

Breiman, Leo; Olshen, Richard A; Stone, Charles J

1984-01-01

The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

19. A brief introduction to probability.

Science.gov (United States)

Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

2018-02-01

The theory of probability has been debated for centuries: back in 1600, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution," a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, the most relevant distribution applied in statistical analysis.
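
For the normal distribution emphasized above, interval probabilities follow from the cumulative distribution function, which can be written in terms of the standard library's error function:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# The familiar 95% interval of a standard normal variable
print(normal_cdf(1.96) - normal_cdf(-1.96))   # ~0.95
```

The same function gives p-values and coverage probabilities throughout introductory statistical inference.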

20. Phylogenomics reveal a robust fungal tree of life

NARCIS (Netherlands)

Kuramae, Eiko E.; Robert, Vincent; Snel, Berend; Weiß, Michael; Boekhout, Teun

2006-01-01

Our understanding of the tree of life (TOL) is still fragmentary. Until recently, molecular phylogeneticists have built trees based on ribosomal RNA sequences and selected protein sequences, which, however, usually suffered from lack of support for the deeper branches and inconsistencies probably...

1. Propensity, Probability, and Quantum Theory

Science.gov (United States)

Ballentine, Leslie E.

2016-08-01

Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

2. Fault tree analysis for urban flooding

NARCIS (Netherlands)

Ten Veldhuis, J.A.E.; Clemens, F.H.L.R.; Van Gelder, P.H.A.J.M.

2008-01-01

Traditional methods to evaluate flood risk mostly focus on storm events as the main cause of flooding. Fault tree analysis is a technique that is able to model all potential causes of flooding and to quantify both the overall probability of flooding and the contributions of all causes of flooding to...

3. There's Life in Hazard Trees

Science.gov (United States)

Mary Torsello; Toni McLellan

The goals of hazard tree management programs are to maximize public safety and maintain a healthy sustainable tree resource. Although hazard tree management frequently targets removal of trees or parts of trees that attract wildlife, it can take into account a diversity of tree values. With just a little extra planning, hazard tree management can be highly beneficial...

4. Calculating method on human error probabilities considering influence of management and organization

International Nuclear Information System (INIS)

Gao Jia; Huang Xiangrui; Shen Zupei

1996-01-01

This paper is concerned with how management and organizational influences can be factored into quantifying human error probabilities in risk assessments, using a three-level Influence Diagram (ID), which was originally only a tool for the construction and representation of models such as decision-making trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a method for quantification assessment of the ID, which can be applied to quantifying the probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach.

5. Prediction and probability in sciences

International Nuclear Information System (INIS)

Klein, E.; Sacquin, Y.

1998-01-01

This book reports the 7 presentations made at the third meeting 'physics and fundamental questions', whose theme was probability and prediction. The concept of probability, invented to apprehend random phenomena, has become an important branch of mathematics, and its range of application spreads from radioactivity to species evolution via cosmology or the management of very weak risks. The notion of probability is the basis of quantum mechanics and is thus bound to the very nature of matter. The 7 topics are: - radioactivity and probability, - statistical and quantum fluctuations, - quantum mechanics as a generalized probability theory, - probability and the irrational efficiency of mathematics, - can we foresee the future of the universe?, - chance, eventuality and necessity in biology, - how to manage weak risks? (A.C.)

6. Applied probability and stochastic processes

CERN Document Server

Sumita, Ushio

1999-01-01

Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...

7. Covering tree with stars

DEFF Research Database (Denmark)

Baumbach, Jan; Guo, Jiong; Ibragimov, Rashid

2015-01-01

We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars by adding edges between them such that the resulting...... tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...

8. Posbist fault tree analysis of coherent systems

International Nuclear Information System (INIS)

Huang, H.-Z.; Tong Xin; Zuo, Ming J.

2004-01-01

When the failure probability of a system is extremely small or necessary statistical data from the system is scarce, it is very difficult or impossible to evaluate its reliability and safety with conventional fault tree analysis (FTA) techniques. New techniques are needed to predict and diagnose such a system's failures and evaluate its reliability and safety. In this paper, we first provide a concise overview of FTA. Then, based on the posbist reliability theory, event failure behavior is characterized in the context of possibility measures and the structure function of the posbist fault tree of a coherent system is defined. In addition, we define the AND operator and the OR operator based on the minimal cut of a posbist fault tree. Finally, a model of posbist fault tree analysis (posbist FTA) of coherent systems is presented. The use of the model for quantitative analysis is demonstrated with a real-life safety system
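
Gate evaluation with possibility measures can be sketched with the standard possibility calculus for non-interactive events (min for AND, max for OR). This is illustrative only: the paper defines its AND and OR operators via minimal cut sets of the posbist fault tree, which may differ from the plain min/max rule, and the event possibilities below are assumed values.

```python
# Hedged sketch of gate evaluation under possibility measures.
# Standard non-interactive possibility calculus is assumed here:
# AND -> min of the inputs, OR -> max of the inputs.
def and_gate(possibilities):
    return min(possibilities)

def or_gate(possibilities):
    return max(possibilities)

# Example tree (assumed): TOP = (E1 AND E2) OR E3
e1, e2, e3 = 0.3, 0.7, 0.2   # failure possibilities, illustrative
top = or_gate([and_gate([e1, e2]), e3])
print(top)  # → 0.3
```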

9. Poisson Processes in Free Probability

OpenAIRE

An, Guimei; Gao, Mingchu

2015-01-01

We prove a multidimensional Poisson limit theorem in free probability, and define joint free Poisson distributions in a non-commutative probability space. We define (compound) free Poisson processes explicitly, similar to the definitions of (compound) Poisson processes in classical probability. We prove that the sum of finitely many freely independent compound free Poisson processes is a compound free Poisson process. We give a step by step procedure for constructing a (compound) free Poisso...

10. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

Science.gov (United States)

We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

11. Analytical propagation of uncertainties through fault trees

International Nuclear Information System (INIS)

Hauptmanns, Ulrich

2002-01-01

A method is presented which enables one to propagate uncertainties described by uniform probability density functions through fault trees. The approach is analytical. It is based on calculating the expected value and the variance of the top event probability. These two parameters are then equated with the corresponding ones of a beta-distribution. An example calculation comparing the analytically calculated beta-pdf (probability density function) with the top event pdf obtained using the Monte-Carlo method shows excellent agreement at a much lower expense of computing time
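
The moment-matching step described above can be sketched directly: equate the mean and variance of the top event probability with those of a Beta(a, b) distribution. In this illustration the moments are obtained by crude Monte Carlo over an assumed two-out-of-three-event fault tree with uniform basic-event probabilities; the tree and intervals are invented for the example, not taken from the paper.

```python
# Hedged sketch: fit a beta distribution to the top event probability
# by matching its first two moments (the analytical approach computes
# these moments exactly; Monte Carlo stands in for them here).
import random

def beta_from_moments(mean, var):
    """Return (a, b) of the Beta distribution with the given mean and variance."""
    common = mean * (1.0 - mean) / var - 1.0  # must be > 0 for a valid fit
    return mean * common, (1.0 - mean) * common

def top_event(p1, p2, p3):
    # Assumed example tree: TOP = (E1 AND E2) OR E3, independent events.
    return 1.0 - (1.0 - p1 * p2) * (1.0 - p3)

random.seed(1)
samples = [top_event(random.uniform(0.01, 0.05),
                     random.uniform(0.02, 0.08),
                     random.uniform(0.001, 0.01))
           for _ in range(100_000)]
m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
a, b = beta_from_moments(m, v)
print(f"mean={m:.5f} var={v:.2e} -> Beta(a={a:.1f}, b={b:.1f})")
```

As a sanity check, Beta(2, 2) has mean 0.5 and variance 0.05, and the fit recovers exactly those parameters from those moments.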

12. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

Science.gov (United States)

Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

2012-01-01

In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
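
The bagging idea evaluated above can be shown in miniature: train many trees on bootstrap resamples and average their predicted probabilities. The base learner here is a one-split decision stump on a single synthetic feature, a deliberate simplification of the full regression trees used in the study; the data and seeds are invented for illustration.

```python
# Hedged sketch of bootstrap aggregation (bagging) for probability
# estimation with decision stumps as the base learner.
import random

def fit_stump(xs, ys):
    """Pick the split threshold minimising squared error; predict each side's mean."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        pl, pr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - pl) ** 2 for y in left) + sum((y - pr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, pl, pr)
    _, t, pl, pr = best
    return lambda x: pl if x <= t else pr

def bag(xs, ys, n_trees=25, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap resample
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(s(x) for s in stumps) / n_trees  # averaged probability

# Toy data: outcome probability rises with x (think risk score vs mortality).
rng = random.Random(42)
xs = [rng.uniform(0, 1) for _ in range(200)]
ys = [1 if rng.random() < x else 0 for x in xs]
predict = bag(xs, ys)
print(round(predict(0.1), 2), round(predict(0.9), 2))
```

Averaging over resamples smooths the step-function predictions of individual stumps, which is precisely why bagged trees give better-calibrated probabilities than a single tree.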

13. Human reliability analysis using event trees

International Nuclear Information System (INIS)

Heslinga, G.

1983-01-01

The shut-down procedure of a technologically complex installation such as a nuclear power plant consists of many human actions, some of which have to be performed several times. The procedure is regarded as a chain of modules of specific actions, some of which are analyzed separately. The analysis is carried out by making a Human Reliability Analysis event tree (HRA event tree) of each action, breaking down each action into small elementary steps. The application of event trees in human reliability analysis entails more difficulties than in the case of technical systems, where event trees have mainly been used until now. The most important reason is that the operator is able to recover from a wrong performance; memory influences play a significant role. In this study these difficulties are dealt with theoretically. The following conclusions can be drawn: (1) in principle, event trees may be used in human reliability analysis; (2) although in practice the operator will partly recover from his fault, theoretically this can be described as restarting the whole event tree; (3) compact formulas have been derived by which the probability of reaching a specific failure consequence on passing through the HRA event tree after several recoveries can be calculated. (orig.)

14. IND - THE IND DECISION TREE PACKAGE

Science.gov (United States)

Buntine, W.

1994-01-01

A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which has good prediction of classes on new data. Standard algorithms are CART (by Breiman, Friedman, Olshen and Stone) and ID3 and its successor C4 (by Quinlan). As well as reimplementing parts of these algorithms and offering experimental control suites, IND also introduces Bayesian and MML methods and more sophisticated search in growing trees. These produce more accurate class probability estimates that are important in applications like diagnosis. IND is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or it may be omitted. One of the attributes is designated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research. IND can be operated in a CART-like mode (but without regression trees, surrogate splits or multivariate splits), and in a mode like the early version of C4. Advanced features allow more extensive search, interactive control and display of tree growing, and Bayesian and MML algorithms for tree pruning and smoothing. These often produce more accurate class probability estimates at the leaves. IND also comes with a comprehensive experimental control suite. IND consists of four basic kinds of routines: data manipulation routines, tree generation routines, tree testing routines, and tree display routines. The data manipulation routines are used to partition a single large data set into smaller training and test sets. The

15. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

Science.gov (United States)

Wu, Yufeng

2012-03-01

Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.

16. The Problem of Predecessors on Spanning Trees

Directory of Open Access Journals (Sweden)

V. S. Poghosyan

2011-01-01

Full Text Available We consider the equiprobable distribution of spanning trees on the square lattice. All bonds of each tree can be oriented uniquely with respect to an arbitrarily chosen site called the root. The problem of predecessors is to find the probability that a path along the oriented bonds passes sequentially through fixed sites i and j. The conformal field theory for the Potts model predicts the fractal dimension of the path to be 5/4. Using this result, we show that the probability in the predecessors problem for two sites separated by a large distance r decreases as P(r) ∼ r^(−3/4). If sites i and j are nearest neighbors on the square lattice, the probability P(1) = 5/16 can be found from the analytical theory developed for the sandpile model. The known equivalence between the loop erased random walk (LERW) and the directed path on the spanning tree states that P(1) is the probability for the LERW started at i to reach the neighboring site j. By analogy with the self-avoiding walk, P(1) can be called the return probability. Extensive Monte-Carlo simulations confirm the theoretical predictions.

17. Unders and Overs: Using a Dice Game to Illustrate Basic Probability Concepts

Science.gov (United States)

McPherson, Sandra Hanson

2015-01-01

In this paper, the dice game "Unders and Overs" is described and presented as an active learning exercise to introduce basic probability concepts. The implementation of the exercise is outlined, and the presentation of the various resulting probability concepts is described.
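
The probabilities behind the game can be obtained by enumerating the two-dice sample space: the sum is "under" 7, exactly 7, or "over" 7. The exact payout scheme varies by classroom, so only the probabilities are computed in this sketch.

```python
# Enumerate the 36 equally likely two-dice outcomes and compute the
# probabilities of the three "Unders and Overs" betting regions.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely rolls

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_under = prob(lambda o: sum(o) < 7)
p_seven = prob(lambda o: sum(o) == 7)
p_over  = prob(lambda o: sum(o) > 7)
print(p_under, p_seven, p_over)  # → 5/12 1/6 5/12
```

The symmetry p_under = p_over = 5/12 and the comparatively likely "exactly 7" region (1/6) are exactly the kind of counterintuitive facts the exercise is designed to surface.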

18. Trees and highway safety.

Science.gov (United States)

2011-03-01

To minimize the severity of run-off-road collisions of vehicles with trees, departments of transportation (DOTs) commonly establish clear zones for trees and other fixed objects. Caltrans' clear zone on freeways is 30 feet minimum (40 feet pref...

19. Distribution of cavity trees in midwestern old-growth and second-growth forests

Science.gov (United States)

Zhaofei Fan; Stephen R. Shifley; Martin A. Spetich; Frank R. Thompson; David R. Larsen

2003-01-01

We used classification and regression tree analysis to determine the primary variables associated with the occurrence of cavity trees and the hierarchical structure among those variables. We applied that information to develop logistic models predicting cavity tree probability as a function of diameter, species group, and decay class. Inventories of cavity abundance in...

20. Frankincense production is determined by tree size and tapping frequency and intensity

NARCIS (Netherlands)

Eshete, A.; Sterck, F.J.; Bongers, F.

2012-01-01

Resin production in trees probably depends on trade-offs within the tree, its environment and on tapping activities. Frankincense, the highly esteemed resin from the dry woodland frankincense tree Boswellia papyrifera, has been exploited in traditional ways for millennia. New exploitation practices lead to

1. Gap probability - Measurements and models of a pecan orchard

Science.gov (United States)

Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

1992-01-01

Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown level parameters include the shape of the crown envelope and spacing of crowns; leaf level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.

2. Probability inequalities for decomposition integrals

Czech Academy of Sciences Publication Activity Database

Agahi, H.; Mesiar, Radko

2017-01-01

Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords : Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf

3. Expected utility with lower probabilities

DEFF Research Database (Denmark)

Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

1994-01-01

An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...

4. Invariant probabilities of transition functions

CERN Document Server

Zaharopol, Radu

2014-01-01

The structure of the set of all the invariant probabilities and the structure of various types of individual invariant probabilities of a transition function are two topics of significant interest in the theory of transition functions, and are studied in this book. The results obtained are useful in ergodic theory and the theory of dynamical systems, which, in turn, can be applied in various other areas (like number theory). They are illustrated using transition functions defined by flows, semiflows, and one-parameter convolution semigroups of probability measures. In this book, all results on transition probabilities that have been published by the author between 2004 and 2008 are extended to transition functions. The proofs of the results obtained are new. For transition functions that satisfy very general conditions the book describes an ergodic decomposition that provides relevant information on the structure of the corresponding set of invariant probabilities. Ergodic decomposition means a splitting of t...

5. Introduction to probability with Mathematica

CERN Document Server

Hastings, Kevin J

2009-01-01

Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence; Discrete Distributions; Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation; Continuous Distributions; The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

6. Linear positivity and virtual probability

International Nuclear Information System (INIS)

Hartle, James B.

2004-01-01

We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics

7. Minnesota's Forest Trees. Revised.

Science.gov (United States)

Miles, William R.; Fuller, Bruce L.

This bulletin describes 46 of the more common trees found in Minnesota's forests and windbreaks. The bulletin contains two tree keys, a summer key and a winter key, to help the reader identify these trees. Besides the two keys, the bulletin includes an introduction, instructions for key use, illustrations of leaf characteristics and twig…

8. D2-tree

DEFF Research Database (Denmark)

Brodal, Gerth Stølting; Sioutas, Spyros; Pantazos, Kostas

2015-01-01

We present a new overlay, called the Deterministic Decentralized tree (D2-tree). The D2-tree compares favorably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers...

9. Covering tree with stars

DEFF Research Database (Denmark)

Baumbach, Jan; Guo, Jian-Ying; Ibragimov, Rashid

2013-01-01

We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars by adding edges between them such that the resulting ...

10. Winter Birch Trees

Science.gov (United States)

Sweeney, Debra; Rounds, Judy

2011-01-01

Trees are great inspiration for artists. Many art teachers find themselves inspired and maybe somewhat obsessed with the natural beauty and elegance of the lofty tree, and how it changes through the seasons. One such tree that grows in several regions and always looks magnificent, regardless of the time of year, is the birch. In this article, the…

11. Total well dominated trees

DEFF Research Database (Denmark)

Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.

cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....

12. Building of fuzzy decision trees using ID3 algorithm

Science.gov (United States)

Begenova, S. B.; Avdeenko, T. V.

2018-05-01

Decision trees are widely used in the fields of machine learning and artificial intelligence. This popularity is due to the fact that, with the help of decision trees, graphical models and text rules can be built that are easily understood by the end user. Because of the inaccuracy of observations and other uncertainties, data collected in the environment often take an imprecise form. Therefore, fuzzy decision trees are becoming popular in the field of machine learning. This article presents a method that combines the features of the two above-mentioned approaches: a graphical representation of the rule system in the form of a tree and a fuzzy representation of the data. The approach exploits the high comprehensibility of decision trees and the ability of fuzzy representations to cope with inaccurate and uncertain information. The resulting learning method is suitable for classification problems with both numerical and symbolic features. In the article, solution illustrations and numerical results are given.
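
The attribute-selection step at the core of ID3 can be sketched with crisp data: choose the attribute with the highest information gain. Fuzzy decision trees generalize this by weighting each example with a membership degree instead of counting it once; that weighting is omitted here, and the toy data set is invented for the example.

```python
# Sketch of entropy-based attribute selection as used by ID3.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Gain of splitting (rows, labels) on the attribute at index `attr`."""
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return entropy(labels) - remainder

# Toy data: (outlook, windy) -> play. Attribute 0 separates the classes
# perfectly; attribute 1 carries no information.
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0), information_gain(rows, labels, 1))  # → 1.0 0.0
```

ID3 would split on attribute 0 here; the fuzzy variant replaces the counts inside `entropy` with sums of membership degrees.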

13. STRIDE: Species Tree Root Inference from Gene Duplication Events.

Science.gov (United States)

Emms, David M; Kelly, Steven

2017-12-01

The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

14. TreePics: visualizing trees with pictures

Directory of Open Access Journals (Sweden)

Nicolas Puillandre

2017-09-01

Full Text Available While many programs are available to edit phylogenetic trees, associating pictures with branch tips in an efficient and automatic way is not an available option. Here, we present TreePics, a standalone software that uses a web browser to visualize phylogenetic trees in Newick format and that associates pictures (typically, pictures of the voucher specimens) with the tip of each branch. Pictures are visualized as thumbnails and can be enlarged by a mouse rollover. Further, several pictures can be selected and displayed in a separate window for visual comparison. TreePics works either online or in a full standalone version, where it can display trees with several thousands of pictures (depending on the memory available. We argue that TreePics can be particularly useful in a preliminary stage of research, such as to quickly detect conflicts between a DNA-based phylogenetic tree and morphological variation, that may be due to contamination that needs to be removed prior to final analyses, or the presence of species complexes.

15. Probable Inference and Quantum Mechanics

International Nuclear Information System (INIS)

Grandy, W. T. Jr.

2009-01-01

In its current very successful interpretation the quantum theory is fundamentally statistical in nature. Although commonly viewed as a probability amplitude whose (complex) square is a probability, the wavefunction or state vector continues to defy consensus as to its exact meaning, primarily because it is not a physical observable. Rather than approach this problem directly, it is suggested that it is first necessary to clarify the precise role of probability theory in quantum mechanics, either as applied to, or as an intrinsic part of the quantum theory. When all is said and done the unsurprising conclusion is that quantum mechanics does not constitute a logic and probability unto itself, but adheres to the long-established rules of classical probability theory while providing a means within itself for calculating the relevant probabilities. In addition, the wavefunction is seen to be a description of the quantum state assigned by an observer based on definite information, such that the same state must be assigned by any other observer based on the same information, in much the same way that probabilities are assigned.

16. Study on probability distribution of fire scenarios in risk assessment to emergency evacuation

International Nuclear Information System (INIS)

Chu Guanquan; Wang Jinhui

2012-01-01

Event tree analysis (ETA) is a frequently-used technique to analyze the probabilities of probable fire scenarios. The event probability is usually characterized by a definite value. It is not appropriate to use definite values, as these estimates may be the result of poor-quality statistics and limited knowledge. Without addressing uncertainties, ETA will give imprecise results, and the credibility of the risk assessment will be undermined. This paper presents an approach to address event probability uncertainties and to analyze the probability distribution of a probable fire scenario. ETA is performed to construct probable fire scenarios. The activation time of every event is characterized as a stochastic variable by considering uncertainties in the fire growth rate and other input variables. To obtain the probability distribution of a probable fire scenario, a Markov chain is proposed in combination with ETA. To demonstrate the approach, a case study is presented.
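
The effect of treating branch probabilities as uncertain rather than definite can be sketched by Monte Carlo over a small event tree. The paper combines ETA with a Markov chain; plain sampling stands in for that machinery here, and the two-branch fire tree and its uniform intervals are invented for illustration.

```python
# Hedged sketch: propagate branch-probability uncertainty through an
# event tree by sampling, yielding a distribution (not a single value)
# for the worst scenario's probability.
import random

# Initiating fire -> sprinkler works? -> alarm works?   (assumed tree)
branches = {
    "sprinkler": (0.90, 0.99),   # interval for P(success), assumed
    "alarm":     (0.85, 0.98),
}

def scenario_prob(rng):
    p_spr = rng.uniform(*branches["sprinkler"])
    p_alm = rng.uniform(*branches["alarm"])
    # Worst scenario: both safety functions fail.
    return (1 - p_spr) * (1 - p_alm)

rng = random.Random(0)
samples = sorted(scenario_prob(rng) for _ in range(50_000))
mean = sum(samples) / len(samples)
p95 = samples[int(0.95 * len(samples))]
print(f"mean={mean:.4f}  95th percentile={p95:.4f}")
```

Reporting the 95th percentile alongside the mean is what a definite-value ETA cannot do, and it is exactly the information a risk assessor needs when the inputs are poorly known.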

17. Failure probability under parameter uncertainty.

Science.gov (United States)

Gerrard, R; Tsanakas, A

2011-05-01

In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
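
The central claim, that setting a threshold from estimated parameters inflates the expected frequency of failure above the nominal level, can be checked by simulation. A normal risk factor and a sample of 20 observations are illustrative assumptions, not the article's general location-scale setting.

```python
# Hedged sketch: a threshold set as if parameter estimates were exact
# yields an expected failure frequency above the nominal level.
import random
from statistics import NormalDist

nominal = 0.01                       # required failure probability
z = NormalDist().inv_cdf(1 - nominal)
true = NormalDist(mu=0.0, sigma=1.0) # true (unknown to the decision-maker) law

rng = random.Random(7)

def realised_failure_prob(n=20):
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    mu_hat = sum(data) / n
    s_hat = (sum((x - mu_hat) ** 2 for x in data) / (n - 1)) ** 0.5
    threshold = mu_hat + z * s_hat    # control set from the estimates
    return 1.0 - true.cdf(threshold)  # true probability of exceeding it

expected = sum(realised_failure_prob() for _ in range(20_000)) / 20_000
print(f"nominal={nominal}  expected frequency={expected:.4f}")
```

With n = 20 the expected frequency comes out noticeably above 1%, illustrating the article's point; remedy (1) would lower the nominal level to compensate, while remedy (2) would widen the distribution used to set the threshold.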

18. Indexing Density Models for Incremental Learning and Anytime Classification on Data Streams

DEFF Research Database (Denmark)

Seidl, Thomas; Assent, Ira; Kranen, Philipp

2009-01-01

Classification of streaming data faces three basic challenges: it has to deal with huge amounts of data, the varying time between two stream data items must be used best possible (anytime classification) and additional training data must be incrementally learned (anytime learning) for applying...... to the individual object to be classified) a hierarchy of mixture densities that represent kernel density estimators at successively coarser levels. Our probability density queries together with novel classification improvement strategies provide the necessary information for very effective classification at any...... point of interruption. Moreover, we propose a novel evaluation method for anytime classification using Poisson streams and demonstrate the anytime learning performance of the Bayes tree....

19. Probability with applications and R

CERN Document Server

Dobrow, Robert P

2013-01-01

An introduction to probability at the undergraduate level Chance and randomness are encountered on a daily basis. Authored by a highly qualified professor in the field, Probability: With Applications and R delves into the theories and applications essential to obtaining a thorough understanding of probability. With real-life examples and thoughtful exercises from fields as diverse as biology, computer science, cryptology, ecology, public health, and sports, the book is accessible for a variety of readers. The book's emphasis on simulation through the use of the popular R software language c

20. A philosophical essay on probabilities

CERN Document Server

Laplace, Marquis de

1996-01-01

A classic of science, this famous essay by "the Newton of France" introduces lay readers to the concepts and uses of probability theory. It is of especial interest today as an application of mathematical techniques to problems in social and biological sciences. Generally recognized as the founder of the modern phase of probability theory, Laplace here applies the principles and general results of his theory "to the most important questions of life, which are, in effect, for the most part, problems in probability." Thus, without the use of higher mathematics, he demonstrates the application

1. Seeing the Wood for the Trees: Applying the Dual-Memory System Model to Investigate Expert Teachers' Observational Skills in Natural Ecological Learning Environments

Science.gov (United States)

Stolpe, Karin; Bjorklund, Lars

2012-01-01

This study aims to investigate two expert ecology teachers' ability to attend to essential details in a complex environment during a field excursion, as well as how they teach this ability to their students. In applying a cognitive dual-memory system model for learning, we also suggest a rationale for their behaviour. The model implies two…

2. Dependencies in event trees analyzed by Petri nets

International Nuclear Information System (INIS)

Nývlt, Ondřej; Rausand, Marvin

2012-01-01

This paper discusses how non-marked Petri nets can be used to model and analyze event trees where the pivotal (branching) events are dependent and modeled by fault trees. The dependencies may, for example, be caused by shared utilities, shared components, or general common cause failures that are modeled by beta-factor models. These dependencies are cumbersome to take into account when using standard event-/fault tree modeling techniques, and may lead to significant errors in the calculated end-state probabilities of the event tree if they are not properly analyzed. A new approach is proposed in this paper, where the whole event tree is modeled by a non-marked Petri net and where P-invariants, representing the structural properties of the Petri net, are used to obtain the frequency of each end-state of the event tree with dependencies. The new approach is applied to a real example of an event tree analysis of the Strahov highway tunnel in Prague, Czech Republic, including two types of dependencies (shared Programmable Logic Controllers and Common Cause Failures). - Highlights: ► In this paper, we model and analyze event trees (ET) using Petri nets. ► The pivotal events of the modeled event trees are dependent (e.g., shared PLCs, CCF). ► A new method based on P-invariants to obtain probabilities of end states is proposed. ► Method is shown in the case study of the Strahov tunnel in the Czech Republic.
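
The error the abstract warns about can be seen on a toy two-event tree. The sketch below is a simplified illustration (not the paper's Petri-net/P-invariant method, and all numbers are invented): it couples the two pivotal events with a beta-factor common cause, taking the common-cause part as beta times the smaller failure probability, and compares the end-state probabilities with the independence assumption.

```python
# End-state probabilities of a two-event tree, with and without a
# beta-factor common-cause dependency (illustrative sketch only).

def end_state_probs_independent(p1, p2):
    """End states assuming the two pivotal events fail independently."""
    return {
        "both_ok":   (1 - p1) * (1 - p2),
        "only_1":    p1 * (1 - p2),
        "only_2":    (1 - p1) * p2,
        "both_fail": p1 * p2,
    }

def end_state_probs_beta(p1, p2, beta):
    """Simplified beta-factor coupling: a shared cause with probability
    beta * min(p1, p2) fails both events together (an assumption made
    for this sketch); the rest of each failure probability stays
    independent."""
    ccf = beta * min(p1, p2)
    i1, i2 = p1 - ccf, p2 - ccf
    return {
        "both_ok":   (1 - ccf) * (1 - i1) * (1 - i2),
        "only_1":    (1 - ccf) * i1 * (1 - i2),
        "only_2":    (1 - ccf) * (1 - i1) * i2,
        "both_fail": ccf + (1 - ccf) * i1 * i2,
    }

ind = end_state_probs_independent(0.01, 0.01)
dep = end_state_probs_beta(0.01, 0.01, 0.1)
print(ind["both_fail"], dep["both_fail"])  # dependency inflates both_fail
```

Even a 10% beta factor raises the worst end-state probability by roughly an order of magnitude here, which is why ignoring such dependencies gives significantly wrong end-state probabilities.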

3. Root and Branch Reform: Teaching City Kids about Urban Trees

Science.gov (United States)

Walker, Mark

2017-01-01

In today's electronic age, suburban and city children are increasingly disconnected with the natural world. Studying trees allows children to learn about the world they live in and can teach a variety of useful topics contained within the National Curriculum in England. Knowledge of trees is specifically required in the science curriculum at key…

4. Spectra of chemical trees

International Nuclear Information System (INIS)

Balasubramanian, K.

1982-01-01

A method is developed for obtaining the spectra of trees of NMR and chemical interest. The characteristic polynomials of branched trees can be obtained in terms of the characteristic polynomials of unbranched trees and branches by pruning the tree at the joints. The unbranched trees can also be broken down further until a tree containing just two vertices is obtained. This effectively reduces the order of the secular determinant of the initial tree to determinants of orders at most equal to the number of vertices in the branch containing the largest number of vertices. An illustrative example of an NMR graph is given for which the 22 x 22 secular determinant is reduced to determinants of orders at most 4 x 4 in just the second step of the algorithm. The tree pruning algorithm can be applied even to trees with no symmetry elements, and such a factoring can still be achieved. The methods developed here can be elegantly used to determine whether two trees are cospectral and to construct cospectral trees.

5. Refining discordant gene trees.

Science.gov (United States)

Górecki, Pawel; Eulenstein, Oliver

2014-01-01

Evolutionary studies are complicated by discordance between gene trees and the species tree in which they evolved. Dealing with discordant trees often relies on comparison costs between gene and species trees, including the well-established Robinson-Foulds, gene duplication, and deep coalescence costs. While these costs have provided credible results for binary rooted gene trees, corresponding cost definitions for non-binary unrooted gene trees, which are frequently occurring in practice, are challenged by biological realism. We propose a natural extension of the well-established costs for comparing unrooted and non-binary gene trees with rooted binary species trees using a binary refinement model. For the duplication cost we describe an efficient algorithm that is based on a linear time reduction and also computes an optimal rooted binary refinement of the given gene tree. Finally, we show that similar reductions lead to solutions for computing the deep coalescence and the Robinson-Foulds costs. Our binary refinement of Robinson-Foulds, gene duplication, and deep coalescence costs for unrooted and non-binary gene trees together with the linear time reductions provided here for computing these costs significantly extends the range of trees that can be incorporated into approaches dealing with discordance.
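
The Robinson-Foulds cost mentioned above has a compact formulation once each unrooted tree is represented by its set of non-trivial leaf bipartitions (that representation, and the consistent naming of each split by one of its sides, are assumptions of this sketch; the paper's binary-refinement algorithm is not reproduced here).

```python
# Minimal Robinson-Foulds sketch: each tree is a set of non-trivial
# splits (one side of each bipartition, named consistently), and the
# cost is the size of the symmetric difference of the two sets.

def rf_distance(splits1, splits2):
    """Number of splits present in exactly one of the two trees."""
    return len(splits1 ^ splits2)

# Two unrooted trees on leaves {a,b,c,d}: ((a,b),(c,d)) vs ((a,c),(b,d)).
# Each has a single internal edge, hence a single non-trivial split.
t1 = {frozenset({"a", "b"})}
t2 = {frozenset({"a", "c"})}
print(rf_distance(t1, t2))  # 2: each tree's split is missing from the other
```

For non-binary or unrooted gene trees, the paper's contribution is to define this cost via an optimal binary refinement rather than on the raw split sets.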

6. Logic, probability, and human reasoning.

Science.gov (United States)

Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P

2015-04-01

This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction. Copyright © 2015 Elsevier Ltd. All rights reserved.

7. Free probability and random matrices

CERN Document Server

Mingo, James A

2017-01-01

This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

8. Introduction to probability and measure

CERN Document Server

Parthasarathy, K R

2005-01-01

According to a remark attributed to Mark Kac 'Probability Theory is a measure theory with a soul'. This book with its choice of proofs, remarks, examples and exercises has been prepared taking both these aesthetic and practical aspects into account.

9. Joint probabilities and quantum cognition

International Nuclear Information System (INIS)

Acacio de Barros, J.

2012-01-01

In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

10. Joint probabilities and quantum cognition

Energy Technology Data Exchange (ETDEWEB)

Acacio de Barros, J. [Liberal Studies, 1600 Holloway Ave., San Francisco State University, San Francisco, CA 94132 (United States)

2012-12-18

In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

11. Default probabilities and default correlations

OpenAIRE

Erlenmaier, Ulrich; Gersbach, Hans

2001-01-01

Starting from the Merton framework for firm defaults, we provide the analytics and robustness of the relationship between default probabilities and default correlations. We show that loans with higher default probabilities will not only have higher variances but also higher correlations between loans. As a consequence, portfolio standard deviation can increase substantially when loan default probabilities rise. This result has two important implications. First, relative prices of loans with different default probabili...

12. The Probabilities of Unique Events

Science.gov (United States)

2012-08-30

Washington, DC, USA. Max Lotstein and Phil Johnson-Laird, Department of Psychology, Princeton University, Princeton, NJ, USA, August 30th, 2012. [Fragmentary abstract:] ...social justice and also participated in antinuclear demonstrations. The participants ranked the probability that Linda is a feminist bank teller as... retorted that such a flagrant violation of the probability calculus was a result of a psychological experiment that obscured the rationality of the...

13. Probability Matching, Fast and Slow

OpenAIRE

Koehler, Derek J.; James, Greta

2014-01-01

A prominent point of contention among researchers regarding the interpretation of probability-matching behavior is whether it represents a cognitively sophisticated, adaptive response to the inherent uncertainty of the tasks or settings in which it is observed, or whether instead it represents a fundamental shortcoming in the heuristics that support and guide human decision making. Put crudely, researchers disagree on whether probability matching is "smart" or "dumb." Here, we consider eviden...

14. LocTree3 prediction of localization

DEFF Research Database (Denmark)

Goldberg, T.; Hecht, M.; Hamp, T.

2014-01-01

The prediction of protein sub-cellular localization is an important step toward elucidating protein function. For each query protein sequence, LocTree2 applies machine learning (profile kernel SVM) to predict the native sub-cellular localization in 18 classes for eukaryotes, in six for bacteria a...

15. An intensive tree-ring experience

NARCIS (Netherlands)

Sánchez-Salguero, Raúl; Hevia, Andrea; Camarero, J.J.; Treydte, Kerstin; Frank, Dave; Crivellaro, Alan; Domínguez-Delmás, Marta; Hellman, Lena; Kaczka, Ryszard J.; Kaye, Margot; Akhmetzyanov, Linar; Ashiq, Muhammad Waseem; Bhuyan, Upasana; Bondarenko, Olesia; Camisón, Álvaro; Camps, Sien; García, Vicenta Constante; Vaz, Filipe Costa; Gavrila, Ionela G.; Gulbranson, Erik; Huhtamaa, Heli; Janecka, Karolina; Jeffers, Darren; Jochner, Matthias; Koutecký, Tomáš; Lamrani-Alaoui, Mostafa; Lebreton-Anberrée, Julie; Seijo, María Martín; Matulewski, Pawel; Metslaid, Sandra; Miron, Sergiu; Morrisey, Robert; Opdebeeck, Jorgen; Ovchinnikov, Svyatoslav; Peters, Richard; Petritan, Any M.; Popkova, Margarita; Rehorkova, Stepanka; Ariza, María O.R.; Sánchez-Miranda, Ángela; Linden, Van der Marjolein; Vannoppen, Astrid; Volařík, Daniel

2017-01-01

The European Dendroecological Fieldweek (EDF) provides an intensive learning experience in tree-ring research that challenges any participant to explore new multidisciplinary dendro-sciences approaches within the context of field and laboratory settings. Here we present the 25th EDF, held in

16. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

Science.gov (United States)

Wang, Jian; Yang, Zhenwei; Kang, Mei

2018-01-01

This paper attempts to apply the fault tree analysis method to the corrective maintenance field of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering structural functions, probability importance, and related measures. The results show that fault tree analysis enables fast fault localization and effective repair of the system. Meanwhile, the fault tree analysis method offers guidance for researching and upgrading the reliability of the system.

17. A quantum probability model of causal reasoning

Directory of Open Access Journals (Sweden)

Jennifer S Trueblood

2012-05-01

Full Text Available People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects, thus proving to be a viable new candidate for modeling human judgment.

18. Information-theoretic methods for estimating complicated probability distributions

CERN Document Server

Zong, Zhi

2006-01-01

Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology proves to be very useful, which has led to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite some fields besides statistics, such as reliability, probabilistic safety analysis (PSA), machine learning, pattern recognition, image processing, neur

19. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

Science.gov (United States)

Willson, Stephen J

2012-01-01

A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
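
The expectation described above reduces, for a single hybrid vertex, to a probability-weighted mix of the two displayed trees. The sketch below illustrates just that definition with invented weights and inheritance probability; the paper's actual contribution, recovering the weights uniquely from these distances, is not reproduced here.

```python
# Tree-average distance for a network with one hybrid vertex that
# inherits from parent A with probability q and from parent B with 1-q.
# The network then displays two trees, and the tree-average distance is
# the expected path length over them (all numbers illustrative).

def tree_average_distance(d_tree_a, d_tree_b, q):
    """Expected leaf-to-leaf distance over the two displayed trees."""
    return q * d_tree_a + (1 - q) * d_tree_b

d_a = 5.0   # path length between leaves x, y in the tree through parent A
d_b = 7.0   # path length in the tree through parent B
q = 0.25    # probability a character is inherited from parent A
print(tree_average_distance(d_a, d_b, q))  # 0.25*5 + 0.75*7 = 6.5
```

With several independent hybrids, the displayed trees multiply and each tree's probability is the product of its chosen inheritance probabilities, but the distance remains the same kind of expectation.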

20. The probable effect of integrated reporting on audit quality

Directory of Open Access Journals (Sweden)

Tamer A. El Nashar

2016-06-01

Full Text Available This paper examines the probable effect of integrated reporting on improving the audit quality of organizations. I relate the hypothesis of this paper to the current trends of protecting the economies, the financial markets and the societies. I predict an improvement of audit quality as a result of an estimated percentage of organizations' reliance on integrated reporting in their accountability perspective. I used a decision tree and a Bayes' theorem approach to predict the probabilities of a significant effect on improving audit quality. The overall result of this paper indicates that if organizations rely on integrated reporting by a significant percentage, a significant improvement in audit quality is also predicted.
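
The decision-tree/Bayes' theorem machinery the abstract refers to can be sketched in a few lines. All probabilities below are invented for illustration; they are not the paper's estimates.

```python
# Decision-tree branches with illustrative probabilities (not the
# paper's figures): adoption of integrated reporting, and audit-quality
# improvement conditional on each branch.
p_adopt = 0.6                 # organizations relying on integrated reporting
p_improve_given_adopt = 0.9   # improvement probability if they rely on it
p_improve_given_not = 0.3     # improvement probability otherwise

# Law of total probability over the two branches of the decision tree:
p_improve = (p_adopt * p_improve_given_adopt
             + (1 - p_adopt) * p_improve_given_not)

# Bayes' theorem: probability an organization relied on integrated
# reporting, given that its audit quality improved.
p_adopt_given_improve = p_adopt * p_improve_given_adopt / p_improve
print(round(p_improve, 3), round(p_adopt_given_improve, 3))
```

The first number is the tree's overall prediction of improved audit quality; the second shows how observing an improvement raises the posterior probability of integrated-reporting adoption above its prior.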

1. Gravity and count probabilities in an expanding universe

Science.gov (United States)

Bouchet, Francois R.; Hernquist, Lars

1992-01-01

The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.

2. Normal probability plots with confidence.

Science.gov (United States)

Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

2015-01-01

Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
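
A rough feel for such intervals can be had by simulation. The sketch below is not the authors' simultaneous construction: it builds only pointwise Monte Carlo bands for each order statistic of a standard normal sample, against which sorted, standardized data could be plotted.

```python
# Pointwise Monte Carlo bands for the order statistics of a standard
# normal sample (a simplified stand-in for the paper's simultaneous
# 1-alpha intervals).
import random

def order_stat_bands(n, alpha=0.05, reps=2000, seed=1):
    """(1 - alpha) pointwise interval for each i-th order statistic of a
    standard normal sample of size n, estimated from `reps` simulations."""
    rng = random.Random(seed)
    sims = [sorted(rng.gauss(0.0, 1.0) for _ in range(n))
            for _ in range(reps)]
    lo, hi = int(reps * alpha / 2), int(reps * (1 - alpha / 2)) - 1
    bands = []
    for i in range(n):
        col = sorted(s[i] for s in sims)
        bands.append((col[lo], col[hi]))
    return bands

bands = order_stat_bands(10)
```

Because these bands are pointwise, the chance that at least one of the n points escapes its interval under normality exceeds alpha; making the joint coverage exactly 1-α is precisely the refinement the paper provides.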

3. A practical method for accurate quantification of large fault trees

International Nuclear Information System (INIS)

Choi, Jong Soo; Cho, Nam Zin

2007-01-01

This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on two example fault trees.
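
The approximations being corrected can be shown on a toy fault tree small enough to quantify exactly. The sketch below is illustrative (it is not the paper's CUTREE/SDP/CFA implementation): it compares the rare-event approximation and the min-cut upper bound against exact enumeration for two minimal cut sets over independent basic events.

```python
# Toy MCS quantification: rare-event approximation vs. min-cut upper
# bound (MCUB) vs. exact enumeration (illustrative probabilities).
from itertools import product

p = {"A": 0.1, "B": 0.2, "C": 0.05}   # independent basic-event probabilities
mcs = [{"A", "B"}, {"C"}]             # minimal cut sets of the toy tree

def cutset_prob(cs):
    out = 1.0
    for e in cs:
        out *= p[e]
    return out

# Rare-event approximation: sum of the cut-set probabilities.
rare = sum(cutset_prob(cs) for cs in mcs)

# Min-cut upper bound: 1 - prod(1 - P(MCS_i)).
mcub = 1.0
for cs in mcs:
    mcub *= 1.0 - cutset_prob(cs)
mcub = 1.0 - mcub

# Exact top-event probability by enumerating all basic-event states.
exact = 0.0
for states in product([False, True], repeat=len(p)):
    s = dict(zip(p, states))
    if any(all(s[e] for e in cs) for cs in mcs):
        w = 1.0
        for e, up in s.items():
            w *= p[e] if up else 1.0 - p[e]
        exact += w
print(exact, mcub, rare)  # exact <= mcub <= rare always holds
```

Here the MCUB coincides with the exact value because the two cut sets share no basic events; with shared events it only bounds the answer from above, and on large trees exhaustive enumeration is infeasible, which is where the paper's Monte Carlo and SDP corrections come in.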

4. Classification tree for the assessment of sedentary lifestyle among hypertensive.

Science.gov (United States)

Castelo Guedes Martins, Larissa; Venícios de Oliveira Lopes, Marcos; Gomes Guedes, Nirla; Paixão de Menezes, Angélica; de Oliveira Farias, Odaleia; Alves Dos Santos, Naftale

2016-04-01

To develop a classification tree of clinical indicators for the correct prediction of the nursing diagnosis "Sedentary lifestyle" (SL) in people with high blood pressure (HTN). A cross-sectional study conducted in an outpatient care center specializing in high blood pressure and diabetes mellitus located in northeastern Brazil. The sample consisted of 285 people between 19 and 59 years old diagnosed with high blood pressure; an interview and physical examination were conducted, obtaining socio-demographic information, related factors, and the signs and symptoms that constituted the defining characteristics for the diagnosis under study. The tree was generated using the CHAID (Chi-square Automatic Interaction Detection) algorithm. The construction of the decision tree made it possible to establish the interactions between clinical indicators that facilitate a probabilistic analysis of multiple situations, quantifying the probability of an individual presenting a sedentary lifestyle. The tree included the clinical indicator Chooses daily routine without exercise as the first node. People with this indicator showed a probability of 0.88 of presenting SL. The second node was composed of the indicator Does not perform physical activity during leisure, with a 0.99 probability of presenting SL when both indicators are present. The predictive capacity of the tree was established at 69.5%. Decision trees help nurses who care for people with HTN in decision-making by assessing the characteristics that increase the probability of the SL nursing diagnosis, optimizing the time for diagnostic inference.
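
Once fitted, such a tree is used at the bedside as a probability lookup along its branches. The sketch below illustrates that reading: the 0.88 and 0.99 values come from the abstract, while the probability for the remaining branch is a made-up placeholder, since the abstract does not report it (and its 0.88 figure is for the first indicator overall rather than for that indicator alone).

```python
# Reading predicted probabilities off a two-node classification tree
# like the one described (0.88 and 0.99 from the abstract; 0.30 is an
# invented placeholder for the unreported branch).

def p_sedentary(chooses_routine_without_exercise, no_leisure_activity):
    """Predicted probability of the 'Sedentary lifestyle' diagnosis."""
    if not chooses_routine_without_exercise:
        return 0.30   # placeholder: branch not reported in the abstract
    if no_leisure_activity:
        return 0.99   # both indicators present
    return 0.88       # first indicator present

print(p_sedentary(True, True))
```

This branch-by-branch reading is what lets a nurse turn two quick assessment questions into a probability for the diagnosis.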

5. Classification tree for the assessment of sedentary lifestyle among hypertensive

Directory of Open Access Journals (Sweden)

Larissa Castelo Guedes Martins

Full Text Available Objective. To develop a classification tree of clinical indicators for the correct prediction of the nursing diagnosis "Sedentary lifestyle" (SL) in people with high blood pressure (HTN). Methods. A cross-sectional study conducted in an outpatient care center specializing in high blood pressure and diabetes mellitus located in northeastern Brazil. The sample consisted of 285 people between 19 and 59 years old diagnosed with high blood pressure; an interview and physical examination were conducted, obtaining socio-demographic information, related factors, and the signs and symptoms that constituted the defining characteristics for the diagnosis under study. The tree was generated using the CHAID (Chi-square Automatic Interaction Detection) algorithm. Results. The construction of the decision tree made it possible to establish the interactions between clinical indicators that facilitate a probabilistic analysis of multiple situations, quantifying the probability of an individual presenting a sedentary lifestyle. The tree included the clinical indicator Chooses daily routine without exercise as the first node. People with this indicator showed a probability of 0.88 of presenting SL. The second node was composed of the indicator Does not perform physical activity during leisure, with a 0.99 probability of presenting SL when both indicators are present. The predictive capacity of the tree was established at 69.5%. Conclusion. Decision trees help nurses who care for people with HTN in decision-making by assessing the characteristics that increase the probability of the SL nursing diagnosis, optimizing the time for diagnostic inference.

6. On algorithm for building of optimal α-decision trees

KAUST Repository

Alkhalid, Abdulaziz

2010-01-01

The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. Adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.
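
The optimization the abstract describes can be illustrated for the depth measure on a tiny decision table. The sketch below computes the exact minimum depth by plain recursion; it is a simplified stand-in, not the paper's dynamic-programming algorithm with adjustable approximation rate (which memoizes subtables and relaxes optimality by α).

```python
# Minimum decision-tree depth for a small, consistent decision table,
# by exhaustive recursion over the splitting attribute (illustrative
# stand-in for the dynamic-programming approach).

rows = (          # toy decision table: (attr0, attr1, attr2, label)
    (0, 0, 1, "a"),
    (0, 1, 0, "b"),
    (1, 0, 0, "b"),
    (1, 1, 1, "a"),
)

def min_depth(table):
    """Smallest depth of a tree that classifies every row of `table`."""
    if len({r[-1] for r in table}) <= 1:
        return 0                      # one label left: terminal node
    best = float("inf")
    for j in range(len(table[0]) - 1):
        branches = {}
        for r in table:
            branches.setdefault(r[j], []).append(r)
        if len(branches) < 2:
            continue                  # attribute j does not split the table
        best = min(best, 1 + max(min_depth(tuple(b))
                                 for b in branches.values()))
    return best

print(min_depth(rows))  # 1: splitting on attr2 separates "a" from "b"
```

Greedy heuristics would not necessarily find the attr2 split; comparing their trees with this optimum is exactly the experiment run in the companion comparison paper.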

7. Comparison of greedy algorithms for α-decision tree construction

KAUST Repository

Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

2011-01-01

A comparison among different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.

8. Disturbance legacies and climate jointly drive tree growth and mortality in an intensively studied boreal forest

Energy Technology Data Exchange (ETDEWEB)

Bond-Lamberty, Benjamin; Rocha, Adrian; Calvin, Katherine V.; Holmes, Bruce; Wang, Chuankuan; Goulden, Michael L.

2014-01-01

How will regional growth and mortality change with even relatively small climate shifts, even independent of catastrophic disturbances? This question is particularly acute for the North American boreal forest, which is carbon-dense and subject to rapid climate change. The goals of this study were to combine dendrochronological sampling, inventory records, and machine-learning algorithms to understand how tree growth and death have changed at one highly studied site (Northern Old Black Spruce, NOBS) in the central Canadian boreal forest. Over the 1999-2012 inventory period, mean DBH increased even as stand density and basal area declined significantly from 41.3 to 37.5 m2 ha-1. Tree mortality averaged 1.4±0.6% yr-1, with most mortality occurring in medium-sized trees. A combined tree ring chronology constructed from 2001, 2004, and 2012 sampling showed several periods of extreme growth depression, with increased mortality lagging depressed growth by ~5 years. Minimum and maximum air temperatures exerted a negative influence on tree growth, while precipitation and climate moisture index had a positive effect; both current- and previous-year data exerted significant effects. Models based on these variables explained 23-44% of the ring-width variability. There have been at least one, and probably two, significant recruitment episodes since stand initiation, and we infer that past climate extremes led to significant NOBS mortality still visible in the current forest structure. These results imply that a combination of successional and demographic processes, along with mortality driven by abiotic factors, continue to affect the stand, with significant implications for our understanding of previous work at NOBS and the sustainable management of regional forests.

9. The valuative tree

CERN Document Server

Favre, Charles

2004-01-01

This volume is devoted to a beautiful object, called the valuative tree and designed as a powerful tool for the study of singularities in two complex dimensions. Its intricate yet manageable structure can be analyzed by both algebraic and geometric means. Many types of singularities, including those of curves, ideals, and plurisubharmonic functions, can be encoded in terms of positive measures on the valuative tree. The construction of these measures uses a natural tree Laplace operator of independent interest.

10. VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS

Directory of Open Access Journals (Sweden)

Smirnov Vladimir Alexandrovich

2012-10-01

Full Text Available The article deals with the probability analysis for a vibration isolation system of high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. The external sources of low-frequency vibrations may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Taking the Gauss distribution into account, the author estimates the probability that the relative displacement of the isolated mass remains lower than the vibration criteria. This problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. According to this probability distribution, the chance of exceeding the vibration criteria for a vibration isolation system is evaluated. Optimal system parameters - damping and natural frequency - are developed, so that the possibility of exceeding the vibration criteria VC-E and VC-D is less than 0.04.
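
The Gaussian exceedance calculation at the core of this analysis is a one-liner. The sketch below uses invented values for the displacement standard deviation and the vibration criterion, not the article's VC-E/VC-D parameters.

```python
# Probability that a zero-mean Gaussian relative displacement exceeds a
# vibration criterion (illustrative parameters, not the article's).
from statistics import NormalDist

sigma = 0.4       # std. dev. of relative displacement, micrometres (invented)
criterion = 1.0   # allowed amplitude, micrometres (invented)

# Two-sided exceedance: P(|X| > criterion) for X ~ N(0, sigma^2).
p_exceed = 2 * (1 - NormalDist(0.0, sigma).cdf(criterion))
print(p_exceed < 0.04)
```

Sweeping this calculation over damping and natural frequency (which determine sigma) is what yields the parameter region where the exceedance stays below the 0.04 target.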

11. Approximation methods in probability theory

CERN Document Server

Čekanavičius, Vydas

2016-01-01

This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

12. Does probability of occurrence relate to population dynamics?

Science.gov (United States)

Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M; Edwards, Thomas C; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E; Zurell, Damaris; Schurr, Frank M

2014-12-01

Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate ( r ) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K ) and population density ( N ) relate to occurrence probability ( P occ ). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, Western US, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with P occ , while N, and for most regions K, was generally positively correlated with P occ . Thus, in temperate forest trees the regions of highest occurrence

14. Fast Image Texture Classification Using Decision Trees

Science.gov (United States)

Thompson, David R.

2011-01-01

Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
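The "integral image" (summed-area table) behind these fast, integer-only features can be sketched as follows: after one pass over the image, any rectangular box sum costs just four table lookups, which is what makes the per-pixel features cheap enough for a decision tree. This is a generic illustration, not the paper's implementation.

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img[0..r-1][0..c-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0..r1-1][c0..c1-1] in O(1): four lookups."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 2, 2))  # → 10 (sum of the whole image)
```

Texture descriptors are then built from differences of such box sums, and the trained decision tree branches on those integer values.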

15. Coded Splitting Tree Protocols

DEFF Research Database (Denmark)

Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

2013-01-01

This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

16. Morocco - Fruit Tree Productivity

Data.gov (United States)

Millennium Challenge Corporation — Date Tree Irrigation Project: The specific objectives of this evaluation are threefold: - Performance evaluation of project activities, like the mid-term evaluation,...

17. Model uncertainty: Probabilities for models?

International Nuclear Information System (INIS)

Winkler, R.L.

1994-01-01

Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising.

18. Knowledge typology for imprecise probabilities.

Energy Technology Data Exchange (ETDEWEB)

Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)

2002-01-01

When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

19. Probability, Statistics, and Stochastic Processes

CERN Document Server

Olofsson, Peter

2011-01-01

A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d

20. Statistical probability tables CALENDF program

International Nuclear Information System (INIS)

Ribon, P.

1989-01-01

The purpose of the probability tables is twofold: to obtain a dense data representation, and to calculate integrals by quadratures. They are mainly used in the USA for Monte Carlo calculations and in the USSR and Europe for self-shielding calculations by the sub-group method. The moment probability tables, in addition to providing a more substantial mathematical basis and calculation methods, are adapted for condensation and mixture calculations, which are the crucial operations for reactor physics specialists. However, their extension is limited by the statistical hypothesis they imply. Efforts are being made to remove this obstacle, at the cost, it must be said, of greater complexity.

1. Probability, statistics, and queueing theory

CERN Document Server

Allen, Arnold O

1990-01-01

This is a textbook on applied probability and statistics with computer science applications for students at the upper undergraduate level. It may also be used as a self study book for the practicing computer science professional. The successful first edition of this book proved extremely useful to students who need to use probability, statistics and queueing theory to solve problems in other fields, such as engineering, physics, operations research, and management science. The book has also been successfully used for courses in queueing theory for operations research students. This second edit

2. Probability and Statistics: 5 Questions

DEFF Research Database (Denmark)

Probability and Statistics: 5 Questions is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in probability and statistics. We hear their views on the fields, aims, scopes, the future direction of research and how their work fits...... in these respects. Interviews with Nick Bingham, Luc Bovens, Terrence L. Fine, Haim Gaifman, Donald Gillies, James Hawthorne, Carl Hoefer, James M. Joyce, Joseph B. Kadane Isaac Levi, D.H. Mellor, Patrick Suppes, Jan von Plato, Carl Wagner, Sandy Zabell...

3. The Studies of Decision Tree in Estimation of Breast Cancer Risk by Using Polymorphism Nucleotide

Directory of Open Access Journals (Sweden)

Frida Seyedmir

2017-07-01

Full Text Available Abstract Introduction: Decision trees are data mining tools used to collect, accurately predict, and sift information from massive amounts of data; they are widely used in computational biology and bioinformatics, where they can be applied to predict diseases, including breast cancer. The use of genomic data, including single nucleotide polymorphisms (SNPs), is a very important factor in predicting the risk of disease. Seven important SNPs among hundreds of thousands of genetic markers were identified as factors associated with breast cancer. The objective of this study is to evaluate the effect of the training data on the prediction error of a decision tree for breast cancer risk using single nucleotide polymorphism genotypes. Methods: The breast cancer risk associated with each SNP was calculated; seven SNPs with different odds ratios associated with breast cancer were considered, and a C4.5 decision tree model was designed and coded in the C# 2013 programming language. In the decision tree created with this code, the four most important associated SNPs were considered. The decision tree error was assessed both for the coded implementation and using WEKA, and the percentage accuracy of the decision tree in predicting breast cancer was calculated. The number of training samples was obtained with systematic sampling. With the coded implementation, two scenarios were evaluated; with WEKA, three scenarios with different data sets and different numbers of training and test samples were evaluated. Results: In both coding scenarios, increasing the training percentage from 66.66 to 86.42 reduced the error from 55.56 to 9.09. Likewise, running WEKA on the three scenarios with different data sets and different numbers of records, increasing the number of records from 81 to 2187 decreased the error rate from 48.15 to 13
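The way per-SNP odds ratios can be combined into a disease-risk probability (the quantity a tree like this predicts) can be sketched as follows. The SNP names, odds ratios, and baseline risk below are invented for illustration and are not the study's values.

```python
# Hypothetical per-SNP odds ratios for carried risk alleles
# (illustrative values, not from the study).
odds_ratios = {"snp1": 1.3, "snp2": 1.1, "snp3": 0.9, "snp4": 1.5}
baseline_prob = 0.12  # assumed population baseline risk

def risk(genotype):
    """Multiply the baseline odds by each SNP's odds ratio raised to
    the number of risk-allele copies (0, 1 or 2), then convert the
    combined odds back to a probability."""
    odds = baseline_prob / (1.0 - baseline_prob)
    for snp, copies in genotype.items():
        odds *= odds_ratios[snp] ** copies
    return odds / (1.0 + odds)

print(round(risk({"snp1": 2, "snp2": 1, "snp3": 0, "snp4": 1}), 3))
```

A decision tree trained on genotype/outcome pairs effectively learns thresholds over such genotype combinations instead of assuming multiplicative odds.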

4. Sparse suffix tree construction in small space

DEFF Research Database (Denmark)

Bille, Philip; Fischer, Johannes; Gørtz, Inge Li

2013-01-01

the correct tree with high probability. We then give a Las-Vegas algorithm which also uses O(b) space and runs in the same time bounds with high probability when b = O(√n). Furthermore, additional tradeoffs between the space usage and the construction time for the Monte-Carlo algorithm are given......., which may be of independent interest, that allows to efficiently answer b longest common prefix queries on suffixes of T, using only O(b) space. We expect that this technique will prove useful in many other applications in which space usage is a concern. Our first solution is Monte-Carlo and outputs...

5. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

Science.gov (United States)

Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

2016-06-01

Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

6. Dynamic SEP event probability forecasts

Science.gov (United States)

Kahler, S. W.; Ling, A.

2015-10-01

The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
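One way to decrease an event probability as time passes without an onset is a Bayesian update against an assumed delay-time distribution. The sketch below uses an exponential delay-time distribution with an invented mean; it illustrates the idea of a dynamic forecast Pd, not the NOAA-data-derived algorithm of the paper.

```python
import math

def dynamic_sep_probability(p0, hours_since_flare, tau=12.0):
    """Update an initial SEP event probability p0 given that no onset
    has been observed by hours_since_flare, assuming onset delays are
    exponential with mean tau hours (tau is an illustrative assumption,
    not the paper's fitted delay distribution)."""
    surv = math.exp(-hours_since_flare / tau)  # P(onset later | event)
    return p0 * surv / (p0 * surv + (1.0 - p0))

p0 = 0.4  # assumed initial forecast after a large flare
for h in (0, 6, 24):
    print(h, round(dynamic_sep_probability(p0, h), 3))
```

The forecast starts at p0 and decays monotonically toward zero as the window in which an onset would have been expected elapses.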

7. Conditional Independence in Applied Probability.

Science.gov (United States)

Pfeiffer, Paul E.

This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…

8. Stretching Probability Explorations with Geoboards

Science.gov (United States)

Wheeler, Ann; Champion, Joe

2016-01-01

Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…

9. GPS: Geometry, Probability, and Statistics

Science.gov (United States)

Field, Mike

2012-01-01

It might be said that for most occupations there is now less of a need for mathematics than there was say fifty years ago. But, the author argues, geometry, probability, and statistics constitute essential knowledge for everyone. Maybe not the geometry of Euclid, but certainly geometrical ways of thinking that might enable us to describe the world…

10. Swedish earthquakes and acceleration probabilities

International Nuclear Information System (INIS)

Slunga, R.

1979-03-01

A method to assign probabilities to ground accelerations for Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave-propagation computations, the relation between the seismic strength of the earthquake, focal depth, distance, and ground acceleration is calculated. We found that most Swedish earthquakes are shallow; the largest earthquake of the area, the 1904 earthquake 100 km south of Oslo, is an exception and probably had a focal depth exceeding 25 km. For the nuclear power plant sites, an annual probability of 10⁻⁵ has been proposed as the level of interest. This probability gives ground accelerations in the range of 5-20 % g for the sites. This acceleration is for a free bedrock site; for consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we estimate the epicentral acceleration of this earthquake at 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with existing distant instrumental data. (author)

11. DECOFF Probabilities of Failed Operations

DEFF Research Database (Denmark)

Gintautas, Tomas

2015-01-01

A statistical procedure of estimation of Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. Also safety factor influence is investigated. DECOFF statistical method is benchmarked against standard Alpha-factor...

12. Probability and statistics: A reminder

International Nuclear Information System (INIS)

Clement, B.

2013-01-01

The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It builds on the introduction from 'data analysis in experimental sciences' given in [1]. (authors)

13. Nash equilibrium with lower probabilities

DEFF Research Database (Denmark)

Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

1998-01-01

We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible...

14. On probability-possibility transformations

Science.gov (United States)

Klir, George J.; Parviz, Behzad

1992-01-01

Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
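One widely cited transformation of this kind, in the style of Dubois and Prade, can be sketched as follows; whether it is among the specific transformations compared in this record is an assumption, and the tie-handling here is simplified.

```python
def prob_to_poss(probs):
    """Dubois-Prade style probability-to-possibility transformation:
    with probabilities sorted in decreasing order p1 >= p2 >= ...,
    the possibility at rank k is the tail sum p_k + p_{k+1} + ...
    (a sketch; ties are broken arbitrarily rather than grouped)."""
    idx = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    poss = [0.0] * len(probs)
    tail = sum(probs)
    for i in idx:
        poss[i] = tail   # tail sum from this rank onward
        tail -= probs[i]
    return poss

print(prob_to_poss([0.5, 0.3, 0.2]))
```

The result is normalized (the most probable element gets possibility 1) and consistent (each possibility dominates the corresponding probability), two properties such comparisons typically check.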

15. XI Symposium on Probability and Stochastic Processes

CERN Document Server

Pardo, Juan; Rivero, Víctor; Bravo, Gerónimo

2015-01-01

This volume features lecture notes and a collection of contributed articles from the XI Symposium on Probability and Stochastic Processes, held at CIMAT Mexico in September 2013. Since the symposium was part of the activities organized in Mexico to celebrate the International Year of Statistics, the program included topics from the interface between statistics and stochastic processes. The book starts with notes from the mini-course given by Louigi Addario-Berry with an accessible description of some features of the multiplicative coalescent and its connection with random graphs and minimum spanning trees. It includes a number of exercises and a section on unanswered questions. Further contributions provide the reader with a broad perspective on the state-of-the art of active areas of research. Contributions by: Louigi Addario-Berry Octavio Arizmendi Fabrice Baudoin Jochen Blath Loïc Chaumont J. Armando Domínguez-Molina Bjarki Eldon Shui Feng Tulio Gaxiola Adrián González Casanova Evgueni Gordienko Daniel...

16. Are trees long-lived?

Science.gov (United States)

Kevin T. Smith

2009-01-01

Trees and tree care can capture the best of people's motivations and intentions. Trees are living memorials that help communities heal at sites of national tragedy, such as Oklahoma City and the World Trade Center. We mark the places of important historical events by the trees that grew nearby even if the original tree, such as the Charter Oak in Connecticut or...

17. Early evolution without a tree of life.

Science.gov (United States)

Martin, William F

2011-06-30

Life is a chemical reaction. Three major transitions in early evolution are considered without recourse to a tree of life. The origin of prokaryotes required a steady supply of energy and electrons, probably in the form of molecular hydrogen stemming from serpentinization. Microbial genome evolution is not a treelike process because of lateral gene transfer and the endosymbiotic origins of organelles. The lack of true intermediates in the prokaryote-to-eukaryote transition has a bioenergetic cause.

18. Diffusion on a disordered Cayley tree

International Nuclear Information System (INIS)

Brezini, A.; Olivier, G.

1983-08-01

The model proposed recently by Brezini to calculate the average probability and the average size of the localization domain for an electron being localized at a given site in a disordered Cayley tree, is extended to the case of a uniform distribution for site energies. Thus, numerical results are presented in the limit of weak disorder and particular attention is paid to the states near the mobility edge. (author)

19. Spacetime quantum probabilities II: Relativized descriptions and Popperian propensities

Science.gov (United States)

Mugur-Schächter, M.

1992-02-01

In the first part of this work(1) we have explicated the spacetime structure of the probabilistic organization of quantum mechanics. We have shown that each quantum mechanical state, in consequence of the spacetime characteristics of the epistemic operations by which the observer produces the state to be studied and the processes of qualification of these, brings in a tree-like spacetime structure, a "quantum mechanical probability tree," that transgresses the theory of probabilities as it now stands. In this second part we develop the general implications of these results. Starting from the lowest level of cognitive action and creating an appropriate symbolism, we construct a "relativizing epistemic syntax," a "general method of relativized conceptualization" where, systematically, each description is explicitly referred to the epistemic operations by which the observer produces the entity to be described and obtains qualifications of it. The method generates a typology of increasingly complex relativized descriptions where the question of realism admits of a particularly clear pronouncement. Inside this typology the epistemic processes that lie, universally, at the basis of any conceptualization reveal a tree-like spacetime structure. It appears in particular that the spacetime structure of the relativized representation of a probabilistic description, which transgresses the theory of probabilities as it now stands, is the general mould of which the quantum mechanical probability trees are only particular realizations. This entails a clear definition of the descriptional status of quantum mechanics, while the recognition of the universal cognitive content of the quantum mechanical formalism opens up vistas toward mathematical developments of the relativizing epistemic syntax. The relativized representation of a probabilistic description leads with inner necessity to a "morphic" interpretation of probabilities that can be regarded as a formalized and

20. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

Science.gov (United States)

Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

2015-03-01

A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances, since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method that is commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble-of-trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have been shown recently to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICUs). We show that these ensemble-of-trees methods outperform logistic regression in predicting mortality among babies treated in NICUs, and provide a superior method of risk adjustment compared to logistic regression.
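Whatever model produces the per-patient outcome probabilities (logistic regression, a random forest, or BART), a hospital's performance is then summarized by comparing observed to expected outcomes. A minimal sketch with simulated probabilities standing in for a fitted model's predictions (no real model or data):

```python
import random

random.seed(0)

# Stand-in for model output: per-patient mortality probabilities for one
# hospital's case mix, here just simulated numbers.
expected_p = [random.uniform(0.01, 0.3) for _ in range(500)]
# Simulated observed outcomes consistent with those probabilities.
observed = [1 if random.random() < p else 0 for p in expected_p]

# Risk-adjusted comparison: observed deaths vs expected deaths.
o = sum(observed)
e = sum(expected_p)
oe_ratio = o / e
print(round(oe_ratio, 2))  # near 1.0 when the model is well calibrated
```

An O/E ratio well above 1 flags worse-than-expected outcomes given the case mix; the paper's point is that better-calibrated probabilities from tree ensembles make this ratio a fairer yardstick.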

1. A framework for sensitivity analysis of decision trees.

Science.gov (United States)

Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

2018-01-01

In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
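The stability check described in (1) can be sketched on a toy two-strategy decision tree: perturb the success probability pessimistically and see at what perturbation the expected-value-maximizing choice flips. All numbers are invented for illustration.

```python
def expected_value(p_success, payoff_success, payoff_failure):
    """Expected value of a chance node with two branches."""
    return p_success * payoff_success + (1.0 - p_success) * payoff_failure

# Hypothetical tree: a risky venture vs a certain payoff.
safe = 50.0
p, win, lose = 0.6, 100.0, 0.0

# Pessimistic perturbations of the success probability: the nominal
# optimum ("risky") is robust only while its EV stays above the safe payoff.
for eps in (0.0, 0.05, 0.15):
    ev_risky = expected_value(p - eps, win, lose)
    best = "risky" if ev_risky > safe else "safe"
    print(eps, round(ev_risky, 1), best)
```

Here the risky strategy survives a 0.05 perturbation but not 0.15, which is exactly the kind of robustness boundary the proposed framework reports; the paper's method additionally handles interrelated probabilities, which this sketch does not.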

2. Fragmentation of random trees

International Nuclear Information System (INIS)

Kalay, Z; Ben-Naim, E

2015-01-01

We study fragmentation of a random recursive tree into a forest by repeated removal of nodes. The initial tree consists of N nodes and is generated by sequential addition of nodes, with each new node attaching to a randomly selected existing node. As nodes are removed from the tree, one at a time, the tree dissolves into an ensemble of separate trees, namely, a forest. We study statistical properties of trees and nodes in this heterogeneous forest, and find that the fraction of remaining nodes m characterizes the system in the limit N→∞. We obtain analytically the size density ϕ_s of trees of size s. The size density has a power-law tail ϕ_s ∼ s^(−α) with exponent α = 1 + 1/m. Therefore, the tail becomes steeper as further nodes are removed, and the fragmentation process is unusual in that the exponent α increases continuously with time. We also extend our analysis to the case where nodes are added as well as removed, and obtain the asymptotic size density for growing trees. (paper)
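
The setup is straightforward to simulate. The sketch below (illustrative parameters, not the authors' code) grows a random recursive tree, removes half the nodes so that m = 0.5, and collects the sizes of the resulting forest; the tail of this size distribution should steepen as m decreases, consistent with α = 1 + 1/m.

```python
import random

def random_recursive_tree(n, seed=0):
    # Node i attaches to a uniformly random earlier node
    rng = random.Random(seed)
    parent = {0: None}
    for i in range(1, n):
        parent[i] = rng.randrange(i)
    return parent

def fragment(parent, removed):
    # Removing nodes splits the tree into a forest; each surviving node
    # belongs to the component rooted at its highest surviving ancestor
    alive = set(parent) - set(removed)
    def find(x):
        while parent[x] is not None and parent[x] in alive:
            x = parent[x]
        return x
    sizes = {}
    for v in alive:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

tree = random_recursive_tree(1000, seed=42)
removed = random.Random(1).sample(range(1000), 500)  # m = 0.5
sizes = fragment(tree, removed)
print(len(sizes), sum(sizes))  # number of trees in the forest, surviving nodes
```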

3. The tree BVOC index

Science.gov (United States)

J.R. Simpson; E.G. McPherson

2011-01-01

Urban trees can produce a number of benefits, among them improved air quality. Biogenic volatile organic compounds (BVOCs) emitted by some species are ozone precursors. Modifying future tree planting to favor lower-emitting species can reduce these emissions and aid air management districts in meeting federally mandated emissions reductions for these compounds. Changes...

4. Tree growth visualization

Science.gov (United States)

L. Linsen; B.J. Karis; E.G. McPherson; B. Hamann

2005-01-01

In computer graphics, models describing the fractal branching structure of trees typically exploit the modularity of tree structures. The models are based on local production rules, which are applied iteratively and simultaneously to create a complex branching system. The objective is to generate three-dimensional scenes of often many realistic-looking and non-...

5. Flowering Trees

Indian Academy of Sciences (India)

Adansonia digitata L. ( The Baobab Tree) of Bombacaceae is a tree with swollen trunk that attains a dia. of 10m. Leaves are digitately compound with leaflets up to 18cm. long. Flowers are large, solitary, waxy white, and open at dusk. They open in 30 seconds and are bat pollinated. Stamens are many. Fruit is about 30 cm ...

6. Fault tree graphics

International Nuclear Information System (INIS)

Bass, L.; Wynholds, H.W.; Porterfield, W.R.

1975-01-01

Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling.

7. Tree biology and dendrochemistry

Science.gov (United States)

Kevin T. Smith; Walter C. Shortle

1996-01-01

Dendrochemistry, the interpretation of elemental analysis of dated tree rings, can provide a temporal record of environmental change. Using the dendrochemical record requires an understanding of tree biology. In this review, we pose four questions concerning assumptions that underlie recent dendrochemical research: 1) Does the chemical composition of the wood directly...

8. Individual tree control

Science.gov (United States)

Harvey A. Holt

1989-01-01

Controlling individual unwanted trees in forest stands is a readily accepted method for improving the value of future harvests. The practice is especially important in mixed hardwood forests where species differ considerably in value and within species individual trees differ in quality. Individual stem control is a mechanical or chemical weeding operation that...

9. Trees and Climate Change

OpenAIRE

Dettenmaier, Megan; Kuhns, Michael; Unger, Bethany; McAvoy, Darren

2017-01-01

This fact sheet describes the complex relationship between forests and climate change based on current research. It explains ways that trees can mitigate some of the risks associated with climate change. It details the impacts that forests are having on the changing climate and discusses specific ways that trees can be used to reduce or counter carbon emissions directly and indirectly.

10. Structural Equation Model Trees

Science.gov (United States)

Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

2013-01-01

In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

11. Matching Subsequences in Trees

DEFF Research Database (Denmark)

Bille, Philip; Gørtz, Inge Li

2009-01-01

Given two rooted, labeled trees P and T the tree path subsequence problem is to determine which paths in P are subsequences of which paths in T. Here a path begins at the root and ends at a leaf. In this paper we propose this problem as a useful query primitive for XML data, and provide new...

12. Environmental tritium in trees

International Nuclear Information System (INIS)

Brown, R.M.

1979-01-01

The distribution of environmental tritium in the free water and organically bound hydrogen of trees growing in the vicinity of the Chalk River Nuclear Laboratories (CRNL) has been studied. The regional dispersal of HTO in the atmosphere has been observed by surveying the tritium content of leaf moisture. Measurement of the distribution of organically bound tritium in the wood of tree ring sequences has given information on past concentrations of HTO taken up by trees growing in the CRNL Liquid Waste Disposal Area. For samples at background environmental levels, cellulose separation and analysis was done. The pattern of bomb tritium in precipitation of 1955-68 was observed to be preserved in the organically bound tritium of a tree ring sequence. Reactor tritium was discernible in a tree growing at a distance of 10 km from CRNL. These techniques provide convenient means of monitoring dispersal of HTO from nuclear facilities. (author)

13. Generalising tree traversals and tree transformations to DAGs

DEFF Research Database (Denmark)

Bahr, Patrick; Axelsson, Emil

2017-01-01

We present a recursion scheme based on attribute grammars that can be transparently applied to trees and acyclic graphs. Our recursion scheme allows the programmer to implement a tree traversal or a tree transformation and then apply it to compact graph representations of trees instead, and we develop the complementing theory with a number of examples.

14. WAMCUT, a computer code for fault tree evaluation. Final report

International Nuclear Information System (INIS)

Erdmann, R.C.

1978-06-01

WAMCUT is a code in the WAM family which produces the minimum cut sets (MCS) for a given fault tree. The MCS are useful as they provide a qualitative evaluation of a system, as well as providing a means of determining the probability distribution function for the top of the tree. The program is very efficient and will produce all the MCS in a very short computer time span. 22 figures, 4 tables
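
The notion of minimal cut sets can be illustrated on a toy fault tree. This is a generic textbook expansion, not WAMCUT's algorithm: OR gates union their children's cut sets, AND gates combine them, and non-minimal sets are discarded.

```python
def cut_sets(gate, tree):
    # Expand a fault tree into cut sets: OR gates union the children's
    # cut sets; AND gates take cross-products of them
    if gate not in tree:                 # basic event
        return [frozenset([gate])]
    op, children = tree[gate]
    child_sets = [cut_sets(c, tree) for c in children]
    if op == "OR":
        return [cs for group in child_sets for cs in group]
    sets = [frozenset()]
    for group in child_sets:             # AND: one cut set per child
        sets = [a | b for a in sets for b in group]
    return sets

def minimize(sets):
    # Keep only minimal cut sets (discard proper supersets)
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical tree: TOP fails on A, or on B and C together,
# plus a redundant (A and B) branch that minimization removes
tree = {"TOP": ("OR", ["A", "G1", "G2"]),
        "G1": ("AND", ["B", "C"]),
        "G2": ("AND", ["A", "B"])}
print(sorted(sorted(s) for s in minimize(cut_sets("TOP", tree))))
```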

15. Elastic K-means using posterior probability.

Science.gov (United States)

Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

2017-01-01

The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with a soft assignment capability, where each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
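
The soft-assignment idea can be sketched in one dimension. This is a generic Gaussian-responsibility step with an assumed inverse-temperature parameter, not the EKM formulation itself: each point belongs to every cluster with a posterior weight rather than to a single nearest center.

```python
import math

def soft_assign(points, centers, beta=2.0):
    # Posterior (responsibility) of each cluster for each point:
    # p(k | x) proportional to exp(-beta * (x - c_k)^2)
    out = []
    for x in points:
        w = [math.exp(-beta * (x - c) ** 2) for c in centers]
        z = sum(w)
        out.append([wi / z for wi in w])
    return out

points = [0.0, 0.4, 1.0]
centers = [0.0, 1.0]
post = soft_assign(points, centers)
for x, row in zip(points, post):
    print(x, [round(p, 3) for p in row])
```

A point near a center is assigned to it almost fully, while a point between centers splits its membership, which is the "elastic" behavior hard K-means cannot express.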

16. Tuned by experience: How orientation probability modulates early perceptual processing.

Science.gov (United States)

Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt

2017-09-01

Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive 'P300' component which might be related to either surprise or decision-making. However, the early 'C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. Copyright © 2017 Elsevier Ltd. All rights reserved.

17. Large deviations and idempotent probability

CERN Document Server

Puhalskii, Anatolii

2001-01-01

In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

18. Probability biases as Bayesian inference

Directory of Open Access Journals (Sweden)

Andre; C. R. Martins

2006-11-01

Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability in the way that has been observed.

19. Probability matching and strategy availability.

Science.gov (United States)

Koehler, Derek J; James, Greta

2010-09-01

Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
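
The cost of matching is easy to quantify. For a binary outcome that occurs with probability p, maximizing predicts the likelier outcome every time, while matching predicts each outcome with its own frequency; the short comparison below (standard expected-accuracy formulas, not the authors' experimental data) shows why maximizing dominates whenever p differs from 0.5.

```python
def accuracy_maximizing(p):
    # Always predict the more likely outcome
    return max(p, 1 - p)

def accuracy_matching(p):
    # Predict each outcome with its own probability:
    # correct with probability p*p + (1-p)*(1-p)
    return p * p + (1 - p) * (1 - p)

for p in (0.5, 0.7, 0.9):
    print(p, accuracy_maximizing(p), round(accuracy_matching(p), 2))
```

At p = 0.7, matching is correct only 58% of the time versus 70% for maximizing, yet matching is the strategy that "comes readily to mind."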

20. Probability as a Physical Motive

Directory of Open Access Journals (Sweden)

Peter Martin

2007-04-01

Full Text Available Recent theoretical progress in nonequilibrium thermodynamics, linking the physical principle of Maximum Entropy Production ("MEP") to the information-theoretical "MaxEnt" principle of scientific inference, together with conjectures from theoretical physics that there may be no fundamental causal laws but only probabilities for physical processes, and from evolutionary theory that biological systems expand "the adjacent possible" as rapidly as possible, all lend credence to the proposition that probability should be recognized as a fundamental physical motive. It is further proposed that spatial order and temporal order are two aspects of the same thing, and that this is the essence of the second law of thermodynamics.

1. Teaching Probability with the Support of the R Statistical Software

Science.gov (United States)

dos Santos Ferreira, Robson; Kataoka, Verônica Yumi; Karrer, Monica

2014-01-01

The objective of this paper is to discuss aspects of high school students' learning of probability in a context where they are supported by the statistical software R. We report on the application of a teaching experiment, constructed using the perspective of Gal's probabilistic literacy and Papert's constructionism. The results show improvement…

2. Data analysis & probability task sheets : grades pk-2

CERN Document Server

Cook, Tanya

2009-01-01

For grades PK-2, our Common Core State Standards-based resource meets the data analysis & probability concepts addressed by the NCTM standards and encourages your students to learn and review the concepts in unique ways. Each task sheet is organized around a central problem taken from real-life experiences of the students.

3. Structure theorems for game trees.

Science.gov (United States)

Govindan, Srihari; Wilson, Robert

2002-06-25

Kohlberg and Mertens [Kohlberg, E. & Mertens, J. (1986) Econometrica 54, 1003-1039] proved that the graph of the Nash equilibrium correspondence is homeomorphic to its domain when the domain is the space of payoffs in normal-form games. A counterexample disproves the analog for the equilibrium outcome correspondence over the space of payoffs in extensive-form games, but we prove an analog when the space of behavior strategies is perturbed so that every path in the game tree has nonzero probability. Without such perturbations, the graph is the closure of the union of a finite collection of its subsets, each diffeomorphic to a corresponding path-connected open subset of the space of payoffs. As an application, we construct an algorithm for computing equilibria of an extensive-form game with a perturbed strategy space, and thus approximate equilibria of the unperturbed game.

4. Seeing the Wood for the Trees: Applying the dual-memory system model to investigate expert teachers' observational skills in natural ecological learning environments

Science.gov (United States)

Stolpe, Karin; Björklund, Lars

2012-01-01

This study aims to investigate two expert ecology teachers' ability to attend to essential details in a complex environment during a field excursion, as well as how they teach this ability to their students. In applying a cognitive dual-memory system model for learning, we also suggest a rationale for their behaviour. The model implies two separate memory systems: the implicit, non-conscious, non-declarative system and the explicit, conscious, declarative system. This model provided the starting point for the research design; however, it was revised in light of the empirical findings, supported by new theoretical insights. The teachers were video and audio recorded during their excursion and interviewed in a stimulated recall setting afterwards. The data were qualitatively analysed using the dual-memory system model. The results show that the teachers used holistic pattern recognition in their own identification of natural objects. However, the teachers' main strategy for teaching this ability is to give the students explicit rules or specific characteristics. According to the dual-memory system model, holistic pattern recognition is processed in the implicit memory system as a non-conscious match with earlier experienced situations. We suggest that this implicit pattern matching serves as an explanation for the teachers' ecological and teaching observational skills. Another function of the implicit memory system is its ability to control automatic behaviour and non-conscious decision-making. The teachers offer the students firsthand sensory experiences, which provide a prerequisite for the formation of implicit memories that provide a foundation for expertise.

5. Logic, Probability, and Human Reasoning

Science.gov (United States)

2015-01-01

accordingly suggest a way to integrate probability and deduction. The nature of deductive reasoning: to be rational is to be able to make deductions... [3–6] and they underlie mathematics, science, and technology [7–10]. Plato claimed that emotions upset reasoning. However, individuals in the grip... fundamental to human rationality. So, if counterexamples to its principal predictions occur, the theory will at least explain its own refutation

6. Probability Measures on Groups IX

CERN Document Server

1989-01-01

The latest in this series of Oberwolfach conferences focussed on the interplay between structural probability theory and various other areas of pure and applied mathematics such as Tauberian theory, infinite-dimensional rotation groups, central limit theorems, harmonizable processes, and spherical data. Thus it was attended by mathematicians whose research interests range from number theory to quantum physics in conjunction with structural properties of probabilistic phenomena. This volume contains 5 survey articles submitted on special invitation and 25 original research papers.

7. Probability matching and strategy availability

OpenAIRE

J. Koehler, Derek; Koehler, Derek J.; James, Greta

2010-01-01

Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought...

8. Learning

Directory of Open Access Journals (Sweden)

Mohsen Laabidi

2014-01-01

Full Text Available Nowadays, learning technologies have transformed educational systems, following the impressive progress of Information and Communication Technologies (ICT). Furthermore, when these technologies are available, affordable and accessible, they represent more than a transformation for people with disabilities. They represent real opportunities, with access to an inclusive education, and help to overcome the obstacles encountered in classical educational systems. In this paper, we will cover basic concepts of e-accessibility, universal design and assistive technologies, with a special focus on accessible e-learning systems. Then, we will present recent research works conducted in our research Laboratory LaTICE toward the development of an accessible online learning environment for persons with disabilities, from the design and specification step to the implementation. We will present, in particular, the accessible version “MoodleAcc+” of the well known e-learning platform Moodle, as well as newly elaborated generic models and a range of tools for authoring and evaluating accessible educational content.

9. Greek paideia and terms of probability

Directory of Open Access Journals (Sweden)

Fernando Leon Parada

2016-06-01

Full Text Available This paper addresses three aspects of the conceptual framework for a doctoral dissertation research in process in the field of Mathematics Education, in particular, in the subfield of teaching and learning basic concepts of Probability Theory at the College level. It intends to contrast, sustain and elucidate the central statement that the meanings of some of these basic terms used in Probability Theory were not formally defined by any specific theory but relate to primordial ideas developed in Western culture from Ancient Greek myths. The first aspect deals with the notion of uncertainty, with which Greek thinkers described several archaic gods and goddesses of Destiny, like Parcas and Moiras, often personified in the goddess Tyche—Fortuna for the Romans—, as regarded in Werner Jaeger’s “Paideia”. The second aspect treats the idea of hazard from two different approaches: the first approach deals with hazard, denoted by Plato with the already demythologized term ‘tyche’, from the viewpoint of innate knowledge, as Jaeger points out. The second approach deals with hazard from a perspective that could be called “phenomenological”, from which Aristotle attempted to articulate uncertainty with a discourse based on the hypothesis of causality. The term ‘causal’ was opposed both to ‘casual’ and to ‘spontaneous’ (as used in the expression “spontaneous generation”), attributing uncertainty to ignorance of the future, thus respecting causal flow. The third aspect treated in the paper refers to some definitions and etymologies of some other modern words that have become technical terms in current Probability Theory, confirming the above-mentioned main proposition of this paper.

10. A recursive algorithm for trees and forests

OpenAIRE

Guo, Song; Guo, Victor J. W.

2017-01-01

Trees or rooted trees have been extensively studied in the literature. A forest is a set of trees or rooted trees. Here we give recurrence relations between the number of some kinds of rooted forests with $k$ roots and those with $k+1$ roots on $\{1,2,\ldots,n\}$. Classical formulas for counting various trees such as rooted trees, bipartite trees, tripartite trees, plane trees, $k$-ary plane trees, and $k$-edge-colored trees follow immediately from our recurrence relations.
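
The kind of closed-form count such recurrences yield can be checked by brute force for small n. The sketch below verifies the classical formula C(n, k) · k · n^(n−k−1) for the number of rooted forests with k trees on n labeled nodes, for k < n (the paper's own recurrences and notation may differ).

```python
from itertools import product
from math import comb

def count_rooted_forests(n, k):
    # Brute force: each node is either a root (parent None) or points to a
    # parent node; keep the acyclic assignments with exactly k roots
    total = 0
    for parents in product([None] + list(range(n)), repeat=n):
        if sum(p is None for p in parents) != k:
            continue
        acyclic = True
        for v in range(n):
            seen, x = set(), v
            while parents[x] is not None:
                if x in seen:          # walked into a cycle
                    acyclic = False
                    break
                seen.add(x)
                x = parents[x]
            if not acyclic:
                break
        total += acyclic
    return total

n = 4
for k in range(1, n):
    closed_form = comb(n, k) * k * n ** (n - k - 1)
    assert count_rooted_forests(n, k) == closed_form
    print(k, closed_form)
```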

11. A Lakatosian Encounter with Probability

Science.gov (United States)

Chick, Helen

2010-01-01

There is much to be learned and pondered by reading "Proofs and Refutations" by Imre Lakatos (Lakatos, 1976). It highlights the importance of mathematical definitions, and how definitions evolve to capture the essence of the object they are defining. It also provides an exhilarating encounter with the ups and downs of the mathematical reasoning…

12. Phylogenetic trees in bioinformatics

Energy Technology Data Exchange (ETDEWEB)

Burr, Tom L [Los Alamos National Laboratory

2008-01-01

Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals, involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
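
The "huge number of possible trees" grows as the double factorial (2n−5)!! for unrooted bifurcating topologies on n taxa, a standard result that a few lines make concrete:

```python
def num_unrooted_trees(n):
    # (2n-5)!! = 1 * 3 * 5 * ... * (2n-5): the number of distinct
    # unrooted bifurcating tree topologies for n labeled taxa (n >= 3)
    count = 1
    for k in range(3, n + 1):
        count *= 2 * k - 5
    return count

for n in (4, 5, 10, 20):
    print(n, num_unrooted_trees(n))
```

Already at 10 taxa there are over two million topologies, which is why exhaustive search is infeasible and heuristic tree search dominates in practice.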

13. Skewed Binary Search Trees

DEFF Research Database (Denmark)

Brodal, Gerth Stølting; Moruz, Gabriel

2006-01-01

It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor over the running time for a search is the number of cache faults performed, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, we in this paper study the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). In this paper we present an experimental study of various memory layouts of static skewed binary search trees, where each...
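
A sketch of the skew parameter's effect (illustrative only, not the paper's benchmark code): each subtree places a fixed fraction of its keys on the left, so searches toward the favored side become cheaper at the expense of the other side.

```python
def build_skewed(keys, ratio):
    # Recursively place a fraction `ratio` of the keys in the left subtree
    # (ratio = 0.5 gives a perfectly balanced tree)
    if not keys:
        return None
    i = min(int(len(keys) * ratio), len(keys) - 1)
    return (keys[i], build_skewed(keys[:i], ratio),
            build_skewed(keys[i + 1:], ratio))

def search_cost(tree, key, depth=1):
    # Number of node comparisons to find a key known to be present
    k, left, right = tree
    if key == k:
        return depth
    return search_cost(left if key < k else right, key, depth + 1)

keys = list(range(1023))
balanced = build_skewed(keys, 0.5)
skewed = build_skewed(keys, 0.2)
# Searching the leftmost key is cheaper in the left-skewed tree,
# while rightward searches pay for it
print(search_cost(balanced, 0), search_cost(skewed, 0))
```

This asymmetry is useful when, as the abstract notes, left and right branches have different costs (e.g. under branch prediction), or when access frequencies are skewed toward one side of the key range.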

14. The gravity apple tree

International Nuclear Information System (INIS)

Aldama, Mariana Espinosa

2015-01-01

The gravity apple tree is a genealogical tree of the gravitation theories developed during the past century. The graphic representation is full of information such as guides in heuristic principles, names of main proponents, dates and references for original articles (See under Supplementary Data for the graphic representation). This visual presentation and its particular classification allows a quick synthetic view for a plurality of theories, many of them well validated in the Solar System domain. Its diachronic structure organizes information in a shape of a tree following similarities through a formal concept analysis. It can be used for educational purposes or as a tool for philosophical discussion. (paper)

15. Visualizing phylogenetic tree landscapes.

Science.gov (United States)

Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A

2017-02-02

Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once in two or three dimensions the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in two and three dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in two and three dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperform the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D

16. [Biometric bases: basic concepts of probability calculation].

Science.gov (United States)

Dinya, E

1998-04-26

The author gives an outline of the basic concepts of probability theory. The basics of event algebra, the definition of probability, the classical probability model, and the random variable are presented.

17. PL-MOD: a computer code for modular fault tree analysis and evaluation

International Nuclear Information System (INIS)

Olmos, J.; Wolf, L.

1978-01-01

The computer code PL-MOD has been developed to implement the modular methodology for fault tree analysis. In the modular approach, fault tree structures are characterized by recursively relating the top tree event to all basic event inputs through a set of equations, each defining an independent modular event for the tree. The advantages of tree modularization are that it is a more compact representation than the minimal cut-set description and that it is well suited for fault tree quantification because of its recursive form. In its present version, PL-MOD modularizes fault trees and evaluates top and intermediate event failure probabilities, as well as basic component and modular event importance measures, in a very efficient way. Thus, its execution time for the modularization and quantification of a PWR High Pressure Injection System reduced fault tree was 25 times faster than that necessary to generate its equivalent minimal cut-set description using the computer code MOCUS
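
The recursive quantification that modularization enables can be sketched for independent basic events (a generic gate-by-gate evaluation on a toy tree, not PL-MOD itself): AND gates multiply probabilities, and OR gates take one minus the product of complements.

```python
def top_probability(gate, tree, p):
    # Recursive evaluation assuming independent basic events:
    # AND multiplies probabilities; OR uses 1 - prod(1 - p_i)
    if gate in p:                         # basic event
        return p[gate]
    op, children = tree[gate]
    probs = [top_probability(c, tree, p) for c in children]
    result = 1.0
    if op == "AND":
        for q in probs:
            result *= q
        return result
    for q in probs:                       # OR gate
        result *= 1.0 - q
    return 1.0 - result

# Hypothetical tree: TOP = (A AND B) OR C, with assumed failure probabilities
tree = {"TOP": ("OR", ["G1", "C"]), "G1": ("AND", ["A", "B"])}
p = {"A": 0.1, "B": 0.2, "C": 0.05}
print(round(top_probability("TOP", tree, p), 4))
```

Because the evaluation is one pass over the recursive structure, it scales with tree size rather than with the (potentially much larger) number of minimal cut sets, which is the efficiency argument made for the modular representation.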

18. Probability for Weather and Climate

Science.gov (United States)

Smith, L. A.

2013-12-01

Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design to support of

19. Thinning of Tree Stands in the Arctic Zone of Krasnoyarsk Territory With Different Ecological Conditions

Directory of Open Access Journals (Sweden)

V. I. Polyakov

2014-10-01

Full Text Available In 2001, six permanent sample plots (PSP) were established in forest stands differing in the degree of damage caused by pollution from the Norilsk industrial region. In 2004 a second forest inventory was carried out at these PSP to evaluate the impact of pollutants on changes in stand condition. During both inventories the vigor state of every tree was visually categorized according to the 6-point scale of the «Forest health regulations in the Russian Federation». The transition of trees into fall was also recorded. Two types of Markov models simulating the thinning process in tree stands under different ecological conditions were developed: 1) based on the estimated probability of tree survival over three years; 2) based on the estimated matrix of probabilities of change in vigor state category over the same period. Tree mortality from 1979, when the industrial complex «Nadezda» was set into operation, was reconstructed on the basis of the estimated probability that dead standing trees persist over the three years observed. The situation was forecast up to 2030. Using logistic regression, the probability of tree survival was estimated as a function of four factors: degree of tree damage by pollutants, tree species, stand location in relief, and tree age. The results make it possible to separate the impact of pollutants on stand resistance from other factors. The percentage of tree fall attributable to pollution was determined, and a scale of SO2 resistance of the tree species was constructed: birch, spruce, larch. Larch showed the highest percentage of fall due to pollution.
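The second model type described above projects a stand through a matrix of vigor-state transition probabilities. A minimal sketch of that idea, with a simplified four-state scale and purely illustrative transition probabilities (not the fitted values from the study):

```python
import numpy as np

# Hypothetical 3-year transition matrix over simplified vigor states:
# 0 = healthy, 1 = weakened, 2 = dying, 3 = dead/fallen (absorbing).
# The study uses the 6-category forest-health scale; these numbers are
# illustrative only.
P = np.array([
    [0.85, 0.10, 0.03, 0.02],
    [0.00, 0.70, 0.20, 0.10],
    [0.00, 0.00, 0.55, 0.45],
    [0.00, 0.00, 0.00, 1.00],
])

def project(state, steps):
    """Project a stand-composition vector forward by `steps` 3-year periods."""
    for _ in range(steps):
        state = state @ P
    return state

# Start from a fully healthy stand and project nine 3-year periods (~27 years).
initial = np.array([1.0, 0.0, 0.0, 0.0])
final = project(initial, 9)
print(final)     # fraction of trees in each vigor state
print(final[3])  # cumulative mortality
```

Repeated multiplication by the transition matrix is what lets such a model both reconstruct past mortality and forecast forward to a target year such as 2030.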

20. Probability, Statistics, and Stochastic Processes

CERN Document Server

Olofsson, Peter

2012-01-01

This book provides a unique and balanced approach to probability, statistics, and stochastic processes.   Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area.  The Second Edition features new coverage of analysis of variance (ANOVA), consistency and efficiency of estimators, asymptotic theory for maximum likelihood estimators, empirical distribution function and the Kolmogorov-Smirnov test, general linear models, multiple comparisons, Markov chain Monte Carlo (MCMC), Brownian motion, martingales, and

1. Sensitivity analysis using probability bounding

International Nuclear Information System (INIS)

Ferson, Scott; Troy Tucker, W.

2006-01-01

Probability bounds analysis (PBA) provides analysts with a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values.
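The "pinching" idea can be illustrated with plain interval bounds, the degenerate case of a p-box. The toy risk expression and all numbers below are illustrative assumptions, not from the paper:

```python
import itertools

# Minimal sketch of interval propagation and "pinching" in the spirit of PBA.
# Toy model: risk = p_fail * exposure, with both inputs known only as intervals.

def interval_product(a, b):
    """Bound the product of two intervals by checking all endpoint pairs."""
    candidates = [x * y for x, y in itertools.product(a, b)]
    return (min(candidates), max(candidates))

p_fail = (0.01, 0.05)      # interval for failure probability (assumed)
exposure = (100.0, 300.0)  # interval for exposure (assumed)

baseline = interval_product(p_fail, exposure)

# Sensitivity analysis by pinching: replace one input with a point value
# and measure how much the output interval narrows.
pinched = interval_product((0.03, 0.03), exposure)

width = lambda iv: iv[1] - iv[0]
reduction = 1 - width(pinched) / width(baseline)
print(baseline, pinched, round(reduction, 3))  # (1.0, 15.0) (3.0, 9.0) 0.571
```

The fractional reduction in output width is the sensitivity signal: pinning down `p_fail` here removes about 57% of the output uncertainty.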

2. Tree-growth analyses to estimate tree species' drought tolerance

NARCIS (Netherlands)

Eilmann, B.; Rigling, A.

2012-01-01

Climate change is challenging forestry management and practices. Among other things, tree species with the ability to cope with more extreme climate conditions have to be identified. However, while environmental factors may severely limit tree growth or even cause tree death, assessing a tree

3. Big trees, old trees, and growth factor tables

Science.gov (United States)

Kevin T. Smith

2018-01-01

The potential for a tree to reach a great size and to live a long life frequently captures the public's imagination. Sometimes the desire to know the age of an impressively large tree is simple curiosity. For others, the date-of-tree establishment can make a big difference for management, particularly for trees at historic sites or those mentioned in property...

4. A bijection between phylogenetic trees and plane oriented recursive trees

OpenAIRE

Prodinger, Helmut

2017-01-01

Phylogenetic trees are binary nonplanar trees with labelled leaves, and plane oriented recursive trees are planar trees with an increasing labelling. Both families are enumerated by double factorials. A bijection is constructed, using the respective representations as 2-partitions and trapezoidal words.
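The double-factorial counts mentioned above are easy to compute. As a sketch (using the standard count for rooted binary phylogenetic trees with n labelled leaves, (2n−3)!!, as the shared sequence; the paper's exact parameterization may differ):

```python
def double_factorial(n):
    """Odd double factorial: n!! = n * (n-2) * ... * 1 (1 for n <= 0)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def num_phylogenetic_trees(leaves):
    """Rooted binary phylogenetic trees with `leaves` labelled leaves: (2n-3)!!."""
    return double_factorial(2 * leaves - 3)

# Both families in the bijection are counted by the same double factorials.
for n in range(2, 7):
    print(n, num_phylogenetic_trees(n))  # 1, 3, 15, 105, 945
```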

5. A Suffix Tree Or Not a Suffix Tree?

DEFF Research Database (Denmark)

Starikovskaya, Tatiana; Vildhøj, Hjalte Wedel

2015-01-01

In this paper we study the structure of suffix trees. Given an unlabeled tree r on n nodes and suffix links of its internal nodes, we ask the question “Is r a suffix tree?”, i.e., is there a string S whose suffix tree has the same topological structure as r? We place no restrictions on S, in part...

6. Probabilistic Properties of Rectilinear Steiner Minimal Trees

Directory of Open Access Journals (Sweden)

V. N. Salnikov

2015-01-01

Full Text Available This work concerns the properties of Steiner minimal trees in the Manhattan plane in the context of introducing a probability measure. The problem matters because exact algorithms for the Steiner problem are computationally expensive (the problem is NP-hard), while the solution, especially for a large number of points to be connected, has many practical applications. The work therefore considers the possibility of ranking the possible topologies of minimal trees by the probability of their occurrence. To this end, known facts about the structural properties of minimal trees in the chosen metric are analyzed for their usefulness to the problem in question. For a small number of boundary (fixed) vertices, the paper offers a way to introduce a probability measure as a corollary of a proved theorem about structural properties of minimal trees. This work continues earlier activity on the problem of searching for minimal fillings and opens the door to a more general (and harder) task. The stated method demonstrates that the final result can be reached analytically, which suggests its applicability to the case of a larger number of boundary vertices (probably with the use of computer engineering). The introduced definition of an essential Steiner point considerably restricts the ambiguity of the initial problem's solution and, at the same time, allows comparison of this approach with more classical work in the field. The paper also lists the main barriers that prevent classical approaches from being used to introduce a probability measure. In prospect, the application areas of the described method are expected to widen both in terms of system size (the number of boundary vertices) and in terms of other metric spaces (the Euclidean case is of especial interest). The main interest is to find the classes of topologies with significantly
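For the smallest nontrivial case of three boundary vertices, the rectilinear Steiner minimal tree has a closed form: a single Steiner point at the coordinate-wise median, with total length equal to the half-perimeter of the bounding box. A minimal sketch of that known special case (the example points are illustrative):

```python
# Rectilinear (manhattan) Steiner minimal tree for exactly three terminals:
# the Steiner point is the coordinate-wise median of the terminals.

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def steiner_point_3(points):
    """Coordinate-wise median of three points."""
    xs = sorted(p[0] for p in points)
    ys = sorted(p[1] for p in points)
    return (xs[1], ys[1])

def smt_length_3(points):
    s = steiner_point_3(points)
    return sum(manhattan(s, p) for p in points)

terminals = [(0, 0), (4, 1), (2, 5)]
print(steiner_point_3(terminals))  # (2, 1)
print(smt_length_3(terminals))     # 9 = (4-0) + (5-0), the bbox half-perimeter
```

For larger terminal sets no such closed form exists, which is exactly why ranking candidate topologies probabilistically is attractive.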

7. NLCD 2001 - Tree Canopy

Data.gov (United States)

Minnesota Department of Natural Resources — The National Land Cover Database 2001 tree canopy layer for Minnesota (mapping zones 39-42, 50-51) was produced through a cooperative project conducted by the...

8. Trees for future forests

DEFF Research Database (Denmark)

Lobo, Albin

Climate change creates new challenges in forest management. The increase in temperature may in the long run be beneficial for forests at northern latitudes, but the high rate at which climate change is predicted to proceed will make adaptation difficult because trees are long-living sessile...... organisms. The aim of the present thesis is therefore to explore the genetic resilience and phenotypic plasticity mechanisms that allow trees to adapt and evolve with changing climates. The thesis focuses on the abiotic factors associated with climate change, especially raised temperatures and lack...... age of these tree species and the uncertainty around the pace and effect of climate, it remains an open question whether the native populations can respond fast enough. Phenotypic plasticity through epigenetic regulation of spring phenology is found to be present in a tree species, which might act...

9. Value tree analysis

International Nuclear Information System (INIS)

Keeney, R.; Renn, O.; Winterfeldt, D. von; Kotte, U.

1985-01-01

What are the targets and criteria on which national energy policy should be based? What priorities should be set, and how can different social interests be matched? To answer these questions, a new instrument of decision theory is presented which has been applied with good results to controversial political issues in the USA. The new technique is known under the name of value tree analysis. Members of important West German organisations (BDI, VDI, RWE, the Catholic and Protestant Church, Deutscher Naturschutzring, and ecological research institutions) were asked about the goals of their organisations. These goals were then ordered systematically and arranged in a hierarchical tree structure. The value trees of different groups can be combined into a catalogue of social criteria of acceptability and policy assessment. The authors describe the philosophy and methodology of value tree analysis and give an outline of its application in the development of a socially acceptable energy policy. (orig.) [de

10. Lectures on probability and statistics

International Nuclear Information System (INIS)

Yost, G.P.

1984-09-01

These notes are based on a set of statistics lectures delivered at Imperial College to the first-year postgraduate students in High Energy Physics. They are designed for the professional experimental scientist. We begin with the fundamentals of probability theory, in which one makes statements about the set of possible outcomes of an experiment, based upon a complete a priori understanding of the experiment. For example, in a roll of a set of (fair) dice, one understands a priori that any given side of each die is equally likely to turn up. From that, we can calculate the probability of any specified outcome. We finish with the inverse problem, statistics. Here, one begins with a set of actual data (e.g., the outcomes of a number of rolls of the dice), and attempts to make inferences about the state of nature which gave those data (e.g., the likelihood of seeing any given side of any given die turn up). This is a much more difficult problem, of course, and one's solutions often turn out to be unsatisfactory in one respect or another.
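The forward (a priori) problem for dice is small enough to enumerate exactly. A minimal sketch for the sum of two fair dice:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# A priori probabilities for the sum of two fair dice: enumerate the 36
# equally likely outcomes and count how often each sum occurs.
outcomes = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {s: Fraction(n, 36) for s, n in outcomes.items()}

print(probs[7])   # 1/6, the most likely sum
print(probs[2])   # 1/36
```

The inverse (statistical) problem runs the other way: given observed rolls, infer whether the dice were in fact fair.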

11. Multiscale singularity trees

DEFF Research Database (Denmark)

Somchaipeng, Kerawit; Sporring, Jon; Johansen, Peter

2007-01-01

We propose MultiScale Singularity Trees (MSSTs) as a structure to represent images, and we propose an algorithm for image comparison based on comparing MSSTs. The algorithm is tested on 3 public image databases and compared to 2 state-of-theart methods. We conclude that the computational complexity...... of our algorithm only allows for the comparison of small trees, and that the results of our method are comparable with state-of-the-art using much fewer parameters for image representation....

12. Type extension trees

DEFF Research Database (Denmark)

Jaeger, Manfred

2006-01-01

We introduce type extension trees as a formal representation language for complex combinatorial features of relational data. Based on a very simple syntax this language provides a unified framework for expressing features as diverse as embedded subgraphs on the one hand, and marginal counts...... of attribute values on the other. We show by various examples how many existing relational data mining techniques can be expressed as the problem of constructing a type extension tree and a discriminant function....

13. Computer aided fault tree synthesis

International Nuclear Information System (INIS)

Poucet, A.

1983-01-01

Nuclear as well as non-nuclear organisations have shown, during the past few years, a growing interest in the field of reliability analysis. This calls for the development of powerful, state-of-the-art methods and computer codes for performing such analysis on complex systems. In this report an interactive, computer-aided approach is discussed, based on the well-known fault tree technique. The time-consuming and difficult task of manually constructing a system model (one or more fault trees) is replaced by an efficient interactive procedure in which the flexibility and the learning process inherent to the manual approach are combined with the modelling accuracy and speed of the fully automatic approach. The method presented is based upon the use of a library containing component models. The possibility of setting up a standard library of models of general use and the link with a data collection system are discussed. The method has been implemented in the CAFTS-SALP software package, which is described briefly in the report.
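Once a fault tree has been synthesized, its top-event probability can be evaluated from the gate structure. A minimal sketch of such an evaluator, assuming independent basic events; the tree and probabilities below are illustrative, not the CAFTS-SALP component library:

```python
# Minimal fault-tree evaluator (AND/OR gates, independent basic events).

def evaluate(node, p):
    kind = node[0]
    if kind == "event":
        return p[node[1]]
    probs = [evaluate(child, p) for child in node[1]]
    if kind == "and":          # all children must fail
        out = 1.0
        for q in probs:
            out *= q
        return out
    if kind == "or":           # P(any) = 1 - prod(1 - P_i) under independence
        out = 1.0
        for q in probs:
            out *= 1.0 - q
        return 1.0 - out
    raise ValueError(f"unknown gate {kind!r}")

# TOP = (pump fails AND backup fails) OR control failure  (hypothetical)
tree = ("or", [("and", [("event", "pump"), ("event", "backup")]),
               ("event", "control")])
p = {"pump": 0.01, "backup": 0.05, "control": 0.001}
print(evaluate(tree, p))  # 0.0014995
```

A component-model library in this scheme would simply map component types to reusable subtrees of this form.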

14. Tree felling 2014

CERN Multimedia

2014-01-01

With a view to creating new landscapes and making its population of trees safer and healthier, this winter CERN will complete the tree-felling campaign started in 2010.   Tree felling will take place between 15 and 22 November on the Swiss part of the Meyrin site. This work is being carried out above all for safety reasons. The trees to be cut down are at risk of falling as they are too old and too tall to withstand the wind. In addition, the roots of poplar trees are very powerful and spread widely, potentially damaging underground networks, pavements and roadways. Compensatory tree planting campaigns will take place in the future, subject to the availability of funding, with the aim of creating coherent landscapes while also respecting the functional constraints of the site. These matters are being considered in close collaboration with the Geneva nature and countryside directorate (Direction générale de la nature et du paysage, DGNP). GS-SE Group

15. Benefit-based tree valuation

Science.gov (United States)

E.G. McPherson

2007-01-01

Benefit-based tree valuation provides alternative estimates of the fair and reasonable value of trees while illustrating the relative contribution of different benefit types. This study compared estimates of tree value obtained using cost- and benefit-based approaches. The cost-based approach used the Council of Landscape and Tree Appraisers trunk formula method, and...

16. Attack Trees with Sequential Conjunction

NARCIS (Netherlands)

Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

2015-01-01

We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
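One common way to give such trees a quantitative semantics is an attribute domain. As a sketch, take minimal attack time: OR picks the fastest option, AND runs children in parallel (max), and SAND forces sequential execution (sum). The tree and timings below are illustrative assumptions, not an example from the paper:

```python
# Minimal-attack-time evaluation of a SAND attack tree.
# or   -> min of children (attacker picks the fastest option)
# and  -> max of children (sub-attacks run in parallel)
# sand -> sum of children (sub-attacks must run in sequence)

def attack_time(node):
    kind, payload = node
    if kind == "leaf":
        return payload
    times = [attack_time(c) for c in payload]
    if kind == "or":
        return min(times)
    if kind == "and":
        return max(times)
    if kind == "sand":
        return sum(times)
    raise ValueError(f"unknown operator {kind!r}")

# Hypothetical scenario: pick the lock (60 min), or steal a badge and THEN
# clone it (sequential: 10 + 30) while scouting the entrance in parallel (25).
tree = ("or", [
    ("leaf", 60),
    ("and", [
        ("sand", [("leaf", 10), ("leaf", 30)]),
        ("leaf", 25),
    ]),
])
print(attack_time(tree))  # 40
```

The SAND operator is what makes the 10-then-30 ordering matter; with a plain AND the same leaves would evaluate to 30.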

17. The probability and the management of human error

International Nuclear Information System (INIS)

Dufey, R.B.; Saull, J.W.

2004-01-01

Embedded within modern technological systems, human error is the largest, indeed dominant, contributor to accident causation. Its consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution to overall system failure, as it is generally not separable or predictable for actual events. We also need a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with, and derived from, the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best-estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error-state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences, while having a finite minimum rate: λ = 5×10⁻⁵ + (1/ε − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
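The learning-curve equation quoted in this abstract is straightforward to evaluate numerically. A sketch, taking the formula as printed (λ = 5×10⁻⁵ + (1/ε − 5×10⁻⁵) exp(−3ε), with ε the accumulated experience in the units defined by the authors):

```python
import math

def human_error_rate(eps):
    """Error rate lambda as a function of accumulated experience eps,
    per the equation quoted in the abstract."""
    return 5e-5 + (1.0 / eps - 5e-5) * math.exp(-3.0 * eps)

# The rate falls steeply with early experience and approaches the
# finite minimum of 5e-5 as experience accumulates.
for eps in (0.1, 1.0, 10.0):
    print(eps, human_error_rate(eps))
```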

18. Undergraduate Students’ Initial Ability in Understanding Phylogenetic Tree

Science.gov (United States)

Sa'adah, S.; Hidayat, T.; Sudargo, Fransisca

2017-04-01

The phylogenetic tree is a visual representation that depicts a hypothesis about the evolutionary relationships among taxa. Evolutionary experts use this representation to evaluate the evidence for evolution. The phylogenetic tree is now used across many disciplines in biology. Consequently, learning about phylogenetic trees has become an important part of biological education and an interesting area of biology education research. The skill of understanding and reasoning about phylogenetic trees (called tree thinking) is important for biology students. However, research has shown that many students have difficulty interpreting, constructing, and comparing phylogenetic trees, and hold misconceptions about them. Students are often not taught how to reason about the evolutionary relationships depicted in the diagram, nor are they provided with information about the underlying theory and process of phylogenetics. This study aims to investigate the initial ability of undergraduate students to understand and reason about phylogenetic trees. The research method is descriptive. Students were given multiple-choice questions and an essay representing the elements of tree thinking, and the percentage of correct answers was computed for each item; each student also completed a questionnaire. The results showed that the undergraduate students' initial ability to understand and reason about phylogenetic trees is low. Many students were unable to answer questions about the phylogenetic tree: only 19% answered correctly on the indicator of evaluating the evolutionary relationships among taxa, 25% on applying the concept of a clade, 17% on determining character evolution, and only a few could construct a phylogenetic tree.

19. Wind-Induced Reconfigurations in Flexible Branched Trees

Science.gov (United States)

Ojo, Oluwafemi; Shoele, Kourosh

2017-11-01

Wind-induced stresses are the major mechanical cause of failure in trees. The branching mechanism has an important effect on the stress distribution and stability of a tree in the wind. Eloy (PRL, 2011) showed that Leonardo da Vinci's original observation, which states that the total cross section of branches is conserved across branching nodes, is the best configuration for resisting wind-induced fracture in rigid trees. However, predicting the fracture risk and pattern of a tree also requires accounting for its reconfiguration capabilities and how it mitigates large wind-induced stresses. In this study, by developing an efficient numerical simulation of flexible branched trees, we explore the role of tree flexibility in optimal branching. Our results show that the probability of a tree breaking at any point depends on both the cross-section changes at the branching nodes and the level of tree flexibility. We find that the branching mechanism based on Leonardo da Vinci's observation leads to a uniform stress distribution over a wide range of flexibilities, but the pattern changes for more flexible systems.
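Da Vinci's observation is an area-conservation rule: the parent's cross-sectional area equals the sum of the children's, i.e. d_parent² = Σ d_child² for circular cross sections. A minimal sketch of the rule with an illustrative diameter (the study's trees are not these numbers):

```python
# Leonardo da Vinci's rule: cross-sectional area is conserved across a
# branching node, d_parent**2 == sum(d_child**2) for circular branches.

def davinci_child_diameter(parent_d, n_children):
    """Diameter of each of n equal children under area conservation."""
    return parent_d / n_children ** 0.5

parent = 0.4                                  # hypothetical trunk diameter, m
child = davinci_child_diameter(parent, 2)     # two equal daughter branches
print(child)                                  # parent / sqrt(2)
```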

20. Use of fault and decision tree analyses to protect against industrial sabotage

International Nuclear Information System (INIS)

Fullwood, R.R.; Erdmann, R.C.

1975-01-01

Fault tree and decision tree analyses provide systematic bases for the evaluation of safety systems and procedures. Heuristically, this paper shows applications of these methods to industrial sabotage analysis at a reprocessing plant. Fault trees are constructed by ''leak path'' analysis, with completeness ensured through path inventory. The escape fault tree is readily developed by this method; using the reciprocal character of the trees, the attack fault tree is then constructed, and after construction the events on the fault tree are corrected for their non-reciprocal character. The fault trees are solved algebraically, and the protection afforded is ranked by the number of barriers that must be penetrated. No attempt is made to assess barrier penetration probabilities or penetration time durations. Event trees are useful for dynamic plant-protection analysis through their time-sequencing character. To illustrate their usefulness, a simple attack scenario is devised and analyzed with an event tree. Two saboteur success paths and 21 failure paths are found. This example clearly shows the usefulness of event trees for concisely presenting the time sequencing of key decision points. However, event trees have the disadvantage of being scenario-dependent, and therefore require a separate event tree for each scenario.
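Enumerating event-tree paths and separating attacker-success paths from failure paths can be sketched mechanically. The barriers and outcomes below are illustrative placeholders, not the scenario analyzed in the paper (which yielded 2 success and 21 failure paths):

```python
from itertools import product

# Generic event-tree path enumeration over sequential barriers.
# Each barrier has outcomes; some count as "success" from the
# saboteur's perspective. Barriers and outcomes are hypothetical.
barriers = {
    "fence": ["defeated", "alarm"],
    "guard": ["evaded", "detected"],
    "vault": ["opened", "blocked"],
}
success_outcomes = {"defeated", "evaded", "opened"}

paths = list(product(*barriers.values()))
escape = [p for p in paths if all(o in success_outcomes for o in p)]
print(len(paths), len(escape))  # 8 total paths, 1 saboteur success path
```

A full event-tree analysis would additionally prune branches once a failure terminates the scenario, which is why real trees have fewer paths than the raw cross-product.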