WorldWideScience

Sample records for probabilistic-based genetic assignment

  1. RNA-PAIRS: RNA probabilistic assignment of imino resonance shifts

    International Nuclear Information System (INIS)

    Bahrami, Arash; Clos, Lawrence J.; Markley, John L.; Butcher, Samuel E.; Eghbalnia, Hamid R.

    2012-01-01

    The significant biological role of RNA has further highlighted the need for improving the accuracy, efficiency, and reach of methods for investigating RNA structure and function. Nuclear magnetic resonance (NMR) spectroscopy is vital to furthering the goals of RNA structural biology because of its distinctive capabilities. However, the dispersion pattern in the NMR spectra of RNA makes automated resonance assignment, a key step in NMR investigation of biomolecules, remarkably challenging. Herein we present RNA Probabilistic Assignment of Imino Resonance Shifts (RNA-PAIRS), a method for the automated assignment of RNA imino resonances with synchronized verification and correction of predicted secondary structure. RNA-PAIRS represents an advance in modeling the assignment paradigm because it seeds the probabilistic network for assignment with experimental NMR data and predicted RNA secondary structure, simultaneously and from the start. Subsequently, RNA-PAIRS sets in motion a dynamic network that reverberates between predictions and experimental evidence in order to reconcile and rectify resonance assignments and secondary structure information. The procedure is halted when assignments and base-pairings are deemed to be most consistent with observed crosspeaks. The current implementation of RNA-PAIRS uses an initial peak list derived from proton-nitrogen heteronuclear multiple quantum correlation (¹H–¹⁵N 2D HMQC) and proton-proton nuclear Overhauser enhancement spectroscopy (¹H–¹H 2D NOESY) experiments. We have evaluated the performance of RNA-PAIRS by using it to analyze NMR datasets from 26 previously studied RNAs, including a 111-nucleotide complex. For moderately sized RNA molecules, and over a range of comparatively complex structural motifs, the average assignment accuracy exceeded 90%, while the average base-pair prediction accuracy exceeded 93%. RNA-PAIRS yielded accurate assignments and base pairings consistent with imino resonances for a
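
    As a rough illustration of the probabilistic-matching idea in this record (and not the actual RNA-PAIRS network), the Python sketch below scores candidate assignments of imino HMQC peaks to base-paired residues with Gaussian likelihoods; the shift statistics and peak values are invented for the example.

      # Minimal sketch: Gaussian-likelihood matching of imino peaks to residue types.
      import itertools, math

      # assumed mean/sd of (1H, 15N) imino shifts in ppm, per residue type
      PRIORS = {"G": ((12.8, 0.7), (147.5, 1.5)),   # G N1-H1
                "U": ((13.5, 0.8), (160.0, 2.0))}   # U N3-H3

      def gauss(x, mu, sd):
          return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

      def likelihood(peak, restype):
          (mu_h, sd_h), (mu_n, sd_n) = PRIORS[restype]
          return gauss(peak[0], mu_h, sd_h) * gauss(peak[1], mu_n, sd_n)

      def best_assignment(peaks, residues):
          # brute-force one-to-one mappings; fine for a handful of imino peaks
          best, best_score = None, 0.0
          for perm in itertools.permutations(residues, len(peaks)):
              score = math.prod(likelihood(p, r) for p, r in zip(peaks, perm))
              if score > best_score:
                  best, best_score = list(zip(peaks, perm)), score
          return best, best_score

      peaks = [(12.9, 147.1), (13.8, 161.2)]        # (1H, 15N) in ppm
      print(best_assignment(peaks, ["G", "U", "G"]))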

  2. RNA-PAIRS: RNA probabilistic assignment of imino resonance shifts

    Energy Technology Data Exchange (ETDEWEB)

    Bahrami, Arash; Clos, Lawrence J.; Markley, John L.; Butcher, Samuel E. [National Magnetic Resonance Facility at Madison (United States)]; Eghbalnia, Hamid R., E-mail: eghbalhd@uc.edu [University of Cincinnati, Department of Molecular and Cellular Physiology (United States)]

    2012-04-15

    The significant biological role of RNA has further highlighted the need for improving the accuracy, efficiency, and reach of methods for investigating RNA structure and function. Nuclear magnetic resonance (NMR) spectroscopy is vital to furthering the goals of RNA structural biology because of its distinctive capabilities. However, the dispersion pattern in the NMR spectra of RNA makes automated resonance assignment, a key step in NMR investigation of biomolecules, remarkably challenging. Herein we present RNA Probabilistic Assignment of Imino Resonance Shifts (RNA-PAIRS), a method for the automated assignment of RNA imino resonances with synchronized verification and correction of predicted secondary structure. RNA-PAIRS represents an advance in modeling the assignment paradigm because it seeds the probabilistic network for assignment with experimental NMR data and predicted RNA secondary structure, simultaneously and from the start. Subsequently, RNA-PAIRS sets in motion a dynamic network that reverberates between predictions and experimental evidence in order to reconcile and rectify resonance assignments and secondary structure information. The procedure is halted when assignments and base-pairings are deemed to be most consistent with observed crosspeaks. The current implementation of RNA-PAIRS uses an initial peak list derived from proton-nitrogen heteronuclear multiple quantum correlation (¹H–¹⁵N 2D HMQC) and proton-proton nuclear Overhauser enhancement spectroscopy (¹H–¹H 2D NOESY) experiments. We have evaluated the performance of RNA-PAIRS by using it to analyze NMR datasets from 26 previously studied RNAs, including a 111-nucleotide complex. For moderately sized RNA molecules, and over a range of comparatively complex structural motifs, the average assignment accuracy exceeded 90%, while the average base-pair prediction accuracy exceeded 93%. RNA-PAIRS yielded accurate assignments and base pairings consistent with imino

  3. Probabilistic validation of protein NMR chemical shift assignments

    International Nuclear Information System (INIS)

    Dashti, Hesam; Tonelli, Marco; Lee, Woonghee; Westler, William M.; Cornilescu, Gabriel; Ulrich, Eldon L.; Markley, John L.

    2016-01-01

    Data validation plays an important role in ensuring the reliability and reproducibility of studies. NMR investigations of the functional properties, dynamics, chemical kinetics, and structures of proteins depend critically on the correctness of chemical shift assignments. We present a novel probabilistic method named ARECA for validating chemical shift assignments that relies on nuclear Overhauser effect (NOE) data. ARECA has been evaluated through its application to 26 case studies and has been shown to be complementary to, and usually more reliable than, approaches based on chemical shift databases. ARECA is available online at http://areca.nmrfam.wisc.edu/

  4. Probabilistic validation of protein NMR chemical shift assignments

    Energy Technology Data Exchange (ETDEWEB)

    Dashti, Hesam [University of Wisconsin-Madison, Graduate Program in Biophysics, Biochemistry Department (United States); Tonelli, Marco; Lee, Woonghee; Westler, William M.; Cornilescu, Gabriel [University of Wisconsin-Madison, Biochemistry Department, National Magnetic Resonance Facility at Madison (United States); Ulrich, Eldon L. [University of Wisconsin-Madison, BioMagResBank, Biochemistry Department (United States); Markley, John L., E-mail: markley@nmrfam.wisc.edu, E-mail: jmarkley@wisc.edu [University of Wisconsin-Madison, Biochemistry Department, National Magnetic Resonance Facility at Madison (United States)

    2016-01-15

    Data validation plays an important role in ensuring the reliability and reproducibility of studies. NMR investigations of the functional properties, dynamics, chemical kinetics, and structures of proteins depend critically on the correctness of chemical shift assignments. We present a novel probabilistic method named ARECA for validating chemical shift assignments that relies on nuclear Overhauser effect (NOE) data. ARECA has been evaluated through its application to 26 case studies and has been shown to be complementary to, and usually more reliable than, approaches based on chemical shift databases. ARECA is available online at http://areca.nmrfam.wisc.edu/.

  5. Aging and a genetic KIBRA polymorphism interactively affect feedback- and observation-based probabilistic classification learning.

    Science.gov (United States)

    Schuck, Nicolas W; Petok, Jessica R; Meeter, Martijn; Schjeide, Brit-Maren M; Schröder, Julia; Bertram, Lars; Gluck, Mark A; Li, Shu-Chen

    2018-01-01

    Probabilistic category learning involves complex interactions between the hippocampus and striatum that may depend on whether acquisition occurs via feedback or observation. Little is known about how healthy aging affects these processes. We tested whether age-related behavioral differences in probabilistic category learning from feedback or observation depend on a genetic factor known to influence individual differences in hippocampal function, the KIBRA gene (single nucleotide polymorphism rs17070145). Results showed comparable age-related performance impairments in observational as well as feedback-based learning. Moreover, genetic analyses indicated an age-related interactive effect of KIBRA on learning: among older adults, the beneficial T-allele was positively associated with learning from feedback, but negatively with learning from observation. In younger adults, no effects of KIBRA were found. Our results add behavioral genetic evidence to emerging data showing age-related differences in how neural resources relate to memory functions, namely that hippocampal and striatal contributions to probabilistic category learning may vary with age. Our findings highlight the effects genetic factors can have on differential age-related decline of different memory functions.

  6. A probabilistic approach for validating protein NMR chemical shift assignments

    International Nuclear Information System (INIS)

    Wang, Bowei; Wang, Yunjun; Wishart, David S.

    2010-01-01

    It has been estimated that more than 20% of the proteins in the BMRB are improperly referenced and that about 1% of all chemical shift assignments are mis-assigned. These statistics also reflect the likelihood that any newly assigned protein will have shift assignment or shift referencing errors. The relatively high frequency of these errors continues to be a concern for the biomolecular NMR community. While several programs do exist to detect and/or correct chemical shift mis-referencing or chemical shift mis-assignments, most can only do one or the other. The one program (SHIFTCOR) that is capable of handling both chemical shift mis-referencing and mis-assignments requires the 3D structure coordinates of the target protein. Given that chemical shift mis-assignments and chemical shift re-referencing issues should ideally be addressed prior to 3D structure determination, there is a clear need to develop a structure-independent approach. Here, we present a new structure-independent protocol, which is based on using residue-specific and secondary structure-specific chemical shift distributions calculated over small (3-6 residue) fragments to identify mis-assigned resonances. The method is also able to identify and re-reference mis-referenced chemical shift assignments. Comparisons against existing re-referencing or mis-assignment detection programs show that the method is as good or superior to existing approaches. The protocol described here has been implemented into a freely available Java program called 'Probabilistic Approach for protein Nmr Assignment Validation (PANAV)' and as a web server (http://redpoll.pharmacy.ualberta.ca/PANAV), which can be used to validate and/or correct as well as re-reference assigned protein chemical shifts.
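
    A minimal sketch of the two operations described above, written in Python for brevity (PANAV itself is a Java program) and using simplified, assumed shift statistics: estimate a global re-referencing offset as the median deviation from expected shifts, then flag residues whose corrected deviation is an outlier.

      # Sketch: median-offset re-referencing plus z-score outlier flagging.
      import statistics

      EXPECTED_CA = {"A": (53.1, 1.9), "G": (45.4, 1.3), "L": (55.6, 2.1)}  # assumed ppm stats

      def validate_ca(shifts):  # shifts: list of (residue_type, observed CA shift)
          devs = [obs - EXPECTED_CA[r][0] for r, obs in shifts]
          offset = statistics.median(devs)          # systematic mis-referencing estimate
          flagged = []
          for (r, obs), dev in zip(shifts, devs):
              z = (dev - offset) / EXPECTED_CA[r][1]
              if abs(z) > 3.0:                      # candidate mis-assignment
                  flagged.append((r, obs, round(z, 2)))
          return offset, flagged

      offset, flagged = validate_ca([("A", 54.8), ("G", 47.2), ("L", 65.0), ("A", 54.6)])
      print(f"re-referencing offset ~ {offset:+.2f} ppm; outliers: {flagged}")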

  7. A modified probabilistic genetic algorithm for the solution of complex constrained optimization problems

    OpenAIRE

    Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.

    2009-01-01

    A new algorithm for the solution of complex constrained optimization problems, based on a probabilistic genetic algorithm with optimal solution prediction, is proposed. Results of an efficiency investigation, comparing the method with a standard genetic algorithm, are presented.
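
    Since the abstract is terse, here is a hedged sketch of one way a probabilistic genetic algorithm can handle constraints: a PBIL-style estimation-of-distribution loop with a penalty term. The fitness function, constraint, and parameters are invented, and this is not the authors' specific algorithm (their optimal-solution prediction step is omitted).

      # PBIL-style probabilistic GA with a constraint penalty (toy problem).
      import random

      def fitness(x):                 # maximize the number of 1-bits ...
          return sum(x)

      def feasible(x):                # ... subject to an example constraint
          return sum(x[:4]) <= 2

      def pbil(n=16, pop=50, gens=100, lr=0.1, penalty=10.0):
          p = [0.5] * n                               # per-bit sampling probabilities
          for _ in range(gens):
              samples = [[int(random.random() < pi) for pi in p] for _ in range(pop)]
              scored = [(fitness(s) - (0 if feasible(s) else penalty), s) for s in samples]
              _, best = max(scored)
              p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]  # shift toward best
          return best

      random.seed(1)
      print(pbil())   # converges toward 1s outside the constrained prefix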

  8. Probabilistic Identification of Spin Systems and their Assignments including Coil-Helix Inference as Output (PISTACHIO)

    Energy Technology Data Exchange (ETDEWEB)

    Eghbalnia, Hamid R., E-mail: eghbalni@nmrfam.wisc.edu; Bahrami, Arash; Wang, Liya [National Magnetic Resonance Facility at Madison, Biochemistry Department (United States)]; Assadi, Amir [University of Wisconsin-Madison, Mathematics Department (United States)]; Markley, John L. [National Magnetic Resonance Facility at Madison, Biochemistry Department (United States)]

    2005-07-15

    We present a novel automated strategy (PISTACHIO) for the probabilistic assignment of backbone and sidechain chemical shifts in proteins. The algorithm uses peak lists derived from various NMR experiments as input and provides as output ranked lists of assignments for all signals recognized in the input data as constituting spin systems. PISTACHIO was evaluated by applying it to raw peak-picked data from 15 proteins ranging from 54 to 300 residues; the results were compared with those achieved by experts analyzing the same datasets by hand. As scored against the best available independent assignments for these proteins, the first-ranked PISTACHIO assignments were 80-100% correct for backbone signals and 75-95% correct for sidechain signals. The independent assignments benefited, in a number of cases, from structural data (e.g. from NOESY spectra) that were unavailable to PISTACHIO. Any number of datasets in any combination can serve as input. Thus PISTACHIO can be used as datasets are collected to ascertain the current extent of secure assignments, to identify residues with low assignment probability, and to suggest the types of additional data needed to remove ambiguities. The current implementation of PISTACHIO, which is available from a server on the Internet, supports input data from 15 standard double- and triple-resonance experiments. The software can readily accommodate additional types of experiments, including data from selectively labeled samples. The assignment probabilities can be carried forward and refined in subsequent steps leading to a structure. The performance of PISTACHIO showed no direct dependence on protein size, but correlated instead with data quality (completeness and signal-to-noise). PISTACHIO represents one component of a comprehensive probabilistic approach we are developing for the collection and analysis of protein NMR data.

  9. Probabilistic Identification of Spin Systems and their Assignments including Coil-Helix Inference as Output (PISTACHIO)

    International Nuclear Information System (INIS)

    Eghbalnia, Hamid R.; Bahrami, Arash; Wang, Liya; Assadi, Amir; Markley, John L.

    2005-01-01

    We present a novel automated strategy (PISTACHIO) for the probabilistic assignment of backbone and sidechain chemical shifts in proteins. The algorithm uses peak lists derived from various NMR experiments as input and provides as output ranked lists of assignments for all signals recognized in the input data as constituting spin systems. PISTACHIO was evaluated by applying it to raw peak-picked data from 15 proteins ranging from 54 to 300 residues; the results were compared with those achieved by experts analyzing the same datasets by hand. As scored against the best available independent assignments for these proteins, the first-ranked PISTACHIO assignments were 80-100% correct for backbone signals and 75-95% correct for sidechain signals. The independent assignments benefited, in a number of cases, from structural data (e.g. from NOESY spectra) that were unavailable to PISTACHIO. Any number of datasets in any combination can serve as input. Thus PISTACHIO can be used as datasets are collected to ascertain the current extent of secure assignments, to identify residues with low assignment probability, and to suggest the types of additional data needed to remove ambiguities. The current implementation of PISTACHIO, which is available from a server on the Internet, supports input data from 15 standard double- and triple-resonance experiments. The software can readily accommodate additional types of experiments, including data from selectively labeled samples. The assignment probabilities can be carried forward and refined in subsequent steps leading to a structure. The performance of PISTACHIO showed no direct dependence on protein size, but correlated instead with data quality (completeness and signal-to-noise). PISTACHIO represents one component of a comprehensive probabilistic approach we are developing for the collection and analysis of protein NMR data.

  10. Probabilistic Bandwidth Assignment in Wireless Sensor Networks

    OpenAIRE

    Khan, Dawood; Nefzi, Bilel; Santinelli, Luca; Song, Ye-Qiong

    2012-01-01

    With this paper we offer insight into designing and analyzing wireless sensor networks in a versatile manner. Our framework applies probabilistic and component-based design principles to wireless sensor network modeling and, consequently, analysis, while maintaining flexibility and accuracy. In particular, we address the problem of allocating and reconfiguring the available bandwidth. The framework has been successfully implemented in IEEE 802.15.4 using an Admissi...

  11. Dynamic traffic assignment: genetic algorithms approach

    Science.gov (United States)

    1997-01-01

    Real-time route guidance is a promising approach to alleviating congestion on the nation's highways. A dynamic traffic assignment model is central to the development of guidance strategies. The artificial intelligence technique of genetic algorithm...

  12. Applications of random forest feature selection for fine-scale genetic population assignment.

    Science.gov (United States)

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
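
    The marker-selection loop can be sketched with scikit-learn (plain random forest only; the regularized and guided variants and the FST baseline are omitted, and the genotypes below are random stand-ins for real SNP data):

      # Sketch: rank SNPs by random-forest importance, then check panel accuracy.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.integers(0, 3, size=(200, 1000)).astype(float)   # toy 0/1/2 genotypes
      y = rng.integers(0, 4, size=200)                         # toy population labels

      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
      ranked = np.argsort(rf.feature_importances_)[::-1]       # SNPs by importance

      for panel_size in (50, 200, 700):
          panel = ranked[:panel_size]
          acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                                X[:, panel], y, cv=5).mean()   # self-assignment proxy
          print(panel_size, round(acc, 3))
      # note: a real study would rank markers inside the CV loop to avoid selection bias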

  13. Genetics of traffic assignment models for strategic transport planning

    NARCIS (Netherlands)

    Bliemer, M.C.J.; Raadsen, M.P.H.; Brederode, L.J.N.; Bell, M.G.H.; Wismans, Luc Johannes Josephus; Smith, M.J.

    2016-01-01

    This paper presents a review and classification of traffic assignment models for strategic transport planning purposes by using concepts analogous to genetics in biology. Traffic assignment models share the same theoretical framework (DNA), but differ in capability (genes). We argue that all traffic

  14. Affective and cognitive factors influencing sensitivity to probabilistic information.

    Science.gov (United States)

    Tyszka, Tadeusz; Sawicki, Przemyslaw

    2011-11-01

    In Study 1, different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (within-subject factor). After the presentation of the probability level, participants were requested to indicate the acceptable level of pain they would tolerate to avoid the disease (in their unborn child), their subjective evaluation of the disease risk, and their subjective evaluation of being worried by this risk. The results of Study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that for the emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing: the rational deliberative versus the affective experiential and the principle of stimulus-response compatibility.

  15. Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.

    Science.gov (United States)

    Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin

    2007-07-01

    Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been successfully used in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principal approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose a supplementary approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the areal-specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.
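
    A small sketch of the proposed distribution-based labelling, under an assumed data layout (a flattened probabilistic map): the probability observed at the activation peak is ranked against the area's whole-brain distribution of probabilities.

      # Sketch: percentile of the peak's probability within the area's map.
      import numpy as np

      def distribution_label(prob_map, peak_index):
          # prob_map: an area's probabilities over all brain voxels, flattened
          null = prob_map[prob_map > 0]                 # the area's full probability map
          p_peak = prob_map[peak_index]
          return p_peak, 100.0 * np.mean(null <= p_peak)

      prob_map = np.random.default_rng(1).random(10000) * (np.arange(10000) % 7 == 0)
      p, pct = distribution_label(prob_map, peak_index=7)
      print(f"probability {p:.2f} sits at the {pct:.1f}th percentile of the area map")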

  16. Genetic programming for evolving due-date assignment models in job shop environments.

    Science.gov (United States)

    Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen

    2014-01-01

    Due-date assignment plays an important role in scheduling systems and strongly influences the delivery performance of job shops. Because of the stochastic and dynamic nature of job shops, the development of general due-date assignment models (DDAMs) is complicated. In this study, two genetic programming (GP) methods are proposed to evolve DDAMs for job shop environments. The experimental results show that the evolved DDAMs can make more accurate estimates than other existing dynamic DDAMs with promising reusability. In addition, the evolved operation-based DDAMs show better performance than the evolved DDAMs employing aggregate information of jobs and machines.

  17. A logic for inductive probabilistic reasoning

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2005-01-01

    Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from "70% of As are Bs" and "a is an A" infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey's rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e., by inductive probabilistic reasoning. In this paper a formal framework
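
    The two inference patterns named here are compact enough to state numerically; the sketch below works through direct inference and Jeffrey's rule with toy numbers.

      # Direct inference: "70% of As are Bs" + "a is an A"  =>  P(B(a)) = 0.7.
      p_B_given_A = 0.70
      print("direct inference:", p_B_given_A)

      # Jeffrey's rule: revise P(B) when the probability of the partition {A, not-A}
      # shifts to (q, 1-q); the statistical conditionals are assumed known.
      def jeffrey(p_B_given_A, p_B_given_notA, q):
          return p_B_given_A * q + p_B_given_notA * (1 - q)

      print("Jeffrey's rule:", jeffrey(0.70, 0.20, q=0.6))  # 0.7*0.6 + 0.2*0.4 = 0.50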

  18. Wildlife forensic science: A review of genetic geographic origin assignment.

    Science.gov (United States)

    Ogden, Rob; Linacre, Adrian

    2015-09-01

    Wildlife forensic science has become a key means of enforcing legislation surrounding the illegal trade in protected and endangered species. A relatively new dimension to this area of forensic science is to determine the geographic origin of a seized sample. This review focuses on DNA testing, which relies on assignment of an unknown sample to its genetic population of origin. Key examples of this are the trade in timber, fish and ivory, and these are used only to illustrate the large number of species for which this type of testing is potentially available. The role of mitochondrial and nuclear DNA markers is discussed, alongside a comparison of neutral markers with those exhibiting signatures of selection, which potentially offer much higher levels of assignment power to address specific questions. A review of assignment tests is presented along with detailed methods for evaluating error rates and considerations for marker selection. The availability and quality of reference data are of paramount importance to support assignment applications and ensure reliability of any conclusions drawn. The genetic methods discussed have been developed initially as investigative tools, but comment is made regarding their use in courts. The potential to complement DNA markers with elemental assays for greater assignment power is considered, and finally recommendations are made for the future of this type of testing.
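
    As a hedged illustration of the kind of assignment test reviewed here, the sketch below implements a classical allele-frequency likelihood assignment (in the spirit of Paetkau-style tests, assuming Hardy-Weinberg proportions); the reference frequencies and genotype are invented.

      # Sketch: log-likelihood of a multilocus genotype in each reference population.
      import math

      FREQS = {"pop1": [{"A": 0.8, "a": 0.2}, {"B": 0.6, "b": 0.4}],
               "pop2": [{"A": 0.3, "a": 0.7}, {"B": 0.1, "b": 0.9}]}

      def log_likelihood(genotype, freqs, eps=1e-4):
          # genotype: one (allele1, allele2) pair per locus; eps guards against
          # alleles unseen in the reference sample
          ll = 0.0
          for (a1, a2), locus in zip(genotype, freqs):
              p, q = locus.get(a1, eps), locus.get(a2, eps)
              ll += math.log(p * q * (2 if a1 != a2 else 1))   # HW genotype probability
          return ll

      sample = [("A", "a"), ("B", "B")]
      scores = {pop: log_likelihood(sample, f) for pop, f in FREQS.items()}
      print(max(scores, key=scores.get), scores)   # assign to the most likely origin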

  19. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components.

  20. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    Science.gov (United States)

    2011-01-01

    Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. Moreover, the oNCS system significantly improves the assignment of nurses
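
    A toy sketch of the probabilistic priority idea follows; the real oNCS derives priorities from an ontology, rules, and hospital data, so the priority classes and numbers here are purely illustrative: each risk factor contributes a distribution over priority classes, and a call's priority distribution is taken as their normalized product.

      # Toy probabilistic priority assignment from patient risk factors.
      PRIORITIES = ("urgent", "normal", "low")
      RISK_TABLES = {"heart_patient": (0.6, 0.3, 0.1),   # invented distributions
                     "fall_risk":     (0.4, 0.4, 0.2),
                     "none":          (0.1, 0.4, 0.5)}

      def call_priority(risk_factors):
          dist = [1.0, 1.0, 1.0]
          for rf in risk_factors or ["none"]:
              dist = [d * p for d, p in zip(dist, RISK_TABLES[rf])]
          total = sum(dist)                     # normalized product of the tables
          return {c: round(p / total, 3) for c, p in zip(PRIORITIES, dist)}

      print(call_priority(["heart_patient", "fall_risk"]))  # skewed toward "urgent"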

  21. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    Directory of Open Access Journals (Sweden)

    Verhoeve, Piet

    2011-02-01

    Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. Moreover, the oNCS system significantly improves

  22. Computational Aspects of Assigning Agents to a Line

    DEFF Research Database (Denmark)

    Aziz, Haris; Hougaard, Jens Leth; Moreno-Ternero, Juan D.

    2017-01-01

    -egalitarian assignments. The approach relies on an algorithm which is shown to be faster than general purpose algorithms for the assignment problem. We also extend the approach to probabilistic assignments and explore the computational features of existing, as well as new, methods for this setting....

  23. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    Science.gov (United States)

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  24. Novel probabilistic models of spatial genetic ancestry with applications to stratification correction in genome-wide association studies.

    Science.gov (United States)

    Bhaskar, Anand; Javanmard, Adel; Courtade, Thomas A; Tse, David

    2017-03-15

    Genetic variation in human populations is influenced by geographic ancestry due to spatial locality in historical mating and migration patterns. Spatial population structure in genetic datasets has been traditionally analyzed using either model-free algorithms, such as principal components analysis (PCA) and multidimensional scaling, or using explicit spatial probabilistic models of allele frequency evolution. We develop a general probabilistic model and an associated inference algorithm that unify the model-based and data-driven approaches to visualizing and inferring population structure. Our spatial inference algorithm can also be effectively applied to the problem of population stratification in genome-wide association studies (GWAS), where hidden population structure can create fictitious associations when population ancestry is correlated with both the genotype and the trait. Our algorithm Geographic Ancestry Positioning (GAP) relates local genetic distances between samples to their spatial distances, and can be used for visually discerning population structure as well as accurately inferring the spatial origin of individuals on a two-dimensional continuum. On both simulated and several real datasets from diverse human populations, GAP exhibits substantially lower error in reconstructing spatial ancestry coordinates compared to PCA. We also develop an association test that uses the ancestry coordinates inferred by GAP to accurately account for ancestry-induced correlations in GWAS. Based on simulations and analysis of a dataset of 10 metabolic traits measured in a Northern Finland cohort, which is known to exhibit significant population structure, we find that our method has superior power to current approaches. Our software is available at https://github.com/anand-bhaskar/gap. Contact: abhaskar@stanford.edu or ajavanma@usc.edu. Supplementary data are available at Bioinformatics online.

  25. On revision of partially specified convex probabilistic belief bases

    CSIR Research Space (South Africa)

    Rens, G

    2016-08-01

    We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base...

  26. Probabilistic dual heuristic programming-based adaptive critic

    Science.gov (United States)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  27. Bayesian assignment of gene ontology terms to gene expression experiments

    Science.gov (United States)

    Sykacek, P.

    2012-01-01

    Motivation: Gene expression assays allow for genome scale analyses of molecular biological mechanisms. State-of-the-art data analysis provides lists of involved genes, either by calculating significance levels of mRNA abundance or by Bayesian assessments of gene activity. A common problem of such approaches is the difficulty of interpreting the biological implication of the resulting gene lists. This led to an increased interest in methods for inferring high-level biological information. A common approach for representing high-level information is by inferring gene ontology (GO) terms which may be attributed to the expression data experiment. Results: This article proposes a probabilistic model for GO term inference. Modelling assumes that gene annotations to GO terms are available and gene involvement in an experiment is represented by posterior probabilities over gene-specific indicator variables. Such probability measures result from many Bayesian approaches for expression data analysis. The proposed model combines these indicator probabilities in a probabilistic fashion and provides a probabilistic GO term assignment as a result. Experiments on synthetic and microarray data suggest that advantages of the proposed probabilistic GO term inference over statistical test-based approaches are in particular evident for sparsely annotated GO terms and in situations of large uncertainty about gene activity. Provided that appropriate annotations exist, the proposed approach is easily applied to inferring other high-level assignments like pathways. Availability: Source code under GPL license is available from the author. Contact: peter.sykacek@boku.ac.at PMID:22962488
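
    The abstract says the gene-level indicator probabilities are combined "in a probabilistic fashion"; one simple way to sketch such a combination is a noisy-OR over the genes annotated to a term. The noisy-OR form is assumed here for illustration and may differ from the paper's actual model.

      # Assumed noisy-OR combination of gene-level indicator probabilities.
      def go_term_probability(term_genes, gene_posteriors, leak=0.01):
          # P(term) = 1 - (1 - leak) * prod over annotated genes of (1 - P(gene active))
          p_none = 1.0 - leak
          for g in term_genes:
              p_none *= 1.0 - gene_posteriors.get(g, 0.0)
          return 1.0 - p_none

      posteriors = {"geneA": 0.9, "geneB": 0.2, "geneC": 0.05}
      print(go_term_probability({"geneA", "geneB"}, posteriors))   # ~0.921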

  28. Bayesian assignment of gene ontology terms to gene expression experiments.

    Science.gov (United States)

    Sykacek, P

    2012-09-15

    Gene expression assays allow for genome scale analyses of molecular biological mechanisms. State-of-the-art data analysis provides lists of involved genes, either by calculating significance levels of mRNA abundance or by Bayesian assessments of gene activity. A common problem of such approaches is the difficulty of interpreting the biological implication of the resulting gene lists. This led to an increased interest in methods for inferring high-level biological information. A common approach for representing high-level information is by inferring gene ontology (GO) terms which may be attributed to the expression data experiment. This article proposes a probabilistic model for GO term inference. Modelling assumes that gene annotations to GO terms are available and gene involvement in an experiment is represented by posterior probabilities over gene-specific indicator variables. Such probability measures result from many Bayesian approaches for expression data analysis. The proposed model combines these indicator probabilities in a probabilistic fashion and provides a probabilistic GO term assignment as a result. Experiments on synthetic and microarray data suggest that advantages of the proposed probabilistic GO term inference over statistical test-based approaches are in particular evident for sparsely annotated GO terms and in situations of large uncertainty about gene activity. Provided that appropriate annotations exist, the proposed approach is easily applied to inferring other high-level assignments like pathways. Source code under GPL license is available from the author. Contact: peter.sykacek@boku.ac.at.

  29. Assignment of the 5HT7 receptor gene (HTR7) to chromosome 10q and exclusion of genetic linkage with Tourette syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Gelernter, J.; Rao, P.A.; Pauls, D.L. [Yale Univ. School of Medicine, West Haven, CT (United States)] [and others]

    1995-03-20

    A novel serotonin receptor designated 5HT7 (genetic locus HTR7) was cloned in 1993. This receptor has interesting properties related to ligand affinity and CNS distribution that render HTR7 a very interesting candidate gene for neuropsychiatric disorders. We mapped this gene, first by physical methods and then by genetic linkage. First, we made a tentative assignment to chromosome 10, based on hybridization of an HTR7 probe to a Southern blot of DNA from somatic cell hybrids. We then identified a genetic polymorphism at the HTR7 locus. We identified one extended pedigree where the polymorphism segregated. Using the LIPED computer program for pairwise linkage analysis, we confirmed the assignment of the gene to chromosome 10, specifically 10q21-q24, based on a lod score of 5.37 at 0% recombination between HTR7 and D10S20 (a chromosome 10 reference marker). Finally, we excluded genetic linkage between this locus and Tourette syndrome under a reasonable set of assumptions. 15 refs., 1 fig., 1 tab.
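
    For readers unfamiliar with the reported statistic, a lod score compares the likelihood of the observed meioses at a recombination fraction theta against free recombination (theta = 0.5). The sketch below reproduces a value comparable to the reported 5.37 using illustrative counts, not the paper's pedigree data.

      # lod(theta) = log10 [ L(data | theta) / L(data | theta = 0.5) ]
      import math

      def lod(recombinants, nonrecombinants, theta):
          if recombinants and theta == 0.0:
              return float("-inf")        # any recombinant excludes theta = 0
          l_theta = (theta ** recombinants) * ((1 - theta) ** nonrecombinants)
          l_null = 0.5 ** (recombinants + nonrecombinants)
          return math.log10(l_theta / l_null)

      # 18 fully informative meioses with no recombinants, evaluated at theta = 0:
      print(lod(0, 18, 0.0))   # = 18*log10(2) ~ 5.42, comparable to the reported 5.37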

  30. A Geometric Presentation of Probabilistic Satisfiability

    OpenAIRE

    Morales-Luna, Guillermo

    2010-01-01

    By considering probability distributions over the set of assignments, the expected truth-value assignment to propositional variables is extended through linear operators, and the expected truth values of the clauses of any given conjunctive form are also extended through linear maps. The probabilistic satisfiability problems are discussed in terms of the introduced linear extensions. The case of multiple truth values is also discussed.
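
    Concretely, the linear extension can be sketched as follows: under a probability distribution over the 2^n truth assignments, the expected truth value of a clause is a linear map in the distribution (toy two-variable example below).

      # Expected truth value of a clause under a distribution over assignments.
      from itertools import product

      def expected_truth(clause, dist):
          # clause: literals as (var_index, wanted_bool); dist maps each truth
          # assignment (a tuple of bools) to its probability -- linear in dist
          return sum(p * any(a[v] == b for v, b in clause) for a, p in dist.items())

      assignments = list(product((False, True), repeat=2))
      dist = {a: 0.25 for a in assignments}        # uniform over the 4 assignments
      clause = [(0, True), (1, False)]             # x0 OR (NOT x1)
      print(expected_truth(clause, dist))          # 3 of 4 assignments satisfy: 0.75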

  31. Probabilistic reasoning in data analysis.

    Science.gov (United States)

    Sirovich, Lawrence

    2011-09-20

    This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
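
    A short numerical illustration of the lecture's central fact, using only the standard library: for memoryless arrivals at rate lam, waiting times are exponential and the count in a window of length t is Poisson(lam*t).

      # Memoryless arrivals: exponential waiting times vs. Poisson window counts.
      import math, random

      random.seed(0)
      lam, t, trials = 2.0, 3.0, 20000
      counts = []
      for _ in range(trials):
          elapsed, n = 0.0, 0
          while True:
              elapsed += random.expovariate(lam)    # exponential waiting time
              if elapsed > t:
                  break
              n += 1
          counts.append(n)

      print(sum(counts) / trials, lam * t)          # both ~6: Poisson mean = lam*t
      print(counts.count(6) / trials,               # empirical P(N = 6) ...
            math.exp(-lam * t) * (lam * t) ** 6 / math.factorial(6))  # ... vs pmf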

  32. A Probabilistic Short-Term Water Demand Forecasting Model Based on the Markov Chain

    Directory of Open Access Journals (Sweden)

    Francesca Gagliardi

    2017-07-01

    This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.
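
    A minimal sketch of the homogeneous-chain variant (with a made-up demand series): discretize demand into pre-assigned intervals, estimate the transition matrix by counting, and read off next-step interval probabilities.

      # Homogeneous Markov chain over binned demand (made-up series).
      import numpy as np

      rng = np.random.default_rng(0)
      demand = 50 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 3, 2000)

      bins = np.quantile(demand, [0.25, 0.5, 0.75])       # 4 pre-assigned intervals
      states = np.digitize(demand, bins)                  # interval index 0..3

      T = np.zeros((4, 4))
      for s, s_next in zip(states[:-1], states[1:]):      # transition counts
          T[s, s_next] += 1
      T = T / np.maximum(T.sum(axis=1, keepdims=True), 1) # row-normalize safely

      print("P(next-hour demand interval):", np.round(T[states[-1]], 3))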

  33. A Probabilistic Design Methodology for a Turboshaft Engine Overall Performance Analysis

    Directory of Open Access Journals (Sweden)

    Min Chen

    2014-05-01

    In reality, the cumulative effect of the many uncertainties in engine component performance may stack up to affect the engine overall performance. This paper aims to quantify the impact of uncertainty in engine component performance on the overall performance of a turboshaft engine based on a Monte-Carlo probabilistic design method. A novel probabilistic model of a turboshaft engine, consisting of a Monte-Carlo simulation generator, a traditional nonlinear turboshaft engine model, and a probability statistical model, was implemented to predict this impact. One of the fundamental results shown herein is that uncertainty in component performance has a significant impact on the engine overall performance prediction. This paper also shows that, taking into consideration the uncertainties in component performance, the turbine entry temperature and overall pressure ratio based on the probabilistic design method should increase by 0.76% and 8.33%, respectively, compared with those of the deterministic design method. The comparison shows that the probabilistic approach provides a more credible and reliable way to assign the design space for a target engine overall performance.
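
    The Monte-Carlo step can be sketched as below; a toy linear surrogate stands in for the nonlinear turboshaft cycle model, and the efficiency distributions and sensitivities are invented.

      # Monte-Carlo propagation of component uncertainty (toy surrogate model).
      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      eta_comp = rng.normal(0.85, 0.01, n)     # compressor efficiency, assumed spread
      eta_turb = rng.normal(0.90, 0.01, n)     # turbine efficiency, assumed spread

      # invented linear surrogate for shaft power; a real study would call the
      # nonlinear engine model here
      power = 1000.0 * (1 + 2.0 * (eta_comp - 0.85) + 1.5 * (eta_turb - 0.90))

      p5, p95 = np.percentile(power, [5, 95])
      print(f"mean {power.mean():.1f} kW, 90% interval [{p5:.1f}, {p95:.1f}] kW")
      # the design point is then chosen so the target is met with, say, 95% confidence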

  34. Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment

    Science.gov (United States)

    Karimzadehgan, Maryam; Zhai, ChengXiang

    2011-01-01

    Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
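
    A small sketch of such an ILP, written with the PuLP modeling library (toy scores; a single aspect per reviewer-paper pair rather than the paper's full multi-aspect matching):

      # Toy committee review assignment as an ILP, using the PuLP library.
      from pulp import LpProblem, LpMaximize, LpVariable, lpSum

      reviewers, papers = ["r1", "r2", "r3"], ["p1", "p2"]
      score = {("r1", "p1"): .9, ("r1", "p2"): .1, ("r2", "p1"): .4,
               ("r2", "p2"): .8, ("r3", "p1"): .5, ("r3", "p2"): .6}
      quota, per_paper = 2, 2       # max papers per reviewer, reviewers per paper

      prob = LpProblem("review_assignment", LpMaximize)
      x = {(r, p): LpVariable(f"x_{r}_{p}", cat="Binary") for r in reviewers for p in papers}
      prob += lpSum(score[r, p] * x[r, p] for r in reviewers for p in papers)
      for r in reviewers:
          prob += lpSum(x[r, p] for p in papers) <= quota          # review quota
      for p in papers:
          prob += lpSum(x[r, p] for r in reviewers) == per_paper   # coverage
      prob.solve()
      print([(r, p) for (r, p) in x if x[r, p].value() == 1])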

  35. Strategic Team AI Path Plans: Probabilistic Pathfinding

    Directory of Open Access Journals (Sweden)

    Tng C. H. John

    2008-01-01

    This paper proposes a novel method to generate strategic team AI pathfinding plans for computer games and simulations using probabilistic pathfinding. This method is inspired by genetic algorithms (Russell and Norvig, 2002), in that a fitness function is used to test the quality of the path plans. The method generates high-quality path plans by eliminating the low-quality ones. The path plans are generated by probabilistic pathfinding, and the elimination is done by a fitness test of the path plans. This path plan generation method has the ability to generate variation or different high-quality paths, which is desired for games to increase replay values. This work is an extension of our earlier work on team AI: probabilistic pathfinding (John et al., 2006). We explore ways to combine probabilistic pathfinding and genetic algorithm to create a new method to generate strategic team AI pathfinding plans.

  36. Non-probabilistic defect assessment for structures with cracks based on interval model

    International Nuclear Information System (INIS)

    Dai, Qiao; Zhou, Changyu; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-01-01

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables
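
    A compact sketch of the interval construction follows; an R6 Option-1-type curve is assumed as the FAC (the paper's pipe mechanics are not reproduced), and the L_r and K_r intervals are invented.

      # Interval assessment point against an assumed R6 Option-1-type FAC.
      import math

      def fac(lr):   # assumed failure assessment curve K_r = f(L_r)
          return (1 - 0.14 * lr**2) * (0.3 + 0.7 * math.exp(-0.65 * lr**6))

      lr_int = (0.55, 0.70)   # L_r interval from uncertain load / yield stress
      kr_int = (0.40, 0.55)   # K_r interval from uncertain crack size / toughness

      # the assessment rectangle is the product of the two intervals; since the
      # curve decreases monotonically, the two extreme corners decide the verdict
      worst_safe = kr_int[1] <= fac(lr_int[1])   # top-right corner under the curve
      best_safe = kr_int[0] <= fac(lr_int[0])    # bottom-left corner under the curve
      print("safe" if worst_safe else
            "unsafe" if not best_safe else
            "interval-valued: compute the non-probabilistic reliability degree")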

  37. Non-probabilistic defect assessment for structures with cracks based on interval model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qiao; Zhou, Changyu, E-mail: changyu_zhou@163.com; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-09-15

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables.

  38. CAD Parts-Based Assembly Modeling by Probabilistic Reasoning

    KAUST Repository

    Zhang, Kai-Ke

    2016-04-11

    Nowadays, an increasing number of parts and sub-assemblies are publicly available, and these can be used directly for product development instead of being created from scratch. In this paper, we propose an interactive design framework for efficient and smart assembly modeling, in order to improve design efficiency. Our approach is based on probabilistic reasoning. Given a collection of industrial assemblies, we learn a probabilistic graphical model from the relationships between the parts of the assemblies. Then, in the modeling stage, this probabilistic model is used to suggest the parts most likely to be compatible with the current assembly. Finally, the parts are assembled under certain geometric constraints. We demonstrate the effectiveness of our framework through a variety of assembly models produced by our prototype system.

  39. CAD Parts-Based Assembly Modeling by Probabilistic Reasoning

    KAUST Repository

    Zhang, Kai-Ke; Hu, Kai-Mo; Yin, Li-Cheng; Yan, Dongming; Wang, Bin

    2016-01-01

    Nowadays, an increasing number of parts and sub-assemblies are publicly available, and these can be used directly for product development instead of being created from scratch. In this paper, we propose an interactive design framework for efficient and smart assembly modeling, in order to improve design efficiency. Our approach is based on probabilistic reasoning. Given a collection of industrial assemblies, we learn a probabilistic graphical model from the relationships between the parts of the assemblies. Then, in the modeling stage, this probabilistic model is used to suggest the parts most likely to be compatible with the current assembly. Finally, the parts are assembled under certain geometric constraints. We demonstrate the effectiveness of our framework through a variety of assembly models produced by our prototype system.

  40. Controversies of Sex Re-assignment in Genetic Males with Congenital Inadequacy of the Penis.

    Science.gov (United States)

    Raveenthiran, Venkatachalam

    2017-09-01

    Sex assignment in 46XY genetic male children with congenital inadequacy of the penis (CIP) is controversial. Traditionally, children with penile length less than 2 cm at birth are considered unsuitable to be raised as males. They are typically re-assigned to female-sex and feminizing genitoplasty is usually done in infancy. However, the concept of cerebral androgen imprinting has caused paradigm shift in the philosophy of sex re-assignment. Masculinization of the brain, rather than length of the penis, is the modern criterion of sex re-assignment in CIP. This review summarizes the current understanding of the complex issue. In 46XY children with CIP, male-sex assignment appears appropriate in non-hormonal conditions such as idiopathic micropenis, aphallia and exstrophy. Female-sex re-assignment appears acceptable in complete androgen insensitivity (CAIS), while partial androgen insensitivity syndrome (PAIS) patients are highly dissatisfied with the assignment of either sex. Children with 5-alpha reductase deficiency are likely to have spontaneous penile lengthening at puberty. Hence, they are better raised as males. Although female assignment is common in pure gonadal dysgenesis, long-term results are not known to justify the decision.

  3. Resonance assignment of the NMR spectra of disordered proteins using a multi-objective non-dominated sorting genetic algorithm.

    Science.gov (United States)

    Yang, Yu; Fritzsching, Keith J; Hong, Mei

    2013-11-01

    A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra ("good connections"), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra ("bad connections"), and minimizing the number of assigned peaks that have no matching peaks in the other spectra ("edges"). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct assignment for a
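
    The core of NSGA-II relevant here is non-dominated sorting: candidate assignments are compared on all objectives at once, and those not dominated by any other form the first Pareto front. A compact sketch follows; the four-objective score tuples are invented stand-ins for (good connections, used peaks, negated bad connections, negated edges).

```python
# Compact sketch of the non-dominated sorting at the heart of NSGA-II,
# applied to hypothetical assignment candidates; all values illustrative.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one; objectives are oriented so larger is better."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Return indices of candidates not dominated by any other candidate."""
    return [
        i for i, a in enumerate(scores)
        if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)
    ]

# (good connections, used peaks, -bad connections, -edges) per candidate
candidates = [(30, 90, -2, -5), (28, 95, -1, -4), (25, 80, -6, -7), (30, 85, -2, -6)]
print(pareto_front(candidates))   # -> [0, 1], the non-dominated assignments
```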

  4. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  5. Accuracy of administratively-assigned ancestry for diverse populations in an electronic medical record-linked biobank.

    Directory of Open Access Journals (Sweden)

    Jacob B Hall

    Recently, the development of biobanks linked to electronic medical records has presented new opportunities for genetic and epidemiological research. Studies based on these resources, however, present unique challenges, including the accurate assignment of individual-level population ancestry. In this work we examine the accuracy of administratively-assigned race in diverse populations by comparing assigned races to genetically-defined ancestry estimates. Using 220 ancestry informative markers, we generated principal components for patients in our dataset, which were used to cluster patients into groups based on genetic ancestry. Consistent with other studies, we find a strong overall agreement (Kappa = 0.872) between genetic ancestry and assigned race, with higher rates of agreement for African-descent and European-descent assignments, and reduced agreement for Hispanic, East Asian-descent, and South Asian-descent assignments. These results suggest caution when selecting study samples of non-African and non-European backgrounds when administratively-assigned race from biobanks is used.
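
    The agreement statistic quoted above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch follows; the two label lists are invented stand-ins for administratively-assigned race and genetic-ancestry cluster labels.

```python
# Sketch of Cohen's kappa between administratively-assigned labels and
# genetically-derived ancestry clusters; the labels are made-up stand-ins.

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of the two marginal proportions per category.
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

admin = ["EUR", "EUR", "AFR", "AFR", "HIS", "EAS", "EUR", "AFR"]
genetic = ["EUR", "EUR", "AFR", "AFR", "EUR", "EAS", "EUR", "AFR"]
print(f"kappa = {cohens_kappa(admin, genetic):.3f}")
```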

  6. Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    The reliability of ad-hoc networks is gaining popularity in two areas: as a topic of academic interest and as a key performance parameter for defense systems employing this type of network. The ad-hoc network is dynamic and scalable, and these descriptions are what attract its users. However, these descriptions are also synonymous with undefined and unpredictable when considering the impacts to the reliability of the system. The configuration of an ad-hoc network changes continuously, and this fact implies that no single mathematical expression or graphical depiction can describe the system reliability-wise. Previous research has used mobility and stochastic models to address this challenge successfully. In this paper, the authors leverage the stochastic approach and build upon it a probabilistic solution discovery (PSD) algorithm to optimize the topology for a cluster-based mobile ad-hoc wireless network (MAWN). Specifically, the membership of nodes within the back-bone network or networks is assigned in such a way as to maximize reliability subject to a constraint on cost. The constraint may also be considered as a non-monetary cost, such as weight, volume, power, or the like. When a cost is assigned to each component, a maximum cost threshold is assigned to the network, and the method is run; the result is an optimized allocation of the radios enabling back-bone network(s) to provide the most reliable network possible without exceeding the allowable cost. The method is intended for use directly as part of the architectural design process of a cluster-based MAWN to efficiently determine an optimal or near-optimal design solution. It is capable of optimizing the topology based upon all-terminal reliability (ATR), all-operating terminal reliability (AoTR), or two-terminal reliability (2TR).
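
    One building block of such a method is evaluating a candidate topology's reliability under a cost cap. The sketch below estimates two-terminal reliability (2TR) for a toy four-link backbone by Monte Carlo simulation; the graph, link reliabilities, costs, and cap are invented, and the real PSD algorithm wraps many such evaluations in its probabilistic search.

```python
# Monte Carlo sketch of evaluating one candidate backbone topology by
# two-terminal reliability (2TR) under a cost cap; an illustrative fragment
# of the evaluation a PSD-style search would repeat, with hypothetical data.

import random

links = {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "b"): 0.8, ("b", "t"): 0.8}
link_cost = {edge: 10 for edge in links}
COST_CAP = 50

def connected(up_links, src="s", dst="t"):
    """Breadth-first check that dst is reachable over the surviving links."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for u, v in up_links:
            nxt = v if u == node else u if v == node else None
            if nxt and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def two_terminal_reliability(trials=20_000):
    assert sum(link_cost.values()) <= COST_CAP, "design exceeds cost cap"
    hits = sum(
        connected([e for e, p in links.items() if random.random() < p])
        for _ in range(trials)
    )
    return hits / trials

random.seed(0)
print(f"estimated 2TR: {two_terminal_reliability():.3f}")   # analytic: 0.9316
```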

  7. Radiation protection criteria for cases of probabilistic disruptive events

    International Nuclear Information System (INIS)

    Beninson, D.J.

    1985-01-01

    The limitation of individual risk for the case of probabilistic disruptive events is studied, when the radiation effects cease to be only stochastic; the proposed criterion is applied to the case of high-level waste repositories. The optimization of protection follows from differential cost-benefit analysis. More general procedures of decision theory that use probabilistically defined utility functions are considered for its calculation. These more general procedures can also be applied in cases where radiation exposures are only potential, to optimize the required level of safety features. It is shown that for disruptive events of low probability and large resulting consequences, the concept of 'expectation' of consequence cannot be used in decision making, but that the use of probabilistically based utility functions can conceptually assure a consistent approach in deciding the required level of safety. The use of utility functions of logarithmic form to assign weights to consequences involving different loss of life is explored.

  8. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    Science.gov (United States)

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred over task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  9. Relative risk of probabilistic category learning deficits in patients with schizophrenia and their siblings

    Science.gov (United States)

    Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A; Weinberger, Daniel R.

    2010-01-01

    Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502

  10. Probabilistic composition of preferences, theory and applications

    CERN Document Server

    Parracho Sant'Anna, Annibal

    2015-01-01

    Putting forward a unified presentation of the features and possible applications of probabilistic preferences composition, and serving as a methodology for decisions employing multiple criteria, this book maximizes reader insight into evaluation in probabilistic terms and the development of composition approaches that do not depend on assigning weights to the criteria. With key applications in important areas of management such as failure modes and effects analysis and productivity analysis – together with explanations about the application of the concepts involved – this book makes available numerical examples of probabilistic transformation development and probabilistic composition. Useful not only as a reference source for researchers, but also in teaching classes of graduate courses in Production Engineering and Management Science, the key themes of the book will be of special interest to researchers in the field of Operational Research.
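
    The weight-free flavor of the composition can be illustrated with a small sketch: each option's score on each criterion is treated as noisy, the probability of being the best on each criterion is estimated by sampling, and a composition is formed by multiplying those probabilities (a "joint maximization" reading). The options, scores, and Gaussian noise model are assumptions for demonstration, not the book's specific procedures.

```python
# Sketch of a weight-free probabilistic composition of preferences:
# scores are treated as noisy, each option gets a probability of being
# best per criterion, and the composition multiplies those probabilities.
# Data and noise model are illustrative assumptions.

import random

scores = {           # option -> score per criterion
    "A": [7.0, 5.0],
    "B": [6.0, 6.5],
    "C": [5.5, 7.5],
}
NOISE_SD = 1.0

def prob_best_per_criterion(trials=50_000):
    n_criteria = len(next(iter(scores.values())))
    wins = {opt: [0] * n_criteria for opt in scores}
    for _ in range(trials):
        for c in range(n_criteria):
            draws = {o: random.gauss(s[c], NOISE_SD) for o, s in scores.items()}
            wins[max(draws, key=draws.get)][c] += 1
    return {o: [w / trials for w in ws] for o, ws in wins.items()}

random.seed(1)
probs = prob_best_per_criterion()
# Weight-free "joint maximization" composition: product over criteria.
composite = {o: p[0] * p[1] for o, p in probs.items()}
print(max(composite, key=composite.get), composite)
```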

  12. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
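
    The refinement step lends itself to a small illustration: the cost being minimized is a sum of squared Mahalanobis distances between observed and predicted feature positions, so features with small covariance (high confidence) penalize residuals more strongly. The projections, observations, and covariances below are toy values, and the full pipeline (PnP initialization, pose parameterization, optimizer) is omitted.

```python
# Sketch of the error a Mahalanobis-based refinement minimizes, where each
# map feature carries its own uncertainty (covariance). Toy stand-ins only.

import numpy as np

def mahalanobis_cost(predicted, observed, covariances):
    """Sum of squared Mahalanobis distances over matched features."""
    cost = 0.0
    for p, o, cov in zip(predicted, observed, covariances):
        r = o - p
        cost += float(r @ np.linalg.inv(cov) @ r)
    return cost

observed = np.array([[100.0, 120.0], [210.0, 80.0]])    # pixels in query image
predicted = np.array([[103.0, 118.0], [205.0, 85.0]])   # projected map features
covs = [np.diag([4.0, 4.0]), np.diag([25.0, 25.0])]     # per-feature uncertainty

# The confident feature (small covariance) dominates the cost, which is
# the point of weighting residuals by Mahalanobis distance.
print(f"cost = {mahalanobis_cost(predicted, observed, covs):.2f}")
```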

  13. Comparing Categorical and Probabilistic Fingerprint Evidence.

    Science.gov (United States)

    Garrett, Brandon; Mitchell, Gregory; Scurich, Nicholas

    2018-04-23

    Fingerprint examiners traditionally express conclusions in categorical terms, opining that impressions do or do not originate from the same source. Recently, probabilistic conclusions have been proposed, with examiners estimating the probability of a match between recovered and known prints. This study presented a nationally representative sample of jury-eligible adults with a hypothetical robbery case in which an examiner opined on the likelihood that a defendant's fingerprints matched latent fingerprints in categorical or probabilistic terms. We studied model language developed by the U.S. Defense Forensic Science Center to summarize results of statistical analysis of the similarity between prints. Participant ratings of the likelihood the defendant left prints at the crime scene and committed the crime were similar when exposed to categorical and strong probabilistic match evidence. Participants reduced these likelihoods when exposed to the weaker probabilistic evidence, but did not otherwise discriminate among the prints assigned different match probabilities. © 2018 American Academy of Forensic Sciences.

  14. Rats bred for high alcohol drinking are more sensitive to delayed and probabilistic outcomes.

    Science.gov (United States)

    Wilhelm, C J; Mitchell, S H

    2008-10-01

    Alcoholics and heavy drinkers score higher on measures of impulsivity than nonalcoholics and light drinkers. This may be because of factors that predate drug exposure (e.g. genetics). This study examined the role of genetics by comparing impulsivity measures in ethanol-naive rats selectively bred based on their high [high alcohol drinking (HAD)] or low [low alcohol drinking (LAD)] consumption of ethanol. Replicates 1 and 2 of the HAD and LAD rats, developed by the University of Indiana Alcohol Research Center, completed two different discounting tasks. Delay discounting examines sensitivity to rewards that are delayed in time and is commonly used to assess 'choice' impulsivity. Probability discounting examines sensitivity to the uncertain delivery of rewards and has been used to assess risk taking and risk assessment. High alcohol drinking rats discounted delayed and probabilistic rewards more steeply than LAD rats. Discount rates associated with probabilistic and delayed rewards were weakly correlated, while bias was strongly correlated with discount rate in both delay and probability discounting. The results suggest that selective breeding for high alcohol consumption selects for animals that are more sensitive to delayed and probabilistic outcomes. Sensitivity to delayed or probabilistic outcomes may be predictive of future drinking in genetically predisposed individuals.
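
    As a worked illustration of the discounting measures mentioned above, the sketch below evaluates the standard hyperbolic discounting form V = A / (1 + kD) for two invented discount rates; a steeper k mimics the HAD-like pattern of devaluing delayed rewards faster. The k values and amounts are assumptions for demonstration, not data from the study (probability discounting is analogous, with delay replaced by the odds against reward).

```python
# Worked sketch of hyperbolic delay discounting, V = A / (1 + k*D);
# the k values are invented to contrast shallow vs steep discounting.

def discounted_value(amount, delay, k):
    """Subjective value of `amount` delivered after `delay` (hyperbolic)."""
    return amount / (1.0 + k * delay)

for label, k in [("LAD-like (shallow), k=0.05", 0.05),
                 ("HAD-like (steep),   k=0.25", 0.25)]:
    values = [discounted_value(100, d, k) for d in (0, 5, 10, 20)]
    print(label, [f"{v:.1f}" for v in values])
```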

  15. ActionMap: A web-based software that automates loci assignments to framework maps.

    Science.gov (United States)

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time-consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software tool that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms was designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  16. Probabilistic Networks

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Lauritzen, Steffen Lilholt

    2001-01-01

    This article describes the basic ideas and algorithms behind specification and inference in probabilistic networks based on directed acyclic graphs, undirected graphs, and chain graphs.

  17. An Individual-based Probabilistic Model for Fish Stock Simulation

    Directory of Open Access Journals (Sweden)

    Federico Buti

    2010-08-01

    We define an individual-based probabilistic model of sole (Solea solea) behaviour. The individual model is given in terms of an Extended Probabilistic Discrete Timed Automaton (EPDTA), a new formalism that is introduced in the paper and that is shown to be interpretable as a Markov decision process. A given EPDTA model can be probabilistically model-checked by a suitable translation into syntax accepted by existing model-checkers. In order to simulate the dynamics of a given population of soles in different environmental scenarios, an agent-based simulation environment is defined in which each agent implements the behaviour of the given EPDTA model. By varying the probabilities and the characteristic functions embedded in the EPDTA model it is possible to represent different scenarios and to tune the model itself by comparing the results of the simulations with real data about the sole stock in the North Adriatic Sea, available from the recent project SoleMon. The simulator is presented and made available for adaptation to other species.

  19. Learning Probabilistic Logic Models from Probabilistic Examples.

    Science.gov (United States)

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data were derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) - to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible-worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error, accompanied by improved insight from the learned results, compared with the PILP models learned from non-probabilistic examples.

  20. Entropy-based Probabilistic Fatigue Damage Prognosis and Algorithmic Performance Comparison

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, a maximum entropy-based general framework for probabilistic fatigue damage prognosis is investigated. The proposed methodology is based on an...

  2. CONSERVATION. Genetic assignment of large seizures of elephant ivory reveals Africa's major poaching hotspots.

    Science.gov (United States)

    Wasser, S K; Brown, L; Mailand, C; Mondol, S; Clark, W; Laurie, C; Weir, B S

    2015-07-03

    Poaching of elephants is now occurring at rates that threaten African populations with extinction. Identifying the number and location of Africa's major poaching hotspots may assist efforts to end poaching and facilitate recovery of elephant populations. We genetically assign origin to 28 large ivory seizures (≥0.5 metric tons) made between 1996 and 2014, also testing assignment accuracy. Results suggest that the major poaching hotspots in Africa may be currently concentrated in as few as two areas. Increasing law enforcement in these two hotspots could help curtail future elephant losses across Africa and disrupt this organized transnational crime. Copyright © 2015, American Association for the Advancement of Science.
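
    The assignment step described above can be illustrated with a minimal likelihood calculation: given reference allele frequencies for candidate source populations, a sample is assigned to the population under which its multilocus genotype is most probable. The populations, frequencies, and three biallelic loci below are invented stand-ins; the actual study used many microsatellite loci and a smoothed continuous assignment method.

```python
# Minimal sketch of likelihood-based genetic assignment: pick the reference
# population that maximizes the genotype likelihood. Frequencies and loci
# are invented; real analyses use many markers and model sampling error.

import math

# population -> per-locus frequency of allele "A" (biallelic loci, HWE assumed)
ref_freqs = {
    "forest": [0.80, 0.30, 0.60],
    "savanna": [0.20, 0.70, 0.40],
}

def genotype_loglik(pop, genotype):
    """genotype: count of allele 'A' (0, 1, or 2) at each locus."""
    ll = 0.0
    for p, g in zip(ref_freqs[pop], genotype):
        probs = {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) ** 2}
        ll += math.log(probs[g])
    return ll

sample = [2, 1, 2]   # seized-ivory genotype at the three loci
best = max(ref_freqs, key=lambda pop: genotype_loglik(pop, sample))
print({pop: round(genotype_loglik(pop, sample), 2) for pop in ref_freqs}, "->", best)
```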

  3. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    artificial genetic system) string feature or ... called the genotype whereas it is called a structure in artificial genetic ... assigned a fitness value based on the cost function. Better ... way it has produced complex, intelligent living organisms capable of ...

  4. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    Science.gov (United States)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    Formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, such as probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are required to develop probabilistic thinking as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, which consists of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of a mathematical ability test for high mathematical ability. The subjects were given probability tasks covering sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. Credibility of the data was established through time triangulation. The results indicated that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. The results could contribute to curriculum developers in formulating probability learning goals for elementary school students. Indeed, teachers could teach probability with regard to gender differences.

  5. Combination of Evidence with Different Weighting Factors: A Novel Probabilistic-Based Dissimilarity Measure Approach

    Directory of Open Access Journals (Sweden)

    Mengmeng Ma

    2015-01-01

    To solve the invalidation problem of Dempster-Shafer theory of evidence (DS) under high conflict in multisensor data fusion, this paper presents a novel combination approach for conflicting evidence with different weighting factors using a new probabilistic dissimilarity measure. Firstly, an improved probabilistic transformation function is proposed to map basic belief assignments (BBAs) to probabilities. Then, a new dissimilarity measure integrating fuzzy nearness and an introduced correlation coefficient is proposed to characterize not only the difference between BBAs but also the divergence degree of the hypotheses that two BBAs support. Finally, the weighting factors used to reassign conflicts among BBAs are developed, and Dempster's rule is chosen to combine the discounted sources. Simple numerical examples are employed to demonstrate the merit of the proposed method. Analysis and comparison of the results show that the new combination approach can effectively solve the problem of conflict management, with better convergence performance and robustness.
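
    For concreteness, the sketch below implements plain Dempster's rule for two BBAs over a two-hypothesis frame, preceded by classical Shafer discounting as a stand-in for the paper's dissimilarity-derived weighting factors. The masses and discount weights are invented; the point is to show how discounting tempers a high-conflict combination.

```python
# Sketch of Dempster's rule with Shafer discounting; the discount weights
# here stand in for the paper's dissimilarity-based weighting factors, and
# all masses are illustrative.

def discount(bba, alpha, frame=frozenset("ab")):
    """Shafer discounting: scale masses by alpha, move the rest to the frame."""
    out = {h: alpha * m for h, m in bba.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2          # mass on empty intersections
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

A, B = frozenset("a"), frozenset("b")
m1 = {A: 0.9, B: 0.1}            # source 1 strongly supports 'a'
m2 = {A: 0.1, B: 0.9}            # source 2 conflicts: supports 'b'
fused = dempster(discount(m1, 0.8), discount(m2, 0.5))  # assumed weights
print({"".join(sorted(h)): round(v, 3) for h, v in fused.items()})
```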

  6. submission of art studio-based assignments: students experience

    African Journals Online (AJOL)

    PUBLICATIONS1

    are reluctant to complete their studio assignments on time are critically ... quantitative and qualitative data, derived from survey and interviews, were used to ... is therefore exploratory and studio based. It ... homogenous group of students who report pro- ... Assignment management ... The analyses in this study are based on data.

  7. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao; Wang, Shiqi; Zhang, Jian; Wang, Shanshe; Ma, Siwei

    2017-01-01

    The mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure

  8. A new automated assign and analysing method for high-resolution rotationally resolved spectra using genetic algorithms

    NARCIS (Netherlands)

    Meerts, W.L.; Schmitt, M.

    2006-01-01

    This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its

  9. Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome

    Science.gov (United States)

    Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah

    2017-06-01

    Measuring students' ability and performance is important in assessing how well students have learned and mastered statistical courses. Any improvement in learning will depend on the students' approaches to learning, which relate to several factors of learning, namely assessment methods comprising quizzes, tests, assignments and a final examination. This study attempts an alternative approach to measuring students' ability in an undergraduate statistical course based on the Rasch probabilistic model. Firstly, it explores the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually had a good understanding by the end of the 14-week class. In terms of students' performance in the final examination, their ability to understand the topics varies, with different probability values depending on the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic the most difficult to learn.
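
    The Rasch model underlying this analysis is compact enough to show directly: the probability that a student of ability theta answers an item of difficulty b correctly is P = exp(theta - b) / (1 + exp(theta - b)). The sketch below evaluates it for invented ability and difficulty values; these numbers are illustrative, not estimates from the study.

```python
# Sketch of the Rasch model: P(correct) as a function of student ability
# theta and item difficulty b. Ability/difficulty values are invented.

import math

def rasch_p(theta: float, b: float) -> float:
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

abilities = {"weak student": -1.0, "average student": 0.0, "strong student": 1.5}
items = {"counting rules": 1.2, "probability of an event": 0.0}

for who, theta in abilities.items():
    row = {item: f"{rasch_p(theta, b):.2f}" for item, b in items.items()}
    print(who, row)
```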

  10. Optimization (Alara) and probabilistic exposures: the application of optimization criteria to the control of risks due to exposures of a probabilistic nature

    International Nuclear Information System (INIS)

    Gonzalez, A.J.

    1989-01-01

    The paper describes the application of the principles of optimization recommended by the International Commission on Radiological Protection (ICRP) to the restraint of radiation risks due to exposures that may or may not be incurred and to which a probability of occurrence can be assigned. After describing the concept of probabilistic exposures, it proposes a basis for a converging policy of control for both certain and probabilistic exposures, namely the dose-risk relationship adopted for radiation protection purposes. On that basis some coherent approaches for dealing with probabilistic exposures, such as the limitation of individual risks, are discussed. The optimization of safety for reducing all risks from probabilistic exposures to as-low-as-reasonably-achievable (ALARA) levels is reviewed in full. The principles of optimization of protection are used as a basic framework, and the relevant factors to be taken into account when moving to probabilistic exposures are presented. The paper also reviews the decision-aiding techniques suitable for performing optimization, with particular emphasis on the multi-attribute utility-analysis technique. Finally, there is a discussion of some practical applications of decision-aiding multi-attribute utility analysis to probabilistic exposures, including the use of probabilistic utilities. In its final outlook, the paper emphasizes the need for standardization and solutions to generic problems if optimization of safety is to be successful.

  11. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
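
    As a concrete illustration of the PRM idea summarized above, here is a self-contained Python sketch that samples collision-free configurations and connects nearby pairs whose straight-line segment avoids a single disc obstacle. The workspace, obstacle, node count, and connection radius are all invented; a real planner would add graph search over the roadmap and, for pursuit-evasion, the agent-interaction augmentations the abstract refers to.

```python
# Minimal probabilistic-roadmap (PRM) construction sketch: sample valid
# points, connect near neighbors whose straight-line link is collision-free.
# The disc obstacle and parameters are hypothetical.

import math
import random

OBSTACLE = ((5.0, 5.0), 2.0)          # disc obstacle: center, radius
RADIUS = 3.0                          # connection radius

def valid(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r

def edge_free(a, b, steps=20):
    # Check intermediate points along the segment for collisions.
    return all(
        valid((a[0] + (b[0] - a[0]) * t / steps, a[1] + (b[1] - a[1]) * t / steps))
        for t in range(steps + 1)
    )

random.seed(3)
nodes = []
while len(nodes) < 60:                # rejection-sample valid configurations
    p = (random.uniform(0, 10), random.uniform(0, 10))
    if valid(p):
        nodes.append(p)

edges = [
    (i, j)
    for i in range(len(nodes)) for j in range(i + 1, len(nodes))
    if math.dist(nodes[i], nodes[j]) < RADIUS and edge_free(nodes[i], nodes[j])
]
print(f"roadmap: {len(nodes)} nodes, {len(edges)} edges")
```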

  12. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [University of Texas at Dallas; Feng, Cong [University of Texas at Dallas; Wang, Zhenke [University of Texas at Dallas; Zhang, Jie [University of Texas at Dallas

    2018-02-01

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim of reducing the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of the forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method, based on Monte Carlo sampling and the CDF, is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that, within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
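
    The scenario-generation step can be sketched compactly: forecasting errors are modeled by a Gaussian mixture and a large number of error scenarios is drawn from it. The sketch below samples by component selection, which is distributionally equivalent to inverting the mixture CDF; the mixture parameters, point forecast, and 20 MW ramp threshold are invented.

```python
# Sketch of GMM-based error-scenario generation for probabilistic ramp
# forecasting; all parameters below are invented for illustration.

import random

# (weight, mean, std) of each mixture component fitted to forecast errors (MW)
gmm = [(0.6, 0.0, 5.0), (0.3, 12.0, 8.0), (0.1, -15.0, 10.0)]

def sample_error():
    r, acc = random.random(), 0.0
    for weight, mu, sigma in gmm:
        acc += weight
        if r <= acc:
            return random.gauss(mu, sigma)
    return random.gauss(gmm[-1][1], gmm[-1][2])   # guard against rounding

random.seed(42)
point_forecast = [100.0, 110.0, 135.0, 150.0]     # MW, hypothetical
scenarios = [[f + sample_error() for f in point_forecast] for _ in range(1000)]

# A crude ramp check on one scenario: flag steps exceeding a 20 MW change
# (a real method would use the optimized swinging door algorithm here).
ramps = [j for j in range(1, len(point_forecast))
         if abs(scenarios[0][j] - scenarios[0][j - 1]) > 20]
print(scenarios[0], "ramp steps:", ramps)
```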

  13. Probabilistic logics and probabilistic networks

    CERN Document Server

    Haenni, Rolf; Wheeler, Gregory; Williamson, Jon; Andrews, Jill

    2014-01-01

    Probabilistic Logic and Probabilistic Networks presents a groundbreaking framework within which various approaches to probabilistic logic naturally fit. Additionally, the text shows how to develop computationally feasible methods to mesh with this framework.

  14. Confluence reduction for probabilistic systems

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    In this presentation we introduce a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We proved that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To

  15. Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software

    Science.gov (United States)

    Hunter, George; Boisvert, Benjamin

    2013-01-01

    This document is the final report for the project entitled "Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software." This report consists of 17 sections which document the results of the several subtasks of this effort. The Probabilistic NAS Platform (PNP) is an air operations simulation platform developed and maintained by the Saab Sensis Corporation. The improvements made to the PNP simulation include the following: an airborne distributed separation assurance capability, a required time of arrival assignment and conformance capability, and a tactical and strategic weather avoidance capability.

  16. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    Science.gov (United States)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    An oil pipeline network is one of the most important facilities for energy transportation, but an oil pipeline network accident may result in serious disasters. Analysis models for these accidents have been established mainly with three methods: event trees, accident simulation and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered, and a deployment rule for the factors has not been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.

  17. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    Science.gov (United States)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in the numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to gain new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more

  18. Parentage assignment with genomic markers: a major advance for understanding and exploiting genetic variation of quantitative traits in farmed aquatic animals

    Directory of Open Access Journals (Sweden)

    Marc eVandeputte

    2014-12-01

    Since the middle of the 1990s, parentage assignment using microsatellite markers has been introduced as a tool in aquaculture breeding. It now allows close to 100% assignment success, and it has offered new ways to develop aquaculture breeding using mixed-family designs under industry conditions. Its main achievements are the knowledge and control of family representation and inbreeding, especially in mass-spawning species, and above all the capacity to estimate reliable genetic parameters in any species and rearing system with no prior investment in structures, enabling the development of new breeding programs in many species. Parentage assignment should not be seen as a way to replace physical tagging, but as a new way to conceive breeding programs, which have to be optimized with its specific constraints in mind; one of the most important is to define the number of individuals to genotype so as to limit costs and maximize genetic gain while minimizing inbreeding. The recent possible shift to (for the moment) more costly SNP markers should benefit from future developments in genomics and marker-assisted selection to combine parentage assignment and indirect prediction of breeding values.
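
    As a minimal illustration of the assignment logic, the sketch below performs exclusion-based parentage checking with codominant markers: a candidate dam-sire pair is excluded if, at any locus, the offspring genotype cannot be formed from one allele of each parent. The genotypes are invented, and real assignment software additionally handles genotyping error and likelihood-based ranking.

```python
# Sketch of exclusion-based parentage assignment with codominant markers;
# genotypes (allele-size pairs per locus) are invented for illustration.

def compatible(offspring, dam, sire):
    """Mendelian check at each locus (genotypes are 2-tuples of alleles)."""
    for o, d, s in zip(offspring, dam, sire):
        ok = any(
            sorted((a_d, a_s)) == sorted(o)   # one allele from each parent
            for a_d in d for a_s in s
        )
        if not ok:
            return False
    return True

offspring = [(101, 105), (220, 224)]
candidates = {
    ("dam1", "sire1"): ([(101, 103), (220, 222)], [(105, 107), (224, 226)]),
    ("dam1", "sire2"): ([(101, 103), (220, 222)], [(109, 111), (224, 226)]),
}
for pair, (dam, sire) in candidates.items():
    print(pair, "compatible" if compatible(offspring, dam, sire) else "excluded")
```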

  19. A probabilistic EAC management of Ni-base Alloy in PWR

    International Nuclear Information System (INIS)

    Lee, Tae Hyun; Hwang, Il Soon

    2009-01-01

    Material aging is a principal cause of the aging of engineering systems that can lead to a reduction in reliability and continued safety, and an increase in the cost of operation and maintenance. As nuclear power plants get older, aging becomes an issue, because aging degradation can affect the structural integrity of systems and components in the same manner. To ensure the safe operation of nuclear power plants, it is essential to assess the effects of age-related degradation of plant structures, systems, and components. In this study, we propose a framework for probabilistic assessment of primary pressure-boundary components, with particular attention to environmentally assisted cracking (EAC) of piping and nozzles in nuclear power plants (NPPs). The framework targets degradation prediction using mechanistic models with probabilistic treatment, together with probabilistic assessment of defect detection and sizing. The EAC-induced failure process is also examined for the effect of uncertainties in key parameters of the models for EAC growth, final fracture and inspection, based on a sensitivity study and updating using a Bayesian inference approach. (author)

  20. Memristive Probabilistic Computing

    KAUST Repository

    Alahmadi, Hamzah

    2017-10-01

    In the era of the Internet of Things and Big Data, unconventional techniques are rising to accommodate the large size of data and the resource constraints. New computing structures are advancing based on non-volatile memory technologies and different processing paradigms. Additionally, the intrinsic resiliency of current applications leads to the development of creative techniques in computation. In those applications, approximate computing provides a perfect fit for optimizing energy efficiency while compromising on accuracy. In this work, we build probabilistic adders based on stochastic memristors. The probabilistic adders are analyzed with respect to the stochastic behavior of the underlying memristors. Multiple adder implementations are investigated and compared. The memristive probabilistic adder provides a different approach from typical approximate CMOS adders. Furthermore, it allows for high area savings and design flexibility in trading performance against power savings. To reach a performance level similar to approximate CMOS adders, the memristive adder achieves 60% power savings. An image-compression application is investigated using the memristive probabilistic adders to illustrate the performance-energy trade-off.

  2. Staged decision making based on probabilistic forecasting

    Science.gov (United States)

    Booister, Nikéh; Verkade, Jan; Werner, Micha; Cranston, Michael; Cumiskey, Lydia; Zevenbergen, Chris

    2016-04-01

    Flood forecasting systems reduce, but cannot eliminate, uncertainty about the future. Probabilistic forecasts explicitly show that uncertainty remains. However, compared to deterministic forecasts, a dimension is added ('probability' or 'likelihood'), and this added dimension makes decision making slightly more complicated. One technique of decision support is the cost-loss approach, which defines whether or not to issue a warning or implement mitigation measures (a risk-based method). With the cost-loss method a warning is issued when the ratio of the response costs to the damage reduction is less than or equal to the probability of the possible flood event. This cost-loss method is not widely used, because it motivates decisions based only on economic values and is relatively static (a yes/no decision with no further reasoning). Nevertheless it has high potential to improve risk-based decision making based on probabilistic flood forecasting, because no other methods are known that deal with probabilities in decision making. The main aim of this research was to explore ways of making decision making based on probabilities with the cost-loss method more applicable in practice. The exploration began by identifying other situations in which decisions are taken based on uncertain forecasts or predictions. These cases spanned a range of degrees of uncertainty, from known uncertainty to deep uncertainty. Based on the types of uncertainty, concepts for dealing with such situations and responses were analysed and possibly applicable concepts were chosen. Out of this analysis, the concepts of flexibility and robustness appeared to fit the existing method. Instead of taking big decisions with bigger consequences at once, the idea is that actions and decisions are cut up into smaller pieces, and finally the decision to implement is made based on the economic costs of decisions and measures and the reduced effect of flooding. The more lead-time there is in
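
    The cost-loss trigger, staged as the abstract suggests, fits in a few lines: act when the ratio of response cost C to avoidable loss L is at most the forecast flood probability p, so cheap measures fire early and expensive ones only under more certain forecasts. The stages and figures below are invented for illustration.

```python
# Sketch of a staged cost-loss decision rule: warn/act when C/L <= p,
# where C is the cost of acting, L the avoidable loss, and p the forecast
# probability of flooding. All numbers are illustrative.

def should_act(cost: float, loss: float, p_flood: float) -> bool:
    """Risk-based trigger: act iff the cost/loss ratio <= event probability."""
    return cost / loss <= p_flood

# Staged decisions: cheap early measures trigger at low probability,
# expensive ones only once the forecast becomes more certain.
stages = [
    ("alert duty officers",  1_000, 200_000),
    ("deploy barriers",     25_000, 200_000),
    ("evacuate district",  120_000, 200_000),
]

for p in (0.05, 0.2, 0.7):
    actions = [name for name, c, l in stages if should_act(c, l, p)]
    print(f"p={p:.2f}: {actions}")
```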

  3. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  5. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  6. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    Science.gov (United States)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear programming formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less

  7. Fuzzy-probabilistic multi agent system for breast cancer risk assessment and insurance premium assignment.

    Science.gov (United States)

    Tatari, Farzaneh; Akbarzadeh-T, Mohammad-R; Sabahi, Ahmad

    2012-12-01

    In this paper, we present an agent-based system for distributed risk assessment of breast cancer development employing fuzzy and probabilistic computing. The proposed fuzzy multi-agent system consists of multiple fuzzy agents that use fuzzy set theory to represent their soft (linguistic) information. Fuzzy risk assessment is quantified by the two linguistic variables high and low. Through fuzzy computations, the multi-agent system computes the fuzzy probabilities of breast cancer development based on various risk factors. By ranking the high-risk and low-risk fuzzy probabilities, the multi-agent system (MAS) decides whether the risk of breast cancer development is high or low. This information is then fed into an insurance premium adjuster to support preventive decision making as well as appropriate adjustment of insurance premium and risk. This final step of insurance analysis also provides a numeric measure demonstrating the utility of the approach. Furthermore, actual data were gathered from two hospitals in Mashhad over one year. The results are then compared with a fuzzy distributed approach. Copyright © 2012 Elsevier Inc. All rights reserved.
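
    The following toy fragment conveys only the flavour of the fuzzy computation (it is not the paper's multi-agent system): triangular membership functions turn numeric risk-factor scores into degrees of "low" and "high" risk, which are then aggregated and compared. All factor names and breakpoints are invented.

        # Toy fuzzy risk aggregation: triangular memberships, then averaging.
        def tri(x, a, b, c):
            """Triangular membership of x over (a, b, c)."""
            if x <= a or x >= c:
                return 0.0
            return (x - a)/(b - a) if x < b else (c - x)/(c - b)

        def factor_membership(value, lo=(0, 1, 5), hi=(4, 9, 10)):
            return {"low": tri(value, *lo), "high": tri(value, *hi)}

        # Hypothetical normalised risk-factor scores (0 = benign, 10 = adverse)
        factors = {"age": 6.5, "family_history": 8.0, "hormonal": 3.0}
        mships = [factor_membership(v) for v in factors.values()]
        low = sum(m["low"] for m in mships) / len(mships)
        high = sum(m["high"] for m in mships) / len(mships)
        print("risk assessed as", "HIGH" if high > low else "LOW", (low, high))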

  8. Probabilistic numerical discrimination in mice.

    Science.gov (United States)

    Berkay, Dilara; Çavdaroğlu, Bilgehan; Balcı, Fuat

    2016-03-01

    Previous studies showed that both human and non-human animals can discriminate between different quantities (i.e., time intervals, numerosities) with a limited level of precision due to their endogenous/representational uncertainty. In addition, other studies have shown that subjects can modulate their temporal categorization responses adaptively by incorporating information gathered regarding probabilistic contingencies into their time-based decisions. Despite the psychophysical similarities between the interval timing and nonverbal counting functions, the sensitivity of count-based decisions to probabilistic information remains an unanswered question. In the current study, we investigated whether exogenous probabilistic information can be integrated into numerosity-based judgments by mice. In the task employed in this study, reward was presented either after few (i.e., 10) or many (i.e., 20) lever presses, the last of which had to be emitted on the lever associated with the corresponding trial type. In order to investigate the effect of probabilistic information on performance in this task, we manipulated the relative frequency of different trial types across different experimental conditions. We evaluated the behavioral performance of the animals under models that differed in terms of their assumptions regarding the cost of responding (e.g., logarithmically increasing vs. no response cost). Our results showed for the first time that mice could adaptively modulate their count-based decisions based on the experienced probabilistic contingencies in directions predicted by optimality.

  9. Suppression of panel flutter of near-space aircraft based on non-probabilistic reliability theory

    Directory of Open Access Journals (Sweden)

    Ye-Wei Zhang

    2016-03-01

    Full Text Available The vibration active control of the composite panels with the uncertain parameters in the hypersonic flow is studied using the non-probabilistic reliability theory. Using the piezoelectric patches as active control actuators, dynamic equations of panel are established by finite element method and Hamilton’s principle. And the control model of panel with uncertain parameters is obtained. According to the non-probabilistic reliability index, and besides being based on H∞ robust control theory and non-probabilistic reliability theory, the non-probabilistic reliability performance function is given. Moreover, the relationships between the robust controller and H∞ performance index and reliability are established. Numerical results show that the control method under the influence of reliability, H∞ performance index, and approaching velocity is effective to the vibration suppression of panel in the whole interval of uncertain parameters.

  10. Satellite-Based Probabilistic Snow Cover Extent (SCE) Mapping at Hydro-Québec

    Science.gov (United States)

    Teasdale, Mylène; De Sève, Danielle; Angers, Jean-François; Perreault, Luc

    2016-04-01

    Over 40% of Canada's water resources are in Quebec, and Hydro-Québec has developed the potential to become one of the largest producers of hydroelectricity in the world, with a total installed capacity of 36,643 MW. Hydro-Québec's generating fleet includes 27 large reservoirs with a combined storage capacity of 176 TWh, as well as 668 dams and 98 control structures. Over 98% of all electricity used to supply the domestic market comes from water resources, and the excess output is sold on the wholesale markets. Efficient management of water resources is therefore essential, and it rests primarily on good river-flow estimation from appropriate hydrological data. Snow on the ground is one of the significant variables, representing 30% to 40% of the annual energy reserve. More specifically, information on snow cover extent (SCE) and snow water equivalent (SWE) is crucial for hydrological forecasting, particularly in northern regions, since the snowmelt provides the water that fills the reservoirs and is subsequently used for hydropower generation. For several years, Hydro-Québec's research institute (IREQ) has developed several algorithms to map SCE and SWE. So far, all of these methods have been deterministic. However, given the need to maximize the efficient use of all resources while ensuring reliability, electrical systems must now be managed taking all risks into account. Since snow cover estimation is based on limited spatial information, it is important to quantify and handle its uncertainty in the hydrological forecasting system. This paper presents the first results of a probabilistic algorithm for mapping SCE, combining a Bayesian mixture of probability distributions with multiple logistic regression models applied to passive microwave data. This approach assigns, for each grid point, probabilities to the mutually exclusive discrete outcomes "snow" and "no snow". Its performance was evaluated using the Brier score since it is particularly appropriate to
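
    A hedged sketch of the general recipe, multiple logistic regression yielding P(snow) per grid point and scored with the Brier score; the predictors and data below are synthetic stand-ins, not Hydro-Québec's passive-microwave inputs.

        # Sketch: logistic regression -> P(snow), evaluated with the Brier score.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 3))           # e.g. brightness temperatures, 3 channels
        truth = (X @ np.array([1.5, -1.0, 0.5]) + rng.normal(0, 1, 500) > 0).astype(int)

        model = LogisticRegression().fit(X[:400], truth[:400])
        p_snow = model.predict_proba(X[400:])[:, 1]   # probability of "snow" per grid point
        brier = np.mean((p_snow - truth[400:]) ** 2)  # 0 = perfect; 0.25 = constant 0.5
        print(f"Brier score: {brier:.3f}")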

  11. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    Directory of Open Access Journals (Sweden)

    Xiangmin Guan

    2015-01-01

    Considering the simultaneous reduction of airspace congestion and flight delay, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. First, an effective multi-island parallel evolution algorithm with multiple evolving populations is employed to improve optimization capability. Second, the nondominated sorting genetic algorithm II is applied to each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to handle. Finally, in order to maintain solution diversity and avoid premature convergence, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach improves solution quality effectively, showing superiority over existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies.
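
    As a small illustration of the Pareto machinery that NSGA-II-style methods build on (not the paper's framework), the sketch below extracts the non-dominated front for two minimization objectives, say congestion and delay, from synthetic candidate assignments.

        # Sketch: extract the Pareto (non-dominated) front for two objectives.
        import numpy as np

        rng = np.random.default_rng(3)
        congestion = rng.random(50)
        delay = rng.random(50)

        def pareto_front(f1, f2):
            """Indices of solutions not dominated when minimising f1 and f2."""
            idx = np.argsort(f1)                # sweep in order of increasing f1
            front, best_f2 = [], np.inf
            for i in idx:
                if f2[i] < best_f2:             # better in f2 than every cheaper-f1 point
                    front.append(i)
                    best_f2 = f2[i]
            return front

        print("non-dominated flow assignments:", pareto_front(congestion, delay))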

  12. Probabilistic confidence for decisions based on uncertain reliability estimates

    Science.gov (United States)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
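
    The notion of probabilistic confidence can be sketched in a few lines: given an uncertain estimate of the failure probability, compute the probability that the true value still meets a target. The lognormal uncertainty model and all numbers below are assumptions for illustration, not the paper's case study.

        # Sketch: confidence that the true failure probability meets a target.
        import numpy as np

        rng = np.random.default_rng(4)
        pf_target = 1e-4
        # uncertain failure-probability estimate: median 5e-5, factor-of-3 spread
        pf_samples = np.exp(rng.normal(np.log(5e-5), np.log(3), 100_000))
        confidence = np.mean(pf_samples <= pf_target)
        print(f"probabilistic confidence that pf <= {pf_target:g}: {confidence:.3f}")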

  13. Development of probabilistic fatigue curve for asphalt concrete based on viscoelastic continuum damage mechanics

    Directory of Open Access Journals (Sweden)

    Himanshu Sharma

    2016-07-01

    Owing to its roots in a fundamental thermodynamic framework, the continuum damage approach is popular for modeling asphalt concrete behavior. Currently used continuum damage models use mixture-averaged values for model parameters and assume a deterministic damage process. On the other hand, significant scatter is found in fatigue data generated even under extremely controlled laboratory testing conditions. Thus, currently used continuum damage models fail to account for the scatter observed in fatigue data. This paper illustrates a novel approach for probabilistic fatigue life prediction based on the viscoelastic continuum damage approach. Several specimens were tested for their viscoelastic and damage properties under uniaxial loading. The data thus generated were analyzed using viscoelastic continuum damage mechanics principles to predict fatigue life. Two-parameter Weibull, three-parameter Weibull, and lognormal distributions were fitted to the fatigue lives predicted using the viscoelastic continuum damage approach. It was observed that fatigue damage is best described by the Weibull distribution when compared to the lognormal distribution. Due to its flexibility, the three-parameter Weibull distribution fitted better than the two-parameter Weibull distribution. Further, significant differences were found between the probabilistic fatigue curves developed in this research and the traditional deterministic fatigue curve. The proposed methodology combines the advantages of continuum damage mechanics and probabilistic approaches. These probabilistic fatigue curves can be conveniently used for reliability-based pavement design. Keywords: Probabilistic fatigue curve, Continuum damage mechanics, Weibull distribution, Lognormal distribution
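
    A minimal sketch of the distribution-fitting step, assuming synthetic fatigue lives in place of the VECD-predicted ones: fit two- and three-parameter Weibull and lognormal distributions with scipy and compare log-likelihoods.

        # Sketch: compare Weibull (2p, 3p) and lognormal fits to fatigue lives.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        lives = rng.weibull(2.2, 40) * 1e5 + 2e4   # synthetic fatigue lives (cycles)

        fits = {
            "weibull-2p": stats.weibull_min.fit(lives, floc=0),   # location fixed at 0
            "weibull-3p": stats.weibull_min.fit(lives),           # location also free
            "lognormal":  stats.lognorm.fit(lives, floc=0),
        }
        for name, params in fits.items():
            dist = stats.weibull_min if "weibull" in name else stats.lognorm
            ll = np.sum(dist.logpdf(lives, *params))
            print(f"{name}: log-likelihood = {ll:.1f}")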

  14. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. Using a probabilistic model of genetic drift and selection, we showed that log odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and their variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences, thereby supplying a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
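
    The core estimator can be sketched as follows: the log odds ratio of allele counts in two populations, with a Woolf-type standard error and a normal 95% confidence interval. The paper additionally calibrates the variance using genome-wide variants, which this toy version omits; the counts below are invented.

        # Sketch: log odds ratio of allele frequencies with a normal 95% CI.
        import math

        def selection_difference(x1, n1, x2, n2):
            """x = derived-allele count, n = total allele count, per population."""
            a, b = x1, n1 - x1
            c, d = x2, n2 - x2
            log_or = math.log(a * d / (b * c))
            se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf-type standard error
            return log_or, (log_or - 1.96*se, log_or + 1.96*se)

        est, ci = selection_difference(x1=180, n1=400, x2=90, n2=400)
        print(f"log-OR = {est:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")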

  15. A ligand prediction tool based on modeling and reasoning with imprecise probabilistic knowledge.

    Science.gov (United States)

    Liu, Weiru; Yue, Anbu; Timson, David J

    2010-04-01

    Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the currently available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools for initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base, and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool. 2009 Elsevier Ireland Ltd. All rights reserved.

  16. Identifying the source of farmed escaped Atlantic salmon (Salmo salar): Bayesian clustering analysis increases accuracy of assignment

    DEFF Research Database (Denmark)

    Glover, Kevin A.; Hansen, Michael Møller; Skaala, Oystein

    2009-01-01

    44 cages located on 26 farms in the Hardangerfjord, western Norway. This fjord represents one of the major salmon farming areas in Norway, with a production of 57,000 t in 2007. Based upon genetic data from 17 microsatellite markers, significant but highly variable differentiation was observed among....... Accuracy of assignment varied greatly among the individual samples. For the Bayesian clustered data set consisting of five genetic groups, overall accuracy of self-assignment was 99%, demonstrating the effectiveness of this strategy to significantly increase accuracy of assignment, albeit at the expense...

  17. Web-Based Problem-Solving Assignment and Grading System

    Science.gov (United States)

    Brereton, Giles; Rosenberg, Ronald

    2014-11-01

    In engineering courses with very specific learning objectives, such as fluid mechanics and thermodynamics, it is conventional to reinforce concepts and principles with problem-solving assignments and to measure success in problem solving as an indicator of student achievement. While the modern-day ease of copying and searching for online solutions can undermine the value of traditional assignments, web-based technologies also provide opportunities to generate individualized, well-posed problems with an infinite number of different combinations of initial/final/boundary conditions, so that the probability of any two students being assigned identical problems in a course is vanishingly small. Such problems can be designed and programmed to be single- or multiple-step and self-grading; to allow students single or multiple attempts; to provide feedback when incorrect; to be selectable according to difficulty; to be incorporated within gaming packages; etc. In this talk, we discuss the use of a homework/exam generating program of this kind in a single-semester course, within a web-based client-server system that ensures secure operation.
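
    One plausible way to realize such individualized problems (a sketch, not the authors' system) is to derive each student's parameters from a seeded random generator, so the problem is unique per student yet exactly reproducible for grading; the hydrostatics example, course key and tolerance below are invented.

        # Sketch: per-student reproducible problem generation and auto-grading.
        import hashlib, random

        def make_problem(student_id: str, course_key: str = "ME332-F14"):
            seed = int(hashlib.sha256(f"{course_key}:{student_id}".encode()).hexdigest(), 16)
            rng = random.Random(seed)                 # same student -> same problem
            rho = rng.choice([850, 920, 1000, 1260])  # fluid density, kg/m^3
            h = round(rng.uniform(2.0, 8.0), 1)       # column height, m
            answer = rho * 9.81 * h                   # gauge pressure at depth, Pa
            text = f"A column of fluid (rho = {rho} kg/m^3) is {h} m deep. Find p at the bottom."
            return text, answer

        def grade(submitted: float, answer: float, tol: float = 0.01) -> bool:
            return abs(submitted - answer) <= tol * abs(answer)

        text, ans = make_problem("student42")
        print(text)
        print("auto-grade of a correct submission:", grade(ans, ans))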

  18. Probabilistic pathway construction.

    Science.gov (United States)

    Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha

    2011-07-01

    Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
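
    The probabilistic selection step can be sketched as follows, with an invented toy network: from the candidate reactions, pick one under the three schemes compared in the paper, high-connectivity-weighted, low-connectivity-weighted, or uniformly random.

        # Sketch: the three probabilistic reaction-selection schemes.
        import random

        # reaction -> substrate metabolites (stand-in network)
        reactions = {"R1": {"A", "B"}, "R2": {"B"}, "R3": {"A", "C", "D"}, "R4": {"D"}}
        connectivity = {"A": 14, "B": 3, "C": 55, "D": 7}   # reactions touching each metabolite

        def pick_reaction(candidates, scheme, rng=random.Random(6)):
            if scheme == "uniform":
                weights = [1.0] * len(candidates)
            else:
                conn = [sum(connectivity[m] for m in reactions[r]) for r in candidates]
                weights = conn if scheme == "high" else [1.0 / c for c in conn]
            return rng.choices(candidates, weights=weights, k=1)[0]

        cands = list(reactions)
        for scheme in ("high", "low", "uniform"):
            print(scheme, "->", pick_reaction(cands, scheme))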

  19. Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval

    Science.gov (United States)

    Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene

    2018-01-01

    Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie

  20. Species delineation using Bayesian model-based assignment tests: a case study using Chinese toad-headed agamas (genus Phrynocephalus)

    Directory of Open Access Journals (Sweden)

    Fu Jinzhong

    2010-06-01

    Background: Species are fundamental units in biology, yet much debate exists surrounding how we should delineate species in nature. Species discovery now requires the use of separate, corroborating datasets to quantify independently evolving lineages and test species criteria. However, the complexity of the speciation process has ushered in a need to infuse studies with new tools capable of aiding in species delineation. We suggest that model-based assignment tests are one such tool. This method circumvents constraints of traditional population genetic analyses and provides a novel means of describing cryptic and complex diversity in natural systems. Using toad-headed agamas of the Phrynocephalus vlangalii complex as a case study, we apply model-based assignment tests to microsatellite DNA data to test whether P. putjatia, a controversial species that closely resembles P. vlangalii morphologically, represents a valid species. Mitochondrial DNA and geographic data are also included to corroborate the assignment test results. Results: Assignment tests revealed two distinct nuclear DNA clusters, with 95% (230/243) of the individuals being assigned to one of the clusters with > 90% probability. The nuclear genomes of the two clusters remained distinct in sympatry, particularly at three syntopic sites, suggesting the existence of reproductive isolation between the identified clusters. In addition, a mitochondrial ND2 gene tree revealed two deeply diverged clades, which were largely congruent with the two nuclear DNA clusters, with a few exceptions. Historical mitochondrial introgression events between the two groups might explain the disagreement between the mitochondrial and nuclear DNA data. The nuclear DNA clusters and mitochondrial clades corresponded nicely to the hypothesized distributions of P. vlangalii and P. putjatia. Conclusions: These results demonstrate that assignment tests based on microsatellite DNA data can be powerful tools
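
    The likelihood engine behind such model-based assignment tests can be sketched in a few lines: assign a multilocus genotype to the population whose allele frequencies make it most probable, assuming Hardy-Weinberg proportions. The frequencies and allele labels below are invented, and real tools (e.g., STRUCTURE) do considerably more.

        # Sketch: assign a genotype by per-population multilocus likelihood.
        import math

        # per-population allele frequencies at two microsatellite loci (invented)
        freqs = {
            "pop_vlangalii": [{"152": 0.7, "156": 0.3}, {"201": 0.6, "205": 0.4}],
            "pop_putjatia":  [{"152": 0.1, "156": 0.9}, {"201": 0.2, "205": 0.8}],
        }

        def log_likelihood(genotype, pop):
            ll = 0.0
            for locus, (a1, a2) in enumerate(genotype):
                p = freqs[pop][locus].get(a1, 1e-3)
                q = freqs[pop][locus].get(a2, 1e-3)
                ll += math.log(p*q if a1 == a2 else 2*p*q)   # Hardy-Weinberg genotype prob.
            return ll

        genotype = [("156", "156"), ("205", "201")]
        lls = {pop: log_likelihood(genotype, pop) for pop in freqs}
        total = sum(math.exp(v) for v in lls.values())
        for pop, ll in lls.items():
            print(pop, "posterior (equal priors):", round(math.exp(ll)/total, 3))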

  1. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    Science.gov (United States)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansion (PCE) to approximate the original system, so that the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. A "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF performs better than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear, high-dimensional problems.
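
    For orientation, the sketch below shows the stochastic EnKF analysis step that PCKF-type methods aim to approximate more cheaply, in plain numpy; the state dimension, observation operator and noise level are arbitrary stand-ins.

        # Sketch: one stochastic EnKF analysis (update) step.
        import numpy as np

        rng = np.random.default_rng(7)
        n_state, n_ens = 20, 50
        X = rng.normal(size=(n_state, n_ens))     # forecast ensemble (e.g. pressure heads)
        H = np.zeros((1, n_state)); H[0, 4] = 1.0 # observe state component 4
        r = 0.1**2                                # observation error variance
        y = 0.8                                   # the observation

        A = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
        Pf = A @ A.T / (n_ens - 1)                # sample forecast covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + r*np.eye(1))   # Kalman gain
        perturbed_y = y + rng.normal(0, np.sqrt(r), (1, n_ens))    # perturbed observations
        Xa = X + K @ (perturbed_y - H @ X)        # analysis ensemble
        print("analysed mean of observed component:", Xa[4].mean().round(3))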

  2. Confluence Reduction for Probabilistic Systems (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2010-01-01

    This paper presents a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We prove that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To support the

  3. Moving beyond "Bookish Knowledge": Using Film-Based Assignments to Promote Deep Learning

    Science.gov (United States)

    Olson, Joann S.; Autry, Linda; Moe, Jeffry

    2016-01-01

    This article investigates the effectiveness of a film-based assignment given to adult learners in a graduate-level group counseling class. Semi-structured interviews were conducted with four students; data analysis suggested film-based assignments may promote deep approaches to learning (DALs). Participants indicated the assignment helped them…

  4. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    Science.gov (United States)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
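
    A minimal sketch of the proposed check, under the assumption of independent events: build the exact Poisson-binomial distribution of the number of occurrences from the issued probabilities by convolution, then compute a two-sided p-value for the observed count. The forecast values below are invented.

        # Sketch: exact Poisson-binomial test of forecast reliability.
        import numpy as np

        def poisson_binomial_pmf(probs):
            pmf = np.array([1.0])
            for p in probs:                       # convolve one Bernoulli at a time
                pmf = np.convolve(pmf, [1 - p, p])
            return pmf

        forecast_probs = [0.1, 0.3, 0.8, 0.5, 0.25, 0.6]   # issued event probabilities
        observed_count = 5                                 # events that actually occurred
        pmf = poisson_binomial_pmf(forecast_probs)
        # two-sided p-value: total mass of outcomes no more likely than the observation
        p_value = pmf[pmf <= pmf[observed_count] + 1e-12].sum()
        print(f"P(K = {observed_count}) = {pmf[observed_count]:.4f}, p-value = {p_value:.4f}")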

  5. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    Science.gov (United States)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models, combining their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in this development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) creation of a framework for a portable parallel architecture covering machine-independent multiple-instruction multiple-data (MIMD), single-instruction multiple-data (SIMD), hybrid, and distributed workstation-type computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.

  6. Probabilistic safety analysis vs probabilistic fracture mechanics - relation and necessary merging

    International Nuclear Information System (INIS)

    Nilsson, Fred

    1997-01-01

    A comparison is made between some general features of probabilistic fracture mechanics (PFM) and probabilistic safety assessment (PSA) in its standard form. We conclude that the result of a PSA is a numerically expressed level of confidence in the system based on the state of current knowledge; it is thus not an objective measure of risk. It is important to carefully define the precise nature of the probabilistic statement and relate it to a well-defined situation. Standardisation of PFM methods is necessary. PFM seems to be the only way to obtain estimates of the pipe-break probability; service statistics are of doubtful value because of the scarcity of data and statistical inhomogeneity. Collection of service data should be directed towards the occurrence of growing cracks.

  7. 14th International Probabilistic Workshop

    CERN Document Server

    Taerwe, Luc; Proske, Dirk

    2017-01-01

    This book presents the proceedings of the 14th International Probabilistic Workshop that was held in Ghent, Belgium in December 2016. Probabilistic methods are currently of crucial importance for research and developments in the field of engineering, which face challenges presented by new materials and technologies and rapidly changing societal needs and values. Contemporary needs related to, for example, performance-based design, service-life design, life-cycle analysis, product optimization, assessment of existing structures and structural robustness give rise to new developments as well as accurate and practically applicable probabilistic and statistical engineering methods to support these developments. These proceedings are a valuable resource for anyone interested in contemporary developments in the field of probabilistic engineering applications.

  8. Integration of Evidence Base into a Probabilistic Risk Assessment

    Science.gov (United States)

    Saile, Lyn; Lopez, Vilma; Bickham, Grandin; Kerstman, Eric; FreiredeCarvalho, Mary; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    INTRODUCTION: A probabilistic decision support model such as the Integrated Medical Model (IMM) utilizes an immense amount of input data, which necessitates a systematic, integrated approach to data collection and management. As a result of this approach, IMM is able to forecast medical events, resource utilization and crew health during space flight. METHODS: Inflight data is the most desirable input for the Integrated Medical Model. Non-attributable inflight data is collected from the Lifetime Surveillance of Astronaut Health study as well as from the engineers, flight surgeons, and astronauts themselves. When inflight data is unavailable, cohort studies, other models and Bayesian analyses are used, supplemented on occasion by input from subject matter experts. To determine the quality of evidence for a medical condition, the data source is categorized and assigned a level of evidence from 1 to 5, with level 1 being the highest. The collected data reside and are managed in a relational SQL database with a web-based interface for data entry and review. The database is also capable of interfacing with outside applications, which expands the capabilities of the database itself. Via the public interface, customers can access a formatted Clinical Findings Form (CLiFF) that outlines the model input and evidence base for each medical condition. Changes to the database are tracked using a documented configuration management process. DISCUSSION: This strategic approach provides a comprehensive data management plan for IMM. The IMM database's structure and architecture have proven to support additional uses, as seen in the analysis of resource utilization across medical conditions. In addition, the IMM database's web-based interface provides a user-friendly format for customers to browse and download the clinical information for medical conditions. It is this type of functionality that will provide Exploratory Medicine Capabilities the evidence base for their medical condition list

  9. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, M; Gu, F; Ball, A, E-mail: M.Ahmed@hud.ac.uk [Diagnostic Engineering Research Group, University of Huddersfield, HD1 3DH (United Kingdom)

    2011-07-19

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.
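
    A compact sketch of the search loop (not the authors' code): bit-strings mark selected features, and fitness is cross-validated accuracy; a k-nearest-neighbour classifier stands in for the probabilistic neural network, and the "vibration features" are synthetic.

        # Sketch: GA-style feature selection scored by cross-validated accuracy.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(8)
        X = rng.normal(size=(120, 30))            # 30 candidate diagnostic features
        y = rng.integers(0, 3, 120)               # three fault classes
        X[:, 5] += y; X[:, 12] += 2*y             # make two features informative

        def fitness(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()

        pop = rng.random((20, 30)) < 0.3          # initial population of feature masks
        for gen in range(25):
            scores = np.array([fitness(m) for m in pop])
            elite = pop[np.argsort(scores)[-10:]]             # keep the best half
            mates = rng.integers(0, 10, size=(10, 2))
            cuts = rng.integers(1, 29, size=10)
            kids = np.array([np.concatenate([elite[a, :c], elite[b, c:]])
                             for (a, b), c in zip(mates, cuts)])   # one-point crossover
            kids ^= rng.random(kids.shape) < 0.02             # bit-flip mutation
            pop = np.vstack([elite, kids])
        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected features:", np.flatnonzero(best))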

  10. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A

    2011-01-01

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.

  11. Delineating probabilistic species pools in ecology and biogeography

    OpenAIRE

    Karger, Dirk Nikolaus; Cord, Anna F; Kessler, Michael; Kreft, Holger; Kühn, Ingolf; Pompe, Sven; Sandel, Brody; Sarmento Cabral, Juliano; Smith, Adam B; Svenning, Jens-Christian; Tuomisto, Hanna; Weigelt, Patrick; Wesche, Karsten

    2016-01-01

    Aim To provide a mechanistic and probabilistic framework for defining the species pool based on species-specific probabilities of dispersal, environmental suitability and biotic interactions within a specific temporal extent, and to show how probabilistic species pools can help disentangle the geographical structure of different community assembly processes. Innovation Probabilistic species pools provide an improved species pool definition based on probabilities in conjuncti...

  12. Probabilistic safety assessment model in consideration of human factors based on object-oriented Bayesian networks

    International Nuclear Information System (INIS)

    Zhou Zhongbao; Zhou Jinglun; Sun Quan

    2007-01-01

    The effect of human factors on system safety is increasingly significant, yet it is often ignored in traditional probabilistic safety assessment methods. A new probabilistic safety assessment model based on object-oriented Bayesian networks is proposed in this paper. Human factors are integrated into existing event sequence diagrams. Classes of object-oriented Bayesian networks are then constructed and converted to latent Bayesian networks for inference. Finally, the inference results are integrated back into the event sequence diagrams for probabilistic safety assessment. The new method is applied to a loss-of-coolant accident at a nuclear power plant. The results show that the model is applicable not only to real-time situation assessment, but also to situation assessment based on a limited amount of information. The modeling complexity is kept down, and the new method is appropriate for large complex systems thanks to its object-oriented design. (authors)

  13. Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel

    International Nuclear Information System (INIS)

    Zhang, Yao; Wang, Jianxue; Luo, Xu

    2015-01-01

    Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations as the penetration of wind power generation increases. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on forecast accuracy, namely the heavily skewed and double-bounded nature of wind power density. Logarithmic transformation is used to reduce the skewness of the wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation also partially relieves the boundary-effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that serious density leakage remains after the transformation. To solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on data from an actual wind farm. Then, a detailed comparison is carried out of the proposed method and some existing probabilistic forecasting methods
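
    A hedged sketch of transformation-based kernel density estimation: smooth log-transformed (synthetic) wind-power data, map the density back with the Jacobian, and apply simple reflection at the upper bound as a crude stand-in for the paper's boundary-kernel correction.

        # Sketch: KDE of normalised wind power via log transform + reflection.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(9)
        p = np.clip(rng.beta(0.8, 3.0, 2000), 1e-4, 1 - 1e-4)   # skewed power sample on (0,1)

        y = np.log(p)                             # log transform reduces skewness; y <= 0
        kde = gaussian_kde(y)
        grid = np.linspace(1e-3, 0.999, 500)
        # back-transform f_P(p) = f_Y(log p)/p; reflecting about y = 0 is a simple
        # stand-in for the boundary kernel at the upper bound p = 1
        dens = (kde(np.log(grid)) + kde(-np.log(grid))) / grid
        print("approx. total mass on (0, 1):", np.trapz(dens, grid).round(3))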

  14. Students’ difficulties in probabilistic problem-solving

    Science.gov (United States)

    Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.

    2018-03-01

    Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during problem solving. This research used a qualitative method with a case-study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems, and were analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems fall into three categories. The first relates to difficulties in understanding the probabilistic problem; the second, to difficulties in choosing and using appropriate solution strategies; and the third, to difficulties with the computational process. The results indicate that students are not yet able to apply their knowledge and abilities to probabilistic problems. It is therefore important for mathematics teachers to plan probabilistic learning that can optimize students' probabilistic thinking ability.

  15. Probabilistic Decision Graphs - Combining Verification and AI Techniques for Probabilistic Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2004-01-01

    We adopt probabilistic decision graphs developed in the field of automated verification as a tool for probabilistic model representation and inference. We show that probabilistic inference has linear time complexity in the size of the probabilistic decision graph, that the smallest probabilistic ...

  16. A convergence theory for probabilistic metric spaces | Jäger ...

    African Journals Online (AJOL)

    We develop a theory of probabilistic convergence spaces based on Tardiff's neighbourhood systems for probabilistic metric spaces. We show that the resulting category is a topological universe and we characterize a subcategory that is isomorphic to the category of probabilistic metric spaces. Keywords: Probabilistic metric ...

  17. A Practical Probabilistic Graphical Modeling Tool for Weighing Ecological Risk-Based Evidence

    Science.gov (United States)

    Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for e...

  18. Is Probabilistic Evidence a Source of Knowledge?

    Science.gov (United States)

    Friedman, Ori; Turri, John

    2015-01-01

    We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence (Experiments 1 and 2A) or testimony providing causal information (Experiment 2B).…

  19. Probabilistic Cross-identification in Crowded Fields as an Assignment Problem

    Science.gov (United States)

    Budavári, Tamás; Basu, Amitabh

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum-likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
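
    In practice the matching step reduces to a few lines once a cost matrix of negative log-likelihoods is in hand; the sketch below uses scipy's implementation of the Hungarian algorithm on synthetic coordinates with assumed Gaussian positional errors.

        # Sketch: two-way catalog matching via the Hungarian algorithm.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(10)
        cat1 = rng.uniform(0, 1, (6, 2))          # detections, catalog 1 (x, y, arbitrary units)
        cat2 = cat1 + rng.normal(0, 0.02, (6, 2)) # same sources re-observed with noise
        rng.shuffle(cat2)

        sigma = 0.02
        d2 = ((cat1[:, None, :] - cat2[None, :, :])**2).sum(-1)
        cost = d2 / (2*sigma**2)                  # -log likelihood up to a constant (Gaussian)
        rows, cols = linear_sum_assignment(cost)  # maximum-likelihood one-to-one matching
        for i, j in zip(rows, cols):
            print(f"catalog-1 source {i} <-> catalog-2 source {j}")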

  20. PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM

    Energy Technology Data Exchange (ETDEWEB)

    Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu [Dept. of Applied Mathematics and Statistics, Johns Hopkins University, 3400 N. Charles St., MD 21218 (United States)

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum-likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.

  1. PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM

    International Nuclear Information System (INIS)

    Budavári, Tamás; Basu, Amitabh

    2016-01-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum-likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.

  2. AN EVOLUTIONARY ALGORITHM FOR CHANNEL ASSIGNMENT PROBLEM IN WIRELESS MOBILE NETWORKS

    Directory of Open Access Journals (Sweden)

    Yee Shin Chia

    2012-12-01

    The channel assignment problem in wireless mobile networks is the assignment of appropriate frequency spectrum to incoming calls while maintaining a satisfactory level of electromagnetic compatibility (EMC) constraints. An effective channel assignment strategy is important due to the limited capacity of the frequency spectrum in wireless mobile networks. Most existing channel assignment strategies are based on deterministic methods. In this paper, an adaptive genetic algorithm (GA) based channel assignment strategy is introduced for resource management and to reduce the effect of EMC interference. The most significant advantage of the proposed optimization method is its capability to handle both the reassignment of channels for existing calls and the allocation of a channel to a new incoming call, in an adaptive process that maximizes the utility of the limited resources. It adapts the population size to the number of eligible channels for a particular cell upon new call arrivals, to achieve reasonable convergence speed. MATLAB simulations on a 49-cell network model, for both uniform and nonuniform call traffic distributions, showed that the proposed channel optimization method always achieves a lower average blocking probability for new incoming calls than a deterministic channel assignment strategy.

  3. Genetic diversity and population structure of the Guinea pig (Cavia porcellus, Rodentia, Caviidae) in Colombia.

    Science.gov (United States)

    Burgos-Paz, William; Cerón-Muñoz, Mario; Solarte-Portilla, Carlos

    2011-10-01

    The aim was to establish the genetic diversity and population structure of three guinea pig lines from seven production zones located in Nariño, southwest Colombia. A total of 384 individuals were genotyped with six microsatellite markers. The measurement of intrapopulation diversity revealed allelic richness ranging from 3.0 to 6.56 and observed heterozygosity (Ho) from 0.33 to 0.60, with a deficit of heterozygous individuals. Although statistically significant, the differentiation between guinea-pig lines and populations coincided with the historical and geographical distribution of the populations. Likewise, high genetic identity between improved and native lines was established. An analysis of probabilistic group assignment revealed that each line should not be considered a genetically homogeneous group. The findings corroborate the absorption of native genetic material into the improved line introduced into Colombia from Peru. It is necessary to establish conservation programs for native-line individuals in Nariño, and to maintain genealogical and production records in order to reduce inbreeding in these populations.

  4. Tracing the geographic origin of traded leopard body parts in the Indian subcontinent with DNA-based assignment tests.

    Science.gov (United States)

    Mondol, Samrat; Sridhar, Vanjulavalli; Yadav, Prasanjeet; Gubbi, Sanjay; Ramakrishnan, Uma

    2015-04-01

    Illicit trade in wildlife products is rapidly decimating many species across the globe. Such trade is often underestimated for wide-ranging species until it is too late for the survival of their remaining populations. Policing this trade could be vastly improved if one could reliably determine geographic origins of illegal wildlife products and identify areas where greater enforcement is needed. Using DNA-based assignment tests (i.e., samples are assigned to geographic locations), we addressed these factors for leopards (Panthera pardus) on the Indian subcontinent. We created geography-specific allele frequencies from a genetic reference database of 173 leopards across India to infer geographic origins of DNA samples from 40 seized leopard skins. Sensitivity analyses of samples of known geographic origins and assignments of seized skins demonstrated robust assignments for Indian leopards. We found that confiscated pelts seized in small numbers were not necessarily from local leopards. The geographic footprint of large seizures appeared to be bigger than the cumulative footprint of several smaller seizures, indicating widespread leopard poaching across the subcontinent. Our seized samples had male-biased sex ratios, especially the large seizures. From multiple seized sample assignments, we identified central India as a poaching hotspot for leopards. The techniques we applied can be used to identify origins of seized illegal wildlife products and trade routes at the subcontinent scale and beyond. © 2014 Society for Conservation Biology.

  5. Interactive Level Design for iOS Assignment Delivery: A Case Study

    Directory of Open Access Journals (Sweden)

    Anson Brown

    2014-02-01

    This paper presents an application of an iOS-based online gaming assignment in a real classroom. The core concept of the project is a gameplay environment in which two players have full control over the creation and modification of levels. This level-design mechanism was implemented in an iOS-based game in the area of genetics, based on an existing written assignment. The game supports both instructors, who can create and post assignments, and students, who can take them. Two trials of the iOS application consisted of in-class testing with twenty-one students. Students first took the original paper assignment, followed by the iOS version. Start times, end times, and grades were recorded for both versions. A comprehensive comparison of the grades and times for the iOS version versus the paper version was conducted and is presented in this paper. Our study showed that the iOS version was completed much faster in nearly every case, while a strong delivery mechanism is needed to ensure that student grades and completion of the assignment are not affected. These results are not unexpected given some major differences between the two formats. Future updates and additions will address any currently existing issues.

  6. A Method Based on Dial's Algorithm for Multi-time Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Rongjie Kuang

    2014-03-01

    Because static traffic assignment performs poorly in reflecting actual conditions and dynamic traffic assignment may incur excessive computational cost, multi-time dynamic traffic assignment, which combines static and dynamic traffic assignment, balances precision and cost effectively. A method based on Dial's logit algorithm is proposed in this article to solve the dynamic stochastic user equilibrium problem in dynamic traffic assignment. Beforehand, a fitting function that approximately reflects the overloaded traffic condition of a link is proposed and used to build the corresponding model. A numerical example is given to illustrate the heuristic procedure of the method and to compare its results with those reported in the literature for the same example. The results show that the method based on Dial's algorithm is preferable.
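
    The logit loading at the heart of Dial-type assignment can be sketched as follows; note that Dial's algorithm achieves this split without enumerating paths, so the explicit path list below is purely illustrative, with invented costs and demand.

        # Sketch: logit split of O-D demand across paths, proportional to exp(-theta*cost).
        import math

        paths = {"via_A": 12.0, "via_B": 14.5, "via_C": 13.0}   # path travel costs (min)
        demand = 1000.0                                         # trips between the O-D pair
        theta = 0.5                                             # logit dispersion parameter

        weights = {k: math.exp(-theta*c) for k, c in paths.items()}
        total = sum(weights.values())
        flows = {k: demand*w/total for k, w in weights.items()}
        for k, f in flows.items():
            print(f"{k}: {f:.0f} trips")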

  7. Arbitrage and Hedging in a non probabilistic framework

    OpenAIRE

    Alvarez, Alexander; Ferrando, Sebastian; Olivares, Pablo

    2011-01-01

    The paper studies the concepts of hedging and arbitrage in a non-probabilistic framework. It provides conditions for non-probabilistic arbitrage based on the topological structure of the trajectory space and makes connections with the usual notion of arbitrage. Several examples illustrate non-probabilistic arbitrage as well as perfect replication of options under continuous and discontinuous trajectories; the results can then be applied in probabilistic models path by path. The approach is r...

  8. Using probabilistic theory to develop interpretation guidelines for Y-STR profiles.

    Science.gov (United States)

    Taylor, Duncan; Bright, Jo-Anne; Buckleton, John

    2016-03-01

    Y-STR profiling makes up a small but important proportion of forensic DNA casework. Y-STR profiles are often used when autosomal profiling has failed to yield an informative result; consequently, they often come from the most challenging samples. In addition, Y-STR loci are linked, meaning that evaluation of haplotype probabilities is based either on overly simplified counting methods or on computationally costly genetic models, neither of which extends well to the evaluation of mixed Y-STR data. For all of these reasons, Y-STR data analysis has not seen the same advances as autosomal STR data. We present here a probabilistic model for the interpretation of Y-STR data. Because probabilistic systems for Y-STR data are still some way from reaching active casework, we also describe how data can be analysed in a continuous way to generate interpretational thresholds and guidelines. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
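
    The "counting method" baseline that the paper moves beyond can be sketched in a few lines: estimate a haplotype frequency from a database count and attach a conservative Clopper-Pearson upper bound to acknowledge sampling error; the counts below are invented.

        # Sketch: counting estimate of a Y-STR haplotype frequency with an
        # exact one-sided (Clopper-Pearson) upper confidence bound.
        from scipy.stats import beta

        def haplotype_frequency(count: int, db_size: int, conf: float = 0.95):
            point = count / db_size
            upper = beta.ppf(conf, count + 1, db_size - count)   # CP upper bound
            return point, upper

        point, upper = haplotype_frequency(count=3, db_size=5000)
        print(f"point estimate {point:.5f}, 95% upper bound {upper:.5f}")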

  9. Research on probabilistic assessment method based on the corroded pipeline assessment criteria

    International Nuclear Information System (INIS)

    Zhang Guangli; Luo, Jinheng; Zhao Xinwei; Zhang Hua; Zhang Liang; Zhang Yi

    2012-01-01

    Pipeline integrity assessments are performed using conventional deterministic approaches, even though there are many uncertainties in the parameters of the assessment. In this paper, a probabilistic assessment method is provided for gas pipelines with corrosion defects, based on current corroded-pipe evaluation criteria, and the failure probability of corroded pipelines due to uncertainties in loading, material properties and measurement accuracy is estimated using the Monte Carlo technique. Furthermore, a sensitivity analysis approach is introduced to rank the influence of the various random variables on pipeline safety, and a method to determine the critical defect size based on an acceptable failure probability is proposed. Highlights: ► The Folias factor in pipeline corrosion assessment methods was analyzed. ► The probabilistic method was applied in corrosion assessment methods. ► The influence of assessment variables on pipeline reliability was ranked. ► The acceptable failure probability was used to determine the critical defect size.
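
    An illustrative Monte Carlo sketch in this spirit, using one common modified-B31G-style burst model and Folias factor (an assumption here, not necessarily the criteria used in the paper); all distributions and numbers are invented.

        # Sketch: Monte Carlo failure probability of a corroded pipe segment.
        import numpy as np

        rng = np.random.default_rng(11)
        n = 200_000
        D = 0.610                                 # pipe diameter, m
        t = rng.normal(0.0095, 0.0003, n)         # wall thickness, m
        d = rng.normal(0.0070, 0.0008, n)         # defect depth, m (measurement scatter)
        L = 0.150                                 # defect length, m
        smys = rng.normal(448e6, 15e6, n)         # yield strength, Pa
        p_op = rng.normal(7.0e6, 0.3e6, n)        # operating pressure, Pa

        z = L**2 / (D * t)
        M = np.sqrt(1 + 0.6275*z - 0.003375*z**2) # Folias (bulging) factor, modified B31G form
        flow = smys + 68.95e6                     # flow stress = SMYS + 10 ksi (assumption)
        p_burst = (2*t*flow/D) * (1 - 0.85*d/t) / (1 - 0.85*d/(t*M))
        pf = np.mean(p_burst <= p_op)             # limit state g = p_burst - p_op
        print(f"estimated failure probability: {pf:.2e}")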

  10. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publication: www.jcss.ethz.ch; 2001] and of the COST action E24 ‘Reliability of Timber Structures' [COST Action E 24, Reliability of timber structures. Several meetings and Publications, Internet Publication: http://www.km.fgg.uni-lj.si/coste24/coste24.htm; 2005]. The present proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties...

  11. Empirical Selection of Informative Microsatellite Markers within Co-ancestry Pig Populations Is Required for Improving the Individual Assignment Efficiency

    Directory of Open Access Journals (Sweden)

    Y. H. Li

    2014-05-01

    Full Text Available The Lanyu is a miniature pig breed indigenous to Lanyu Island, Taiwan, distantly related to Asian and European pig breeds. It has been inbred to generate two breeds and crossed with Landrace and Duroc to produce two hybrids for laboratory use. Selecting sets of informative genetic markers to track the genetic qualities of laboratory animals and stud stock is an important function of genetic databases. For more than two decades, Lanyu-derived breeds of common ancestry and crossbreeds have been used to examine the effectiveness of genetic marker selection and optimal approaches for individual assignment. In this paper, these pigs and the following breeds: Berkshire, Duroc, Landrace and Yorkshire, Meishan and Taoyuan, TLRI Black Pig No. 1, and Kaohsiung Animal Propagation Station Black pig are studied to build a genetic reference database. Nineteen microsatellite markers (loci) provide information on genetic variation and differentiation among the studied breeds. A high differentiation index (FST) and Cavalli-Sforza chord distances reveal genetic differentiation among breeds, including Lanyu’s inbred populations. Inbreeding values (FIS) show that Lanyu and its derived inbred breeds have a significant loss of heterozygosity. Individual assignment testing of 352 animals was done with different numbers of microsatellite markers in this study. The testing assigned 99% of the animals successfully into their correct reference populations based on 9 to 14 markers ranked by D-scores, allelic number, expected heterozygosity (HE) or FST, respectively. All mis-assigned individuals came from closely related Lanyu breeds. To improve individual assignment among closely related breeds, microsatellite markers selected from Lanyu populations with high polymorphism, heterozygosity, FST and D-scores were used. Only 6 to 8 markers ranked by HE, FST or allelic number were required to obtain 99% assignment accuracy. This result suggests empirical examination of assignment-error rates
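
    A minimal sketch of the likelihood-based individual assignment idea behind such tests: an individual's multi-locus genotype is scored against each candidate population's allele frequencies under Hardy-Weinberg proportions, and the individual is assigned to the most likely population. The breeds, loci and frequencies below are hypothetical.

    import math

    # Hypothetical frequencies of allele "A" at 3 biallelic loci in 2 breeds.
    freqs = {"Lanyu":    [0.80, 0.10, 0.60],
             "Landrace": [0.20, 0.70, 0.30]}

    def log_likelihood(genotype, p_list):
        # genotype gives the count (0, 1 or 2) of allele A at each locus;
        # Hardy-Weinberg genotype probabilities are q^2, 2pq, p^2.
        ll = 0.0
        for g, p in zip(genotype, p_list):
            q = 1.0 - p
            ll += math.log({0: q * q, 1: 2 * p * q, 2: p * p}[g])
        return ll

    individual = [2, 0, 1]   # hypothetical genotype
    best = max(freqs, key=lambda b: log_likelihood(individual, freqs[b]))
    print("Assigned to:", best)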

  12. Genetic interaction motif finding by expectation maximization – a novel statistical model for inferring gene modules from synthetic lethality

    Directory of Open Access Journals (Sweden)

    Ye Ping

    2005-12-01

    Full Text Available Abstract Background Synthetic lethality experiments identify pairs of genes with complementary function. More direct functional associations (for example, greater probability of membership in a single protein complex) may be inferred between genes that share synthetic lethal interaction partners than between genes that are directly synthetic lethal. Probabilistic algorithms that identify gene modules based on motif discovery are highly appropriate for the analysis of synthetic lethal genetic interaction data and have great potential in integrative analysis of heterogeneous datasets. Results We have developed Genetic Interaction Motif Finding (GIMF), an algorithm for unsupervised motif discovery from synthetic lethal interaction data. Interaction motifs are characterized by position weight matrices and optimized through expectation maximization. Given a seed gene, GIMF performs a nonlinear transform on the input genetic interaction data and automatically assigns genes to the motif or non-motif category. We demonstrate the capacity to extract known and novel pathways for Saccharomyces cerevisiae (budding yeast). Annotations suggested for several uncharacterized genes are supported by recent experimental evidence. GIMF is efficient in computation, requires no training and automatically down-weights promiscuous genes with high degrees. Conclusion GIMF effectively identifies pathways from synthetic lethality data with several unique features. It is mostly suitable for building gene modules around seed genes. Optimal choice of one single model parameter allows construction of gene networks with different levels of confidence. The impact of hub genes is automatically down-weighted, and the generic probabilistic framework of GIMF may be used to group other types of biological entities such as proteins based on stochastic motifs. Analysis of the strongest motifs discovered by the algorithm indicates that synthetic lethal interactions are depleted between genes within a motif, suggesting that synthetic

  13. Structural reliability codes for probabilistic design

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1997-01-01

    The choice of probabilistic code format has not only a strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can differ by several orders of magnitude between two different, but by and large equally justifiable, probabilistic code formats. Thus, the consequence is that a code format based on decision-theoretical concepts and formulated as an extension of a probabilistic code format must specify formal values to be used as costs of failure. A principle of prudence is suggested for guiding the choice of the reference probabilistic code format for constant reliability. In the author's opinion there is an urgent need for establishing a standard probabilistic reliability code. This paper presents some considerations that may be debatable, but nevertheless point...

  14. Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts

    OpenAIRE

    Jonsson, Leif; Borg, Markus; Broman, David; Sandahl, Kristian; Eldh, Sigrid; Runeson, Per

    2016-01-01

    Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learni...

  15. Probabilistic modelling and analysis of stand-alone hybrid power systems

    International Nuclear Information System (INIS)

    Lujano-Rojas, Juan M.; Dufo-López, Rodolfo; Bernal-Agustín, José L.

    2013-01-01

    As part of the Hybrid Intelligent Algorithm, a model based on an ANN (artificial neural network) is proposed in this paper to represent hybrid system behaviour considering the uncertainty related to wind speed and solar radiation, battery bank lifetime, and fuel prices. The Hybrid Intelligent Algorithm combines probabilistic analysis based on a Monte Carlo simulation approach with artificial neural network training embedded in a genetic algorithm optimisation model. The installation of a typical hybrid system was analysed. Probabilistic analysis was used to generate an input–output dataset of 519 samples that was later used to train the ANNs and so reduce the computational effort required. The generalisation ability of the ANNs was measured in terms of RMSE (Root Mean Square Error), MBE (Mean Bias Error), MAE (Mean Absolute Error), and R-squared estimators using another data group of 200 samples. The results obtained from estimating the expected energy not supplied, the probability of a given reliability level, and the expected value of net present cost show that the presented model is able to represent the main characteristics of a typical hybrid power system under uncertain operating conditions. - Highlights: • This paper presents a probabilistic model for stand-alone hybrid power systems. • The model considers the main sources of uncertainty related to renewable resources. • The Hybrid Intelligent Algorithm has been applied to represent hybrid system behaviour. • The installation of a typical hybrid system was analysed. • The results obtained from the case study validate the presented model
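
    A minimal sketch of the surrogate-training step described here: a small neural network is fitted to input-output samples produced by a probabilistic simulation, so later evaluations are cheap. The input variables, the synthetic response function and the network size are hypothetical; only the sample count of 519 echoes the abstract.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    # Hypothetical Monte Carlo dataset: inputs = (mean wind speed m/s,
    # mean irradiance kW/m2, fuel price $/L); output = energy not supplied.
    X = rng.uniform([3.0, 0.1, 0.5], [9.0, 0.3, 1.5], size=(519, 3))
    y = 5000.0 / (X[:, 0] * X[:, 1] * 10.0) + 50.0 * X[:, 2] \
        + rng.normal(0, 20, 519)

    surrogate = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                             random_state=0).fit(X, y)
    # The trained ANN stands in for the expensive simulation.
    print(surrogate.predict([[6.0, 0.2, 1.0]]))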

  16. Using ELM-based weighted probabilistic model in the classification of synchronous EEG BCI.

    Science.gov (United States)

    Tan, Ping; Tan, Guan-Zheng; Cai, Zi-Xing; Sa, Wei-Ping; Zou, Yi-Qun

    2017-01-01

    Extreme learning machine (ELM) is an effective machine learning technique with simple theory and fast implementation, which has recently gained increasing interest from various research fields. A new method that combines ELM with a probabilistic model method is proposed in this paper to classify electroencephalography (EEG) signals in a synchronous brain-computer interface (BCI) system. In the proposed method, the softmax function is used to convert the ELM output to classification probabilities. The Chernoff error bound, deduced from the Bayesian probabilistic model in the training process, is adopted as the weight in the discriminant process. Since the proposed method makes use of the knowledge from all preceding training datasets, its discriminating performance improves cumulatively. In test experiments based on datasets from BCI competitions, the proposed method is compared with other classification methods, including linear discriminant analysis, support vector machine, ELM and weighted probabilistic model methods. For comparison, the mutual information, classification accuracy and information transfer rate are considered as evaluation indicators for these classifiers. The results demonstrate that our method shows competitive performance against the other methods.
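
    A minimal sketch of the softmax conversion mentioned here, turning raw output-layer scores into class probabilities; the three-class score vector is hypothetical.

    import numpy as np

    def softmax(scores):
        # Convert raw output scores into probabilities that sum to 1;
        # subtracting the maximum stabilises the exponentials.
        z = scores - np.max(scores)
        e = np.exp(z)
        return e / e.sum()

    print(softmax(np.array([2.1, 0.4, -1.3])))   # hypothetical ELM outputs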

  17. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  18. Fragment assignment in the cloud with eXpress-D

    Science.gov (United States)

    2013-01-01

    Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters–“the cloud”. We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available and can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-thoughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems–such as new frameworks like Spark–for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
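
    A minimal single-machine sketch of the EM iteration for fragment assignment, ignoring transcript lengths and other refinements used by eXpress-D: the E-step splits each ambiguous fragment across compatible transcripts in proportion to current abundances, and the M-step re-estimates abundances from those soft assignments. The compatibility matrix is hypothetical.

    import numpy as np

    # Hypothetical compatibility matrix: fragment i can map to transcript j.
    C = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 1]], dtype=float)

    theta = np.full(3, 1.0 / 3.0)             # initial relative abundances
    for _ in range(100):
        R = C * theta                          # E-step: unnormalised weights
        R /= R.sum(axis=1, keepdims=True)      # each fragment's weights sum to 1
        theta = R.mean(axis=0)                 # M-step: re-estimate abundances
    print(theta.round(3))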

  19. Probabilistic Assessment of Structural Seismic Damage for Buildings in Mid-America

    International Nuclear Information System (INIS)

    Bai, Jong-Wha; Hueste, Mary Beth D.; Gardoni, Paolo

    2008-01-01

    This paper provides an approach to conduct a probabilistic assessment of structural damage due to seismic events with an application to typical building structures in Mid-America. The developed methodology includes modified damage state classifications based on the ATC-13 and ATC-38 damage states and the ATC-38 database of building damage. Damage factors are assigned to each damage state to quantify structural damage as a percentage of structural replacement cost. To account for the inherent uncertainties, these factors are expressed as random variables with a Beta distribution. A set of fragility curves, quantifying the structural vulnerability of a building, is mapped onto the developed methodology to determine the expected structural damage. The total structural damage factor for a given seismic intensity is then calculated using a probabilistic approach. Prediction and confidence bands are also constructed to account for the prevailing uncertainties. The expected seismic structural damage is assessed for a typical building structure in the Mid-America region using the developed methodology. The developed methodology provides a transparent procedure, where the structural damage factors can be updated as additional seismic damage data becomes available
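
    A worked sketch of the damage-aggregation step described here: discrete damage-state probabilities are obtained from exceedance-type fragilities at a given seismic intensity and combined with per-state damage factors into an expected structural damage factor. All numbers below are hypothetical, and the treatment of damage factors as fixed values ignores the Beta-distributed uncertainty used in the paper.

    # P(damage state >= ds) at one intensity level, none..complete,
    # and central damage factors (repair cost / replacement cost).
    p_exceed = [1.00, 0.72, 0.41, 0.15, 0.03]
    dmg_factor = [0.00, 0.05, 0.20, 0.55, 1.00]

    # Discrete state probabilities are differences of exceedance values.
    p_state = [p_exceed[i] - (p_exceed[i + 1] if i + 1 < len(p_exceed) else 0.0)
               for i in range(len(p_exceed))]
    expected_df = sum(p * d for p, d in zip(p_state, dmg_factor))
    print(f"Expected structural damage factor: {expected_df:.3f}")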

  20. POPPER, a simple programming language for probabilistic semantic inference in medicine.

    Science.gov (United States)

    Robson, Barry

    2015-01-01

    Our previous reports described the use of the Hyperbolic Dirac Net (HDN) as a method for probabilistic inference from medical data, and a proposed probabilistic medical Semantic Web (SW) language, Q-UEL, to provide that data. Rather like a traditional Bayes Net, the HDN provided estimates of joint and conditional probabilities, and was static, with no need for evolution due to "reasoning". Use of the SW will require, however, (a) at least the semantic triple, with more elaborate relations than conditional ones, as seen in the use of most verbs and prepositions, and (b) rules for logical, grammatical, and definitional manipulation that can generate changes in the inference net. Here the simple POPPER language for medical inference is described. It can be written automatically by Q-UEL, or by hand. Based on studies with our medical students, it is believed that a tool like this may help in medical education and that a physician unfamiliar with SW science can understand it. It is here used to explore the considerable challenges of assigning probabilities, and not least what the meaning and utility of inference net evolution would be for a physician. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Bisimulations meet PCTL equivalences for probabilistic automata

    DEFF Research Database (Denmark)

    Song, Lei; Zhang, Lijun; Godskesen, Jens Chr.

    2013-01-01

    Probabilistic automata (PAs) have been successfully applied in formal verification of concurrent and stochastic systems. Efficient model checking algorithms have been studied, where the most often used logics for expressing properties are based on probabilistic computation tree logic (PCTL) and its...

  2. Building a high-resolution T2-weighted MR-based probabilistic model of tumor occurrence in the prostate.

    Science.gov (United States)

    Nagarajan, Mahesh B; Raman, Steven S; Lo, Pechin; Lin, Wei-Chan; Khoshnoodi, Pooria; Sayre, James W; Ramakrishna, Bharath; Ahuja, Preeti; Huang, Jiaoti; Margolis, Daniel J A; Lu, David S K; Reiter, Robert E; Goldin, Jonathan G; Brown, Matthew S; Enzmann, Dieter R

    2018-02-19

    We present a method for generating a T2 MR-based probabilistic model of tumor occurrence in the prostate to guide the selection of anatomical sites for targeted biopsies and to serve as a diagnostic tool to aid radiological evaluation of prostate cancer. In our study, the prostate and any radiological findings within it were segmented retrospectively on 3D T2-weighted MR images of 266 subjects who underwent radical prostatectomy. Subsequent histopathological analysis determined both the ground truth and the Gleason grade of the tumors. A randomly chosen subset of 19 subjects was used to generate a multi-subject-derived prostate template. Subsequently, a cascading registration algorithm involving both affine and non-rigid B-spline transforms was used to register the prostate of every subject to the template. Corresponding transformation of radiological findings yielded a population-based probabilistic model of tumor occurrence. The quality of our model-building approach was statistically evaluated by measuring the proportion of correct placements of tumors in the prostate template, i.e., the number of tumors that maintained their anatomical location within the prostate after their transformation into the prostate template space. The probabilistic model built with tumors deemed clinically significant demonstrated a heterogeneous distribution of tumors, with a higher likelihood of tumor occurrence at the mid-gland anterior transition zone and the base-to-mid-gland posterior peripheral zones. Of 250 MR lesions analyzed, 248 maintained their original anatomical location with respect to the prostate zones after transformation to the prostate template. We present a robust method for generating a probabilistic model of tumor occurrence in the prostate that could aid clinical decision making, such as the selection of anatomical sites for MR-guided prostate biopsies.

  3. Probabilistic Routing Based on Two-Hop Information in Delay/Disruption Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Xu Wang

    2015-01-01

    Full Text Available We investigate an opportunistic routing protocol in delay/disruption tolerant networks (DTNs where the end-to-end path between source and destination nodes may not exist for most of the time. Probabilistic routing protocol using history of encounters and transitivity (PRoPHET is an efficient history-based routing protocol specifically proposed for DTNs, which only utilizes the delivery predictability of one-hop neighbors to make a decision for message forwarding. In order to further improve the message delivery rate and to reduce the average overhead of PRoPHET, in this paper we propose an improved probabilistic routing algorithm (IPRA, where the history information of contacts for the immediate encounter and two-hop neighbors has been jointly used to make an informed decision for message forwarding. Based on the Opportunistic Networking Environment (ONE simulator, the performance of IPRA has been evaluated via extensive simulations. The results show that IPRA can significantly improve the average delivery rate while achieving a better or comparable performance with respect to average overhead, average delay, and total energy consumption compared with the existing algorithms.
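
    A minimal sketch of the PRoPHET delivery-predictability rules that IPRA extends with two-hop information: a direct update on encounter, aging between encounters, and a transitive update through an intermediate node. The constants are the typical default values quoted in the PRoPHET literature; the numeric inputs are hypothetical.

    GAMMA, BETA, P_INIT = 0.98, 0.25, 0.75   # typical PRoPHET constants

    def on_encounter(p_ab):
        # Direct update when nodes A and B meet.
        return p_ab + (1.0 - p_ab) * P_INIT

    def age(p_ab, k):
        # Decay the predictability over k elapsed time units.
        return p_ab * GAMMA ** k

    def transitive(p_ac, p_ab, p_bc):
        # A learns about C through B.
        return p_ac + (1.0 - p_ac) * p_ab * p_bc * BETA

    p = on_encounter(0.0)    # first meeting: 0.75
    p = age(p, 10)           # ten idle time units
    print(p, transitive(0.1, p, 0.6))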

  4. Assigning breed origin to alleles in crossbred animals.

    Science.gov (United States)

    Vandenplas, Jérémie; Calus, Mario P L; Sevillano, Claudia A; Windig, Jack J; Bastiaansen, John W M

    2016-08-22

    For some species, animal production systems are based on the use of crossbreeding to take advantage of the increased performance of crossbred compared to purebred animals. Effects of single nucleotide polymorphisms (SNPs) may differ between purebred and crossbred animals for several reasons: (1) differences in linkage disequilibrium between SNP alleles and a quantitative trait locus; (2) differences in genetic backgrounds (e.g., dominance and epistatic interactions); and (3) differences in environmental conditions, which result in genotype-by-environment interactions. Thus, SNP effects may be breed-specific, which has led to the development of genomic evaluations for crossbred performance that take such effects into account. However, to estimate breed-specific effects, it is necessary to know breed origin of alleles in crossbred animals. Therefore, our aim was to develop an approach for assigning breed origin to alleles of crossbred animals (termed BOA) without information on pedigree and to study its accuracy by considering various factors, including distance between breeds. The BOA approach consists of: (1) phasing genotypes of purebred and crossbred animals; (2) assigning breed origin to phased haplotypes; and (3) assigning breed origin to alleles of crossbred animals based on a library of assigned haplotypes, the breed composition of crossbred animals, and their SNP genotypes. The accuracy of allele assignments was determined for simulated datasets that include crosses between closely-related, distantly-related and unrelated breeds. Across these scenarios, the percentage of alleles of a crossbred animal that were correctly assigned to their breed origin was greater than 90 %, and increased with increasing distance between breeds, while the percentage of incorrectly assigned alleles was always less than 2 %. For the remaining alleles, i.e. 0 to 10 % of all alleles of a crossbred animal, breed origin could not be assigned. The BOA approach accurately assigns

  5. Target assignment for security officers to K targets (TASK)

    International Nuclear Information System (INIS)

    Rowland, J.R.; Shelton, K.W.; Stunkel, C.B.

    1983-02-01

    A probabilistic algorithm is developed to provide an optimal Target Assignment for Security officers to K targets (TASK) using a maximin criterion. Under the assumption of only a limited number (N) of security officers, the TASK computer model determines deployment assignments which maximize the system protection against sabotage by an adversary who may select any link in the system, including the weakest, for the point of attack. Applying the TASK model to a hypothetical nuclear facility containing a nine-level building reveals that aggregate targets covering multiple vital areas should be utilized to reduce the number of possible target assignments to a value equal to or only slightly larger than N. The increased probability that a given aggregate target is covered by one or more security officers offsets the slight decrease in interruption probability due to its occurring earlier in the adversary's path. In brief, the TASK model determines the optimal maximin deployment strategy for limited numbers of security officers and calculates a quantitative measure of the resulting system protection
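
    The maximin idea can be illustrated with a brute-force sketch, not the TASK algorithm itself: enumerate the ways of posting N officers and keep the assignment whose weakest link is strongest. The vital areas and interruption probabilities (with and without an officer posted) are hypothetical.

    from itertools import combinations

    # Hypothetical interruption probabilities per vital area:
    # (without an officer posted, with an officer posted).
    areas = {"vault": (0.40, 0.95), "control room": (0.35, 0.90),
             "switchgear": (0.50, 0.85), "dock": (0.45, 0.80)}
    N = 2   # available security officers

    def system_protection(staffed):
        # The adversary attacks the weakest link, so system protection
        # is the minimum interruption probability over all areas.
        return min(hi if a in staffed else lo for a, (lo, hi) in areas.items())

    best = max(combinations(areas, N), key=system_protection)
    print(best, system_protection(best))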

  6. Application of genetic algorithms to the maintenance scheduling optimization in a nuclear system basing on reliability

    International Nuclear Information System (INIS)

    Lapa, Celso M. Franklin; Pereira, Claudio M.N.A.; Mol, Antonio C. de Abreu

    1999-01-01

    This paper presents a solution based on genetic algorithms and probabilistic safety analysis that can be applied to optimize the preventive maintenance policy of nuclear power plant safety systems. The goal of this approach is to improve the average availability of the system through the optimization of the preventive maintenance scheduling policy. The auxiliary feedwater system of a two-loop pressurized water reactor is used as a sample case in order to demonstrate the effectiveness of the proposed method. The results, when compared to those obtained by some standard maintenance policies, reveal quantitative gains in availability and operational safety levels. (author)
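
    A toy genetic-algorithm sketch in the same spirit, assuming a classic periodic-test unavailability model q(T) = rho + lambda*T/2 + tau/T in place of the plant model used in the paper; the rates and the three-train layout are hypothetical.

    import random

    LAM, RHO, TAU = 1e-4, 1e-3, 8.0   # failure rate /h, per-demand, outage h

    def unavailability(T):
        # Classic periodic-test model; optimum near sqrt(2*TAU/LAM) = 400 h.
        return RHO + LAM * T / 2.0 + TAU / T

    def fitness(ind):
        # One surveillance interval per train; maximise availability.
        return -sum(unavailability(T) for T in ind)

    pop = [[random.uniform(100, 5000) for _ in range(3)] for _ in range(30)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                      # selection
        pop = elite + [[max(50.0, T + random.gauss(0, 100))   # mutation
                        for T in random.choice(elite)] for _ in range(20)]
    print([round(T) for T in pop[0]])         # intervals near the optimum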

  7. PROBABILISTIC SEISMIC ASSESSMENT OF BASE-ISOLATED NPPS SUBJECTED TO STRONG GROUND MOTIONS OF TOHOKU EARTHQUAKE

    Directory of Open Access Journals (Sweden)

    AHMER ALI

    2014-10-01

    Full Text Available The probabilistic seismic performance of a standard Korean nuclear power plant (NPP) with an idealized isolation system is investigated in the present work. A probabilistic seismic hazard analysis (PSHA) of the Wolsong site on the Korean peninsula is performed by considering peak ground acceleration (PGA) as the earthquake intensity measure. A procedure is reported for the categorization and selection of two sets of ground motions of the Tohoku earthquake, i.e., long-period (Set A) and common (Set B) records, for the nonlinear time history response analysis of the base-isolated NPP. Limit state values, as multiples of the displacement responses of the NPP base isolation, are considered for the fragility estimation. The seismic risk of the NPP is further assessed by incorporating the rate of frequency exceedance and conditional failure probability curves. Furthermore, this framework attempts to show the unacceptable performance of the isolated NPP in terms of the probabilistic distribution and annual probability of limit states. The comparative results for long-period and common ground motions are discussed to contribute to the future safety of nuclear facilities against drastic events like Tohoku.

  8. Probabilistic seismic assessment of base-isolated NPPs subjected to strong ground motions of Tohoku earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Ahmer; Hayah, Nadin Abu; Kim, Doo Kie [Dept. of Civil and Environmental Engineering, Kunsan National University, Kunsan (Korea, Republic of); Cho, Sung Gook [R and D Center, JACE KOREA Company, Gyeonggido (Korea, Republic of)

    2014-10-15

    The probabilistic seismic performance of a standard Korean nuclear power plant (NPP) with an idealized isolation system is investigated in the present work. A probabilistic seismic hazard analysis (PSHA) of the Wolsong site on the Korean peninsula is performed by considering peak ground acceleration (PGA) as the earthquake intensity measure. A procedure is reported for the categorization and selection of two sets of ground motions of the Tohoku earthquake, i.e., long-period (Set A) and common (Set B) records, for the nonlinear time history response analysis of the base-isolated NPP. Limit state values, as multiples of the displacement responses of the NPP base isolation, are considered for the fragility estimation. The seismic risk of the NPP is further assessed by incorporating the rate of frequency exceedance and conditional failure probability curves. Furthermore, this framework attempts to show the unacceptable performance of the isolated NPP in terms of the probabilistic distribution and annual probability of limit states. The comparative results for long-period and common ground motions are discussed to contribute to the future safety of nuclear facilities against drastic events like Tohoku.

  9. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs, which ignore higher-order relationships. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher-order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.

  10. Shear-wave velocity-based probabilistic and deterministic assessment of seismic soil liquefaction potential

    Science.gov (United States)

    Kayen, R.; Moss, R.E.S.; Thompson, E.M.; Seed, R.B.; Cetin, K.O.; Der Kiureghian, A.; Tanaka, Y.; Tokimatsu, K.

    2013-01-01

    Shear-wave velocity (Vs) offers a means to determine the seismic resistance of soil to liquefaction by a fundamental soil property. This paper presents the results of an 11-year international project to gather new Vs site data and develop probabilistic correlations for seismic soil liquefaction occurrence. Toward that objective, shear-wave velocity test sites were identified, and measurements made for 301 new liquefaction field case histories in China, Japan, Taiwan, Greece, and the United States over a decade. The majority of these new case histories reoccupy those previously investigated by penetration testing. These new data are combined with previously published case histories to build a global catalog of 422 case histories of Vs liquefaction performance. Bayesian regression and structural reliability methods facilitate a probabilistic treatment of the Vs catalog for performance-based engineering applications. Where possible, uncertainties of the variables comprising both the seismic demand and the soil capacity were estimated and included in the analysis, resulting in greatly reduced overall model uncertainty relative to previous studies. The presented data set and probabilistic analysis also help resolve the ancillary issues of adjustment for soil fines content and magnitude scaling factors.

  11. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  12. Probabilistic insurance

    OpenAIRE

    Wakker, P.P.; Thaler, R.H.; Tversky, A.

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these preferences are intuitively appealing they are difficult to reconcile with expected utility theory. Under highly plausible assumptions about the utility function, willingness to pay for probabilistic i...

  13. Genetic diversity and population structure of the Guinea pig (Cavia porcellus, Rodentia, caviidae in Colombia

    Directory of Open Access Journals (Sweden)

    William Burgos-Paz

    2011-01-01

    Full Text Available The aim was to establish the genetic diversity and population structure of three guinea pig lines from seven production zones located in Nariño, southwest Colombia. A total of 384 individuals were genotyped with six microsatellite markers. The measurement of intrapopulation diversity revealed allelic richness ranging from 3.0 to 6.56 and observed heterozygosity (Ho) from 0.33 to 0.60, with a deficit of heterozygous individuals. Although statistically significant (p < 0.05), genetic differentiation between population pairs was found to be low. Genetic distance, as well as clustering of guinea pig lines and populations, coincided with the historical and geographical distribution of the populations. Likewise, high genetic identity between improved and native lines was established. An analysis of probabilistic group assignment revealed that each line should not be considered a genetically homogeneous group. The findings corroborate the absorption of native genetic material into the improved line introduced into Colombia from Peru. It is necessary to establish conservation programs for native-line individuals in Nariño, and to control genealogical and production records in order to reduce inbreeding in these populations.

  14. Adaptive predictors based on probabilistic SVM for real time disruption mitigation on JET

    Science.gov (United States)

    Murari, A.; Lungaroni, M.; Peluso, E.; Gaudio, P.; Vega, J.; Dormido-Canto, S.; Baruzzo, M.; Gelfusa, M.; Contributors, JET

    2018-05-01

    Detecting disruptions with sufficient anticipation time is essential to undertake any form of remedial strategy, mitigation or avoidance. Traditional predictors based on machine learning techniques can perform very well if properly optimised, but they do not provide a natural estimate of the quality of their outputs and they typically age very quickly. In this paper a new set of tools, based on probabilistic extensions of support vector machines (SVM), are introduced and applied for the first time to JET data. The probabilistic output constitutes a natural qualification of the prediction quality and provides additional flexibility. An adaptive training strategy ‘from scratch’ has also been devised, which allows the performance to be preserved even when the experimental conditions change significantly. Large JET databases of disruptions, covering entire campaigns and thousands of discharges, have been analysed, both for the graphite wall and the ITER-Like Wall. Performance significantly better than that of any previous predictor using adaptive training has been achieved, satisfying even the requirements of the next generation of devices. The adaptive approach to training has also provided unique information about the evolution of the operational space. The fact that the developed tools give the probability of disruption improves the interpretability of the results, provides an estimate of the predictor quality and gives new insights into the physics. Moreover, the probabilistic treatment makes it easier to insert these classifiers into general decision support and control systems.
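
    A minimal sketch of a probability-emitting SVM classifier, using scikit-learn's Platt-scaled SVC as a stand-in for the probabilistic SVM extensions developed in the paper; the two-feature dataset is synthetic.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # Synthetic two-feature discharges: class 0 = safe, class 1 = disruptive.
    X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                   rng.normal(2.5, 1.0, (200, 2))])
    y = np.repeat([0, 1], 200)

    clf = SVC(kernel="rbf", probability=True).fit(X, y)  # Platt scaling
    print(clf.predict_proba([[2.0, 2.2]]))               # [P(safe), P(disruption)]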

  15. Parentage assignment of progeny in mixed milt fertilization of ...

    African Journals Online (AJOL)

    Administrator

    2011-06-13

    Jun 13, 2011 ... individuals. Overall, 98.8% of progeny were assigned to their parents using Family Assignment. Program (FAP). Selection of hyper-variable microsatellites in Caspian brown trout to identify unique alleles was effective for unambiguous parentage determination and estimation of genetic diversity in this study.

  16. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos; Chaudhuri, Siddhartha; Koller, Daphne; Koltun, Vladlen

    2012-01-01

    represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation

  17. Comparison of Control Approaches in Genetic Regulatory Networks by Using Stochastic Master Equation Models, Probabilistic Boolean Network Models and Differential Equation Models and Estimated Error Analyzes

    Science.gov (United States)

    Caglar, Mehmet Umut; Pal, Ranadip

    2011-03-01

    The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid". However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of systems. These types of interactions are hard to analyze due to the lack of cell-level data and the probabilistic, nonlinear nature of the interactions. Several models are widely used to analyze and simulate these types of nonlinear interactions. Stochastic Master Equation (SME) models capture the probabilistic nature of the interactions in a detailed manner, at a high computational cost. On the other hand, Probabilistic Boolean Network (PBN) models give a coarse-scale picture of the stochastic processes at a lower computational cost. Differential Equation (DE) models give the time evolution of the mean values of processes in a highly cost-effective way. Understanding the relations between the predictions of these models is important for judging the reliability of simulations of genetic regulatory networks. In this work, the success of the mapping between SME, PBN and DE models is analyzed, and the accuracy and effectiveness of the control policies generated using PBN and DE models are compared.

  18. Application of probabilistic precipitation forecasts from a ...

    African Journals Online (AJOL)

    2014-02-14

    Feb 14, 2014 ... Application of probabilistic precipitation forecasts from a deterministic model ... aim of this paper is to investigate the increase in the lead-time of flash flood warnings of the SAFFG using probabilistic precipitation forecasts ... The procedure is applied to a real flash flood event and the ensemble-based.

  19. Integrated Deterministic-Probabilistic Safety Assessment Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.

    2014-02-01

    IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address the respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g., failures of equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e., transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)

  20. Probabilistic approaches to accounting for data variability in the practical application of bioavailability in predicting aquatic risks from metals.

    Science.gov (United States)

    Ciffroy, Philippe; Charlatchka, Rayna; Ferreira, Daniel; Marang, Laura

    2013-07-01

    The biotic ligand model (BLM) theoretically enables the derivation of environmental quality standards that are based on true bioavailable fractions of metals. Several physicochemical variables (especially pH, major cations, dissolved organic carbon, and dissolved metal concentrations) must, however, be assigned to run the BLM, but they are highly variable in time and space in natural systems. This article describes probabilistic approaches for integrating such variability during the derivation of risk indexes. To describe each variable using a probability density function (PDF), several methods were combined to 1) treat censored data (i.e., data below the limit of detection), 2) incorporate the uncertainty of the solid-to-liquid partitioning of metals, and 3) detect outliers. From a probabilistic perspective, 2 alternative approaches that are based on log-normal and Γ distributions were tested to estimate the probability of the predicted environmental concentration (PEC) exceeding the predicted non-effect concentration (PNEC), i.e., p(PEC/PNEC>1). The probabilistic approach was tested on 4 real-case studies based on Cu-related data collected from stations on the Loire and Moselle rivers. The approach described in this article is based on BLM tools that are freely available for end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders who are involved in risk assessments of metals for improving site-specific studies. Copyright © 2013 SETAC.
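
    A minimal Monte Carlo sketch of the exceedance-probability calculation described here, assuming the log-normal alternative and hypothetical distribution parameters for bioavailable Cu; the PEC and PNEC distributions would in practice come from the site data and BLM outputs.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    # Hypothetical log-normal summaries of bioavailable Cu (ug/L).
    pec = rng.lognormal(mean=np.log(3.0), sigma=0.6, size=n)
    pnec = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=n)
    # Risk index: probability that the ratio exceeds 1.
    print("p(PEC/PNEC > 1) =", np.mean(pec / pnec > 1.0))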

  1. Probabilistic costing of transmission services

    International Nuclear Information System (INIS)

    Wijayatunga, P.D.C.

    1992-01-01

    Costing of the transmission services of electrical utilities is required for transactions involving the transport of energy over a power network. The calculation of these costs based on Short Run Marginal Costing (SRMC) is preferred over other methods proposed in the literature due to its economic efficiency. In the research work discussed here, the concept of probabilistic use-of-system costing based on SRMC, which emerges as a consequence of the uncertainties in a power system, is introduced using two different approaches. The first approach, based on the Monte Carlo method, generates a large number of possible system states by simulating the random variables in the system using pseudo-random number generators. A second approach to probabilistic use-of-system costing is proposed based on numerical convolution and a multi-area representation of the transmission network. (UK)

  2. Global Infrasound Association Based on Probabilistic Clutter Categorization

    Science.gov (United States)

    Arora, Nimar; Mialle, Pierrick

    2016-04-01

    The IDC advances its methods and continuously improves its automatic system for the infrasound technology. The IDC focuses on enhancing the automatic system for the identification of valid signals and on optimizing the network detection threshold by identifying ways to refine the signal characterization methodology and association criteria. An objective of this study is to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the reviewed event bulletins. Indeed, a considerable number of signal detections are due to local clutter sources such as microbaroms, waterfalls, dams, gas flares, surf (ocean breaking waves), etc. These sources are either too diffuse or too local to form events. Worse still, the repetitive nature of this clutter leads to a large number of false event hypotheses due to the random matching of clutter at multiple stations. Previous studies, for example [1], have worked on the categorization of clutter using long-term trends in detection azimuth, frequency, and amplitude at each station. In this work we continue the same line of reasoning to build a probabilistic model of clutter that is used as part of NETVISA [2], a Bayesian approach to network processing. The resulting model is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. References: [1] Infrasound categorization: towards a statistics-based approach. J. Vergoz, P. Gaillard, A. Le Pichon, N. Brachet, and L. Ceranna. ITW 2011. [2] NETVISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013.

  3. Distributed Schemes for Crowdsourcing-Based Sensing Task Assignment in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Linbo Zhai

    2017-01-01

    Full Text Available Spectrum sensing is an important issue in cognitive radio networks. Unlicensed users can access the licensed wireless spectrum only when it is sensed to be idle. Since mobile terminals such as smartphones and tablets are popular among people, spectrum sensing can be assigned to these mobile intelligent terminals, an approach called the crowdsourcing method. Based on the crowdsourcing method, this paper studies distributed schemes for assigning the spectrum sensing task to mobile terminals such as smartphones and tablets. Considering the fact that mobile terminals' positions may influence the sensing results, a precise sensing effect function is designed for crowdsourcing-based sensing task assignment. We aim to maximize the sensing effect function and cast this optimization problem as one of crowdsensing task assignment in cognitive radio networks. This problem is difficult to solve because its complexity increases exponentially with the number of mobile terminals. To assign the crowdsensing task, we propose four distributed algorithms with different transition probabilities and use a Markov chain to analyze the approximation gap of the proposed schemes. Simulation results evaluate the average performance of the proposed algorithms and validate their convergence.

  4. A probabilistic risk assessment for field radiography based on expert judgment and opinion

    International Nuclear Information System (INIS)

    Jang, Han-Ki; Ryu, Hyung-Joon; Kim, Ji-Young; Lee, Jai-Ki; Cho, Kun-Woo

    2011-01-01

    A probabilistic approach was applied to assess the radiation risk associated with field radiography using gamma sources. The Delphi method, based on expert judgments and opinions, was used in the process of characterizing parameters affecting risk, which are inevitably subject to large uncertainties. A mathematical approach applying Bayesian inference was employed for data processing to improve the Delphi results. This process consists of three phases: (1) setting prior distributions, (2) constructing the likelihood functions and (3) deriving the posterior distributions based on the likelihood functions. This approach to characterizing input parameters using Bayesian inference provides improved risk estimates without intentional rejection of part of the data, which demonstrates the utility of Bayesian updating of distributions of uncertain input parameters in PRA (Probabilistic Risk Assessment). The data analysis portion of the PRA for field radiography is addressed to estimate the parameters used to determine the frequencies and consequences of the various events modeled. In this study, radiological risks for workers and for members of the public in the vicinity of the workplace are estimated for the field radiography system in Korea based on two-dimensional Monte Carlo Analysis (2D MCA). (author)

  5. Incorporating linguistic, probabilistic, and possibilistic information in a risk-based approach for ranking contaminated sites.

    Science.gov (United States)

    Zhang, Kejiang; Achari, Gopal; Pei, Yuansheng

    2010-10-01

    Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs together, decision makers cannot fully take advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. Uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be properly represented as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables according to its nature. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain. The propagation of hybrid uncertainties is then carried out in the same domain. This methodology can use the original site information directly as much as possible. The case study shows that this systematic methodology provides more reasonable results. © 2010 SETAC.

  6. Analyzing State Sequences with Probabilistic Suffix Trees: The PST R Package

    Directory of Open Access Journals (Sweden)

    Alexis Gabadinho

    2016-08-01

    Full Text Available This article presents the PST R package for categorical sequence analysis with probabilistic suffix trees (PSTs), i.e., structures that store variable-length Markov chains (VLMCs). VLMCs allow high-order dependencies in categorical sequences to be modeled with parsimonious models based on simple estimation procedures. The package is specifically adapted to the field of social sciences, as it allows VLMC models to be learned from sets of individual sequences possibly containing missing values; in addition, the package is extended to account for case weights. This article describes how a VLMC model is learned from one or more categorical sequences and stored in a PST. The PST can then be used for sequence prediction, i.e., to assign a probability to whole observed or artificial sequences. This feature supports data mining applications such as the extraction of typical patterns and outliers. The article also introduces original visualization tools for both the model and the outcomes of sequence prediction. Other features, such as functions for pattern mining and artificial sequence generation, are described as well. The PST package also allows for the computation of probabilistic divergence between two models and the fitting of segmented VLMCs, where sub-models fitted to distinct strata of the learning sample are stored in a single PST.
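
    A minimal sketch (in Python, not the R package itself) of how a stored VLMC assigns a probability to a whole sequence: the longest suffix of the current context present in the tree supplies the next-symbol distribution. The two-symbol model below is hypothetical.

    # Hypothetical VLMC: each stored context maps to a next-symbol law.
    model = {"":   {"a": 0.5, "b": 0.5},
             "a":  {"a": 0.3, "b": 0.7},
             "ba": {"a": 0.8, "b": 0.2}}

    def next_dist(context):
        # Back off to the longest suffix of the context stored in the tree.
        for i in range(len(context) + 1):
            if context[i:] in model:
                return model[context[i:]]

    def seq_prob(seq):
        # Probability of a whole sequence as a product of next-symbol terms.
        p = 1.0
        for i, s in enumerate(seq):
            p *= next_dist(seq[:i])[s]
        return p

    print(seq_prob("abab"))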

  7. Probabilistic Modeling of the Fatigue Crack Growth Rate for Ni-base Alloy X-750

    International Nuclear Information System (INIS)

    Yoon, J.Y.; Nam, H.O.; Hwang, I.S.; Lee, T.H.

    2012-01-01

    Extending the operating life of existing nuclear power plants (NPPs) beyond 60 years raises many aging problems in passive components, such as PWSCC, IASCC, FAC and corrosion fatigue. Safety analysis combines deterministic and probabilistic analyses, but general probabilistic analyses such as probabilistic safety assessment (PSA) involve many uncertainties in parameters and in their relationships. Bayesian inference decreases these uncertainties by updating the unknown parameters, helping to ensure the reliability of passive components (e.g., pipes) as well as active components (e.g., valves, pumps) in NPPs. On this basis, a probabilistic model for failures is developed and the fatigue crack growth rate (FCGR) for Ni-base Alloy X-750 is updated.

  8. Optimisation of test and maintenance based on probabilistic methods

    International Nuclear Information System (INIS)

    Cepin, M.

    2001-01-01

    This paper presents a method which, based on the models and results of probabilistic safety assessment, minimises nuclear power plant risk by optimising the arrangement of safety equipment outages. The test and maintenance activities of the safety equipment are arranged in time, so the classical static fault tree models are extended with time requirements to be capable of modelling real plant states. A house event matrix is used, which enables modelling of the equipment arrangements through discrete points of time. The result of the method is the determination of the configuration of equipment outages that results in minimal risk, where risk is represented by system unavailability. (authors)

  9. Probabilistic Structural Analysis of SSME Turbopump Blades: Probabilistic Geometry Effects

    Science.gov (United States)

    Nagpal, V. K.

    1985-01-01

    A probabilistic study was initiated to evaluate the effects of tolerances in the geometric and material properties on the structural response of turbopump blades. To complete this study, a number of important probabilistic variables were identified which are believed to affect the structural response of the blade. In addition, a methodology was developed to statistically quantify the influence of these probabilistic variables in an optimized way. The identified variables include random geometric and material property perturbations, different loadings, and a probabilistic combination of these loadings. The influences of these probabilistic variables are to be quantified by evaluating the blade structural response. Studies of the geometric perturbations were conducted for a flat plate geometry as well as for a space shuttle main engine blade geometry using a special-purpose code based on the finite element approach. Analyses indicate that the variances of the perturbations about given mean values have a significant influence on the response.

  10. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    Science.gov (United States)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The probability that the inequality Fs < 1 holds over the sampled soil mechanical parameters is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides at 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and shows more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
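
    A minimal Monte Carlo sketch of this idea, assuming a dry infinite-slope factor of safety rather than the paper's full model; the parameter distributions, slope geometry and unit weight are hypothetical. The landslide probability is estimated as the fraction of samples with Fs < 1.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    c = rng.normal(8.0, 2.0, n)                     # cohesion, kPa
    phi = np.radians(rng.normal(30.0, 3.0, n))      # friction angle, deg
    theta, z, gamma = np.radians(35.0), 2.0, 18.0   # slope, depth m, kN/m3

    # Dry infinite-slope factor of safety.
    fs = (c + gamma * z * np.cos(theta)**2 * np.tan(phi)) \
         / (gamma * z * np.sin(theta) * np.cos(theta))
    print("P(landslide) = P(Fs < 1) =", np.mean(fs < 1.0))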

  11. Probabilistic safety assessment as a standpoint for decision making

    International Nuclear Information System (INIS)

    Cepin, M.

    2001-01-01

    This paper focuses on the role of probabilistic safety assessment in decision making. The prerequisites for using the results of probabilistic safety assessment and the criteria for decision making based on probabilistic safety assessment are discussed. The decision-making process is described. It provides an evaluation of the risk impact of the issue under investigation. Selected examples that highlight the described process are discussed. (authors)

  12. Process for computing geometric perturbations for probabilistic analysis

    Science.gov (United States)

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  13. Short tandem repeat (STR) based genetic diversity and relationship of indigenous Niger cattle

    Directory of Open Access Journals (Sweden)

    M. Grema

    2017-11-01

    Full Text Available The diversity of cattle in Niger is predominantly represented by three indigenous breeds: Zebu Arabe, Zebu Bororo and Kuri. This study aimed at characterizing the genetic diversity and relationships of Niger cattle breeds using short tandem repeat (STR) marker variations. A total of 105 cattle from all three breeds were genotyped at 27 STR loci. High levels of allelic and gene diversity were observed, with overall means of 8.7 and 0.724 respectively. The mean inbreeding estimate within breeds was moderate, at 0.024, 0.043 and 0.044 in Zebu Arabe, Zebu Bororo and Kuri cattle respectively. The global F statistics showed low genetic differentiation among Niger cattle, with about 2.6 % of the total variation attributed to between-breed differences. A neighbor-joining tree derived from pairwise allele sharing distances revealed Zebu Arabe and Kuri clustering together, while Zebu Bororo appeared to be relatively distinct from the other two breeds. High levels of admixture were evident from the distribution of pairwise inter-individual allele sharing distances, which showed individuals across populations being more related than individuals within populations. Individuals were assigned to their respective source populations based on STR genotypes, and the percent correct assignment of Zebu Bororo (87.5 to 93.8 %) was consistently higher than that of Zebu Arabe (59.3 to 70.4 %) and Kuri (80.0 to 83.3 %) cattle. The qualitative and quantitative tests for mutation-drift equilibrium revealed an absence of genetic bottleneck events in Niger cattle in the recent past. The high genetic diversity and poor genetic structure among indigenous cattle breeds of Niger might be due to historic zebu–taurine admixture and ongoing breeding practices in the region. The results of the present study are expected to help in formulating effective strategies for conservation and genetic improvement of indigenous Niger cattle breeds.
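
    A hedged sketch of likelihood-based population assignment from STR genotypes of the kind used above. Each population's likelihood is the product over loci of the Hardy-Weinberg genotype probability, and the individual is assigned to the population with the highest likelihood; the allele frequencies and genotype here are made up for illustration.

```python
import numpy as np

# freqs[pop][locus][allele] -> allele frequency (illustrative values)
freqs = {
    "Zebu Arabe":  [{120: 0.6, 124: 0.4}, {200: 0.3, 204: 0.7}],
    "Zebu Bororo": [{120: 0.2, 124: 0.8}, {200: 0.8, 204: 0.2}],
    "Kuri":        [{120: 0.5, 124: 0.5}, {200: 0.5, 204: 0.5}],
}
genotype = [(120, 124), (204, 204)]  # one individual, two STR loci

def log_likelihood(pop_freqs, genotype, eps=1e-4):
    """Sum of per-locus log Hardy-Weinberg genotype probabilities."""
    ll = 0.0
    for locus, (a, b) in zip(pop_freqs, genotype):
        p, q = locus.get(a, eps), locus.get(b, eps)
        ll += np.log(2 * p * q) if a != b else np.log(p * p)
    return ll

scores = {pop: log_likelihood(f, genotype) for pop, f in freqs.items()}
print(max(scores, key=scores.get), scores)
```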

  14. A common fixed point for operators in probabilistic normed spaces

    International Nuclear Information System (INIS)

    Ghaemi, M.B.; Lafuerza-Guillen, Bernardo; Razani, A.

    2009-01-01

    Probabilistic metric spaces were introduced by Karl Menger. Alsina, Schweizer and Sklar gave a general definition of probabilistic normed space based on the definition of Menger [Alsina C, Schweizer B, Sklar A. On the definition of a probabilistic normed space. Aequationes Math 1993;46:91-8]. Here, we consider the equicontinuity of a class of linear operators in probabilistic normed spaces and, finally, a common fixed point theorem is proved. An application to quantum mechanics is considered.

  15. Identification of probabilistic approaches and map-based navigation ...

    Indian Academy of Sciences (India)

    B Madhevan

    2018-02-07

    Feb 7, 2018 ... consists of three processes: map learning (ML), localization and PP [73–76]. (i) ML ...... [83] Thrun S 2001 A probabilistic online mapping algorithm for teams of .... for target tracking using fuzzy logic controller in game theoretic ...

  16. Global optimization of maintenance and surveillance testing based on reliability and probabilistic safety assessment. Research project

    International Nuclear Information System (INIS)

    Martorell, S.; Serradell, V.; Munoz, A.; Sanchez, A.

    1997-01-01

    The background, objective, scope, detailed working plan, follow-up, and final product of the project ''Global optimization of maintenance and surveillance testing based on reliability and probabilistic safety assessment'' are described.

  17. An enhancement of selection and crossover operations in real-coded genetic algorithm for large-dimensionality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Noh Sung; Lee, Jongsoo [Yonsei University, Seoul (Korea, Republic of)

    2016-01-15

    The present study aims to implement a new selection method and a novel crossover operation in a real-coded genetic algorithm. The proposed selection method facilitates the establishment of a successively evolved population by combining several subpopulations: an elitist subpopulation, an offspring subpopulation and a mutated subpopulation. A probabilistic crossover is performed based on a measure of probabilistic distance between the individuals. The concept of ‘allowance’ is suggested to describe the level of variance in the crossover operation. A number of nonlinear/non-convex functions and engineering optimization problems are explored to verify the capabilities of the proposed strategies. The results are compared with those obtained from other genetic and nature-inspired algorithms.
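
    An illustrative sketch of a distance-based probabilistic crossover for a real-coded GA. The 'allowance' parameter is interpreted here, as our assumption rather than the paper's exact operator, as the level of variance permitted around the parents: offspring genes are drawn from a normal distribution centred between the parents with spread proportional to the parents' per-gene distance.

```python
import numpy as np

rng = np.random.default_rng(1)

def probabilistic_crossover(p1, p2, allowance=0.5):
    """Blend two real-coded parents; larger allowance -> more exploration."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    centre = 0.5 * (p1 + p2)
    spread = allowance * np.abs(p1 - p2)        # per-gene distance-based variance
    return rng.normal(centre, spread + 1e-12)   # one offspring vector

child = probabilistic_crossover([1.0, 2.0, 3.0], [2.0, 1.0, 5.0], allowance=0.5)
print(child)
```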

  18. Use of the LUS in sequence allele designations to facilitate probabilistic genotyping of NGS-based STR typing results.

    Science.gov (United States)

    Just, Rebecca S; Irwin, Jodi A

    2018-05-01

    Some of the expected advantages of next generation sequencing (NGS) for short tandem repeat (STR) typing include enhanced mixture detection and genotype resolution via sequence variation among non-homologous alleles of the same length. However, at the same time that NGS methods for forensic DNA typing have advanced in recent years, many caseworking laboratories have implemented or are transitioning to probabilistic genotyping to assist the interpretation of complex autosomal STR typing results. Current probabilistic software programs are designed for length-based data, and were not intended to accommodate sequence strings as the product input. Yet to leverage the benefits of NGS for enhanced genotyping and mixture deconvolution, the sequence variation among same-length products must be utilized in some form. Here, we propose use of the longest uninterrupted stretch (LUS) in allele designations as a simple method to represent sequence variation within the STR repeat regions and facilitate, in the near term, probabilistic interpretation of NGS-based typing results. An examination of published population data indicated that a reference LUS region is straightforward to define for most autosomal STR loci, and that using repeat unit plus LUS length as the allele designator can represent greater than 80% of the alleles detected by sequencing. A proof of concept study performed using a freely available probabilistic software demonstrated that the LUS length can be used in allele designations when a program does not require alleles to be integers, and that utilizing sequence information improves interpretation of both single-source and mixed contributor STR typing results as compared to using repeat unit information alone. The LUS concept for allele designation maintains the repeat-based allele nomenclature that will permit backward compatibility to extant STR databases, and the LUS lengths themselves will be concordant regardless of the NGS assay or analysis tools.
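
    A minimal sketch of computing the longest uninterrupted stretch (LUS) of a repeat motif within an STR sequence and forming a repeat-plus-LUS allele designation in the spirit of the proposal above. The sequence, motif, and designation format are illustrative assumptions.

```python
def lus(sequence: str, motif: str) -> int:
    """Length (in repeat units) of the longest uninterrupted run of motif."""
    best = run = 0
    i = 0
    while i <= len(sequence) - len(motif):
        if sequence[i:i + len(motif)] == motif:
            run += 1
            best = max(best, run)
            i += len(motif)     # continue the run motif-by-motif
        else:
            run = 0
            i += 1              # run broken; slide one base
    return best

seq = "TCTA" * 5 + "TCTG" + "TCTA" * 7   # 13 repeat units, LUS = 7
repeat_units = 13                         # length-based allele call
print(f"allele designation: {repeat_units}_{lus(seq, 'TCTA')}")  # -> 13_7
```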

  19. On Probabilistic Alpha-Fuzzy Fixed Points and Related Convergence Results in Probabilistic Metric and Menger Spaces under Some Pompeiu-Hausdorff-Like Probabilistic Contractive Conditions

    OpenAIRE

    De la Sen, M.

    2015-01-01

    In the framework of complete probabilistic metric spaces and, in particular, in probabilistic Menger spaces, this paper investigates some relevant properties of convergence of sequences to probabilistic α-fuzzy fixed points under some types of probabilistic contractive conditions.

  20. Dependencies, human interactions and uncertainties in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.

    1990-01-01

    In the context of Probabilistic Safety Assessment (PSA), three areas were investigated in a 4-year Nordic programme: dependencies, with special emphasis on common cause failures; human interactions; and uncertainty aspects. The approach was centered around comparative analyses in the form of benchmark/reference studies and retrospective reviews. Weak points in available PSAs were identified and recommendations were made aimed at improving the consistency of PSAs. The sensitivity of PSA results to basic assumptions was demonstrated, and the sensitivity to data assignment and to choices of methods for the analysis of selected topics was investigated. (author)

  1. GUI program to compute probabilistic seismic hazard analysis

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Chi, H. C.; Cho, J. C.; Park, J. H.; Kim, K. G.; Im, I. S.

    2006-12-01

    The development of a program to compute probabilistic seismic hazard based on a Graphic User Interface (GUI) has been completed. The main program consists of three parts: the data input processes, the probabilistic seismic hazard analysis, and the result output processes. The probabilistic seismic hazard analysis needs various input data representing attenuation formulae, a seismic zoning map, and an earthquake event catalog. The input procedures of previous programs, based on a text interface, take much time to prepare the data, and in existing methods the data cannot be checked directly on screen to prevent erroneous input. The new program simplifies the input process and enables the data to be checked graphically in order to minimize artificial errors as far as possible.

  2. Probabilistic Insurance

    NARCIS (Netherlands)

    Wakker, P.P.; Thaler, R.H.; Tversky, A.

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in premium to compensate for a 1% default risk. These observations cannot be

  3. Probabilistic Insurance

    NARCIS (Netherlands)

    P.P. Wakker (Peter); R.H. Thaler (Richard); A. Tversky (Amos)

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these

  4. Probabilistic Design of Offshore Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1988-01-01

    Probabilistic design of structural systems is considered in this paper. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the systems reliability satisfies a given requirement. Based on a sensitivity analysis, optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally an example of probabilistic design of an offshore structure is considered.

  5. Probabilistic Design of Offshore Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    Probabilistic design of structural systems is considered in this paper. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the systems reliability satisfies a given requirement. Based on a sensitivity analysis, optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally an example of probabilistic design of an offshore structure is considered.

  6. Reliable allele detection using SNP-based PCR primers containing Locked Nucleic Acid: application in genetic mapping

    Directory of Open Access Journals (Sweden)

    Trognitz Friederike

    2007-02-01

    Full Text Available Abstract Background The diploid Solanum caripense, a wild relative of potato and tomato, possesses valuable resistance to potato late blight, and we are interested in the genetic basis of this resistance. Due to extremely low levels of genetic variation within the S. caripense genome, it proved impossible to generate a dense genetic map and to assign individual Solanum chromosomes through the use of conventional chromosome-specific SSR, RFLP, and AFLP markers, as well as gene- or locus-specific markers. The ease of detection of DNA polymorphisms depends on both the frequency and the form of sequence variation. The narrow genetic background of close relatives and inbreds complicates the detection of persisting, reduced polymorphism and is a challenge to the development of reliable molecular markers. Nonetheless, monomorphic DNA fragments, representing conventional markers that are not directly usable, can contain considerable variation at the level of single nucleotide polymorphisms (SNPs). This can be used for the design of allele-specific molecular markers. The reproducible detection of allele-specific markers based on SNPs has been a technical challenge. Results We present a fast and cost-effective protocol for the detection of allele-specific SNPs by applying Sequence Polymorphism-Derived (SPD) markers. These markers proved highly efficient for fingerprinting of individuals possessing a homogeneous genetic background. SPD markers are obtained from within non-informative, conventional molecular marker fragments that are screened for SNPs to design allele-specific PCR primers. The method makes use of primers containing a single, 3'-terminal Locked Nucleic Acid (LNA) base. We demonstrate the applicability of the technique by the successful genetic mapping of allele-specific SNP markers derived from monomorphic Conserved Ortholog Set II (COSII) markers mapped to Solanum chromosomes, in S. caripense. By using SPD markers it was possible for the first time to map the S. caripense alleles

  7. An improved, bias-reduced probabilistic functional gene network of baker's yeast, Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Insuk Lee

    2007-10-01

    Full Text Available Probabilistic functional gene networks are powerful theoretical frameworks for integrating heterogeneous functional genomics and proteomics data into objective models of cellular systems. Such networks provide syntheses of millions of discrete experimental observations, spanning DNA microarray experiments, physical protein interactions, genetic interactions, and comparative genomics; the resulting networks can then be easily applied to generate testable hypotheses regarding specific gene functions and associations. We report a significantly improved version (v. 2) of a probabilistic functional gene network of the baker's yeast, Saccharomyces cerevisiae. We describe our optimization methods and illustrate their effects in three major areas: the reduction of functional bias in network training reference sets, the application of a probabilistic model for calculating confidences in pair-wise protein physical or genetic interactions, and the introduction of simple thresholds that eliminate many false positive mRNA co-expression relationships. Using the network, we predict and experimentally verify the function of the yeast RNA binding protein Puf6 in 60S ribosomal subunit biogenesis. YeastNet v. 2, constructed using these optimizations together with additional data, shows significant reduction in bias and improvements in precision and recall, in total covering 102,803 linkages among 5,483 yeast proteins (95% of the validated proteome). YeastNet is available from http://www.yeastnet.org.

  8. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. Assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  9. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. Assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  10. Probabilistic Modeling of Timber Structures

    DEFF Research Database (Denmark)

    Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2005-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present proposal is based on discussions and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for components and connections. The recommended...

  11. Comparative study of probabilistic methodologies for small signal stability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rueda, J.L.; Colome, D.G. [Universidad Nacional de San Juan (IEE-UNSJ), San Juan (Argentina). Inst. de Energia Electrica], Emails: joseluisrt@iee.unsj.edu.ar, colome@iee.unsj.edu.ar

    2009-07-01

    Traditional deterministic approaches for small signal stability assessment (SSSA) are unable to properly reflect the existing uncertainties in real power systems. Hence, the probabilistic analysis of small signal stability (SSS) is attracting more attention from power system engineers. This paper discusses and compares two probabilistic methodologies for SSSA, based on the two-point estimation method and the so-called Monte Carlo method, respectively. The comparisons are based on results obtained for several power systems of different sizes and with different SSS performance. It is demonstrated that although an analytical approach can reduce the amount of computation of probabilistic SSSA, the different degrees of approximation that are adopted lead to deceptive results. Conversely, Monte Carlo based probabilistic SSSA can be carried out with reasonable computational effort while maintaining satisfactory estimation precision. (author)
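
    The sketch below contrasts the two methodologies on a toy response function standing in for an eigenvalue-based stability index. It uses one classical variant of the point-estimate family (Rosenblueth's 2^n scheme for symmetric, uncorrelated inputs) against a plain Monte Carlo reference; the function and input statistics are assumptions for illustration only.

```python
import itertools
import numpy as np

g = lambda x1, x2: x1**2 + 0.5 * np.sin(x2) + x1 * x2   # toy response
mu, sigma = np.array([1.0, 0.5]), np.array([0.1, 0.2])  # assumed input stats

# Point-estimate method: evaluate at all mean +/- sigma corners, equal weights
pts = [g(*(mu + np.array(s) * sigma))
       for s in itertools.product((-1, 1), repeat=2)]
m_pem = np.mean(pts)
s_pem = np.sqrt(np.mean(np.square(pts)) - m_pem**2)

# Monte Carlo reference
rng = np.random.default_rng(7)
x = rng.normal(mu, sigma, size=(200_000, 2))
y = g(x[:, 0], x[:, 1])

print(f"PEM: mean={m_pem:.4f} std={s_pem:.4f}")
print(f"MC:  mean={y.mean():.4f} std={y.std():.4f}")
```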

  12. Overview of Future of Probabilistic Methods and RMSL Technology and the Probabilistic Methods Education Initiative for the US Army at the SAE G-11 Meeting

    Science.gov (United States)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting, sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown, will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor to provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national and international standing, the mission of the G-11's Probabilistic Methods Committee is to enable and facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries through better, faster, greener, smarter, affordable and reliable product development.

  13. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises because, when experts assign probability values to an HMM, they use only some limited inputs; the assigned probability values might not be accurate enough to serve in other cases related to the same domain. We introduce an approach based on GAs to find out suitable probability values for the HMM so that it is mostly correct in more cases than those used to assign the probability values.

  14. Midwifery students' evaluation of team-based academic assignments involving peer-marking.

    Science.gov (United States)

    Parratt, Jenny A; Fahy, Kathleen M; Hastie, Carolyn R

    2014-03-01

    Midwives should be skilled team workers in maternity units and in group practices. Poor teamwork skills are a significant cause of adverse maternity care outcomes. Despite Australian and international regulatory requirements that all midwifery graduates be competent in teamwork, the systematic teaching and assessment of teamwork skills is lacking in higher education. How do midwifery students evaluate participation in team-based academic assignments which include giving and receiving peer feedback? Participants were first- and third-year Bachelor of Midwifery students who volunteered (24 of 56 students). The design was Participatory Action Research with data collection via anonymous online surveys. There was general agreement that team-based assignments: (i) should have peer-marking, (ii) help clarify what is meant by teamwork, (iii) develop communication skills, and (iv) promote student-to-student learning. Third-year students strongly agreed that teams: (i) are valuable preparation for teamwork in practice, (ii) help meet Australian midwifery competency 8, and (iii) were enjoyable. The majority of third-year students agreed with statements that their teams were effectively coordinated and that team members shared responsibility for work equally; first-year students strongly disagreed with these statements. Students' qualitative comments substantiated and expanded on these findings. The majority of students valued teacher feedback on well-developed drafts of the team's assignment prior to marking. Based on these findings we changed practice and created more clearly structured team-based assignments with specific marking criteria. We are developing supporting lessons to teach specific teamwork skills; together these resources are called "TeamUP". TeamUP should be implemented in all pre-registration midwifery courses to foster students' teamwork skills and readiness for practice. Copyright © 2013 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  15. Rationalization of some genetic anticodonic assignments

    Science.gov (United States)

    Lacey, J. C., Jr.; Hall, L. M.; Mullins, D. W., Jr.

    1985-01-01

    The hydrophobicity of most amino acids correlates well with that of their anticodon nucleotides, with Trp, Tyr, Ile, and Ser being the exceptions to this rule. Using previous data on hydrophobicity and binding constants, and new data on the rates of esterification of polyadenylic acid with several N-acetylaminoacyl imidazolides, several of the anticodon assignments are rationalized. Chemical evidence is presented supporting the idea that Ile was included in the catalog of biological amino acids late in evolution, through mutation of an existing tRNA and its aminoacyl-tRNA synthetase. It was also found that the addition of hexane increases the incorporation of hydrophobic Ac-Phe into poly-A, in support of the emphasis of Fox (1965) and Oparin (1965) on the biogenetic importance of phase-separated systems.

  16. Probabilistic atlas based labeling of the cerebral vessel tree

    Science.gov (United States)

    Van de Giessen, Martijn; Janssen, Jasper P.; Brouwer, Patrick A.; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2015-03-01

    Preoperative imaging of the cerebral vessel tree is essential for planning therapy for intracranial stenoses and aneurysms. Usually, a magnetic resonance angiography (MRA) or computed tomography angiography (CTA) is acquired, from which the cerebral vessel tree is segmented. Accurate analysis is helped by labeling of the cerebral vessels, but labeling is non-trivial due to anatomical and topological variability and to branches missing because of acquisition issues. In recent literature, labeling the cerebral vasculature around the Circle of Willis has mainly been approached as a graph-based problem. The most successful method, however, requires the definition of all possible permutations of missing vessels, which limits application to subsets of the tree and ignores spatial information about vessel locations. This research aims to perform labeling using probabilistic atlases that model spatial vessel and label likelihoods. A cerebral vessel tree is aligned to a probabilistic atlas, and each vessel is then labeled by computing the maximum label likelihood per segment from label-specific atlases. The proposed method was validated on 25 segmented cerebral vessel trees. Labeling accuracies were close to 100% for large vessels, but dropped to 50-60% for small vessels that were present in less than 50% of the set. With this work we showed that, using solely spatial information about the vessel labels, vessel segments from stable vessels (>50% presence) were reliably classified. This spatial information will form the basis for a future labeling strategy with a very loose topological model.
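
    A minimal sketch of the atlas-based labeling rule described above: after alignment, each vessel segment receives the label whose probabilistic atlas gives the highest mean likelihood along the segment's points. The atlases here are random 3-D grids standing in for real label-specific likelihood maps, and the label names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (32, 32, 32)
atlases = {name: rng.random(shape) for name in ("ICA", "MCA", "ACA")}

def label_segment(points, atlases):
    """points: (N, 3) integer voxel coordinates of one aligned vessel segment."""
    idx = tuple(np.asarray(points).T)   # fancy-index the 3-D likelihood grids
    scores = {name: atlas[idx].mean() for name, atlas in atlases.items()}
    return max(scores, key=scores.get)  # maximum mean label likelihood

segment = rng.integers(0, 32, size=(50, 3))
print(label_segment(segment, atlases))
```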

  17. Study on store-space assignment based on logistic AGV in e-commerce goods to person picking pattern

    Science.gov (United States)

    Xu, Lijuan; Zhu, Jie

    2017-10-01

    This paper studies store-space assignment for logistics AGVs in the e-commerce goods-to-person picking pattern. A store-space assignment model based on the lowest picking cost is established, and a store-space assignment algorithm is designed following a cluster analysis based on similarity coefficients. An example analysis then compares the picking cost of the proposed algorithm with allocation by item number and with storage by ABC classification, verifying the effectiveness of the designed store-space assignment algorithm.

  18. Integration of Probabilistic Exposure Assessment and Probabilistic Hazard Characterization

    NARCIS (Netherlands)

    Voet, van der H.; Slob, W.

    2007-01-01

    A method is proposed for integrated probabilistic risk assessment where exposure assessment and hazard characterization are both included in a probabilistic way. The aim is to specify the probability that a random individual from a defined (sub)population will have an exposure high enough to cause a

  19. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  20. A Markov Chain Approach to Probabilistic Swarm Guidance

    Science.gov (United States)

    Acikmese, Behcet; Bayard, David S.

    2012-01-01

    This paper introduces a probabilistic guidance approach for the coordination of swarms of autonomous agents. The main idea is to drive the swarm to a prescribed density distribution in a prescribed region of the configuration space. In its simplest form, the probabilistic approach is completely decentralized and does not require communication or collaboration between agents. Agents make statistically independent probabilistic decisions based solely on their own state, which ultimately guide the swarm to the desired density distribution in the configuration space. In addition to being completely decentralized, the probabilistic guidance approach has a novel autonomous self-repair property: once the desired swarm density distribution is attained, the agents automatically repair any damage to the distribution without collaborating and without any knowledge about the damage.
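
    A hedged sketch of the core idea: build a row-stochastic transition matrix whose stationary distribution equals the desired swarm density, then let each agent transition independently using only its current bin. The Metropolis-Hastings construction below is one standard way to synthesize such a matrix; the paper's own synthesis may differ, and the bin counts and densities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.1, 0.2, 0.3, 0.4])         # desired density over 4 bins
n_bins = len(pi)

# Symmetric uniform proposal with Metropolis acceptance -> stationary dist pi
P = np.zeros((n_bins, n_bins))
for i in range(n_bins):
    for j in range(n_bins):
        if i != j:
            P[i, j] = (1.0 / n_bins) * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()              # remaining mass: stay put

agents = rng.integers(0, n_bins, size=2000) # initial bins, arbitrary
for _ in range(100):                        # independent probabilistic moves
    agents = np.array([rng.choice(n_bins, p=P[s]) for s in agents])

print(np.bincount(agents, minlength=n_bins) / agents.size)  # approx pi
```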

  1. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    differential evolution DE algorithm with adaptive crossover operator, .... x are assigned by using a sequential scheme which accounts for mean and ... the representative scenarios from probabilistic model based Monte Carlo ... Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  2. Transitive probabilistic CLIR models.

    NARCIS (Netherlands)

    Kraaij, W.; de Jong, Franciska M.G.

    2004-01-01

    Transitive translation could be a useful technique to enlarge the number of supported language pairs for a cross-language information retrieval (CLIR) system in a cost-effective manner. The paper describes several setups for transitive translation based on probabilistic translation models. The

  3. Rule-based versus probabilistic selection for active surveillance using three definitions of insignificant prostate cancer

    NARCIS (Netherlands)

    L.D.F. Venderbos (Lionne); M.J. Roobol-Bouts (Monique); C.H. Bangma (Chris); R.C.N. van den Bergh (Roderick); L.P. Bokhorst (Leonard); D. Nieboer (Daan); Godtman, R; J. Hugosson (Jonas); van der Kwast, T; E.W. Steyerberg (Ewout)

    2016-01-01

    To study whether probabilistic selection by the use of a nomogram could improve patient selection for active surveillance (AS) compared to the various sets of rule-based AS inclusion criteria currently used. We studied Dutch and Swedish patients participating in the European Randomized

  4. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2017-01-01

    Full Text Available The online operation management and offline policy evaluation of complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on current traffic conditions. A combined route selection model integrating pre-trip route selection and en-trip route switching is established for achieving the dynamic network flow equilibrium status. Train agents operate strictly according to the timetable, and their capacity limitations are considered. A continuous time-driven simulator based on the proposed framework and models was developed, and its performance is illustrated through the large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyzing the dynamic demand-supply relationship in large schedule-based transit networks.

  5. Probabilistic Logical Characterization

    DEFF Research Database (Denmark)

    Hermanns, Holger; Parma, Augusto; Segala, Roberto

    2011-01-01

    Probabilistic automata exhibit both probabilistic and non-deterministic choice. They are therefore a powerful semantic foundation for modeling concurrent systems with random phenomena arising in many applications ranging from artificial intelligence, security, and systems biology to performance modeling. Several variations of bisimulation and simulation relations have proved to be useful as means to abstract and compare different automata. This paper develops a taxonomy of logical characterizations of these relations on image-finite and image-infinite probabilistic automata.

  6. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  7. Application of probabilistically weighted graphs to image-based diagnosis of Alzheimer's disease using diffusion MRI

    Science.gov (United States)

    Maryam, Syeda; McCrackin, Laura; Crowley, Mark; Rathi, Yogesh; Michailovich, Oleg

    2017-03-01

    The world's aging population has given rise to an increasing awareness of neurodegenerative disorders, including Alzheimer's disease (AD). Treatment options for AD are currently limited, but it is believed that future success depends on our ability to detect the onset of the disease in its early stages. The most frequently used tools for this include neuropsychological assessments, along with genetic, proteomic, and image-based diagnosis. Recently, the applicability of diffusion magnetic resonance imaging (dMRI) analysis for early diagnosis of AD has also been reported. The sensitivity of dMRI to the microstructural organization of cerebral tissue makes it particularly well suited to detecting changes which are known to occur in the early stages of AD. Existing dMRI approaches can be divided into two broad categories: region-based and tract-based. In this work, we propose a new approach, which extends region-based approaches to the simultaneous characterization of multiple brain regions. Given a predefined set of features derived from dMRI data, we compute the probabilistic distances between different brain regions and treat the resulting connectivity pattern as an undirected, fully connected graph. The characteristics of this graph are then used as markers to discriminate between AD subjects and normal controls (NC). Although in this preliminary work we omit subjects in the prodromal stage of AD, mild cognitive impairment (MCI), our method demonstrates perfect separability between the AD and NC subject groups with a substantial margin, and thus holds promise for fine-grained stratification of NC, MCI and AD populations.

  8. Probabilistic record linkage.

    Science.gov (United States)

    Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona

    2016-06-01

    Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a 'black box' research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting match weights and how to convert match weights into posterior probabilities of a match using Bayes' theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. © The Author 2015; Published by Oxford University Press on behalf of the International Epidemiological Association.
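
    A worked sketch of the quantities described above, under the classic Fellegi-Sunter framing: each field contributes log2(m/u) on agreement and log2((1-m)/(1-u)) on disagreement, and Bayes' theorem converts the total weight into a posterior match probability. The m/u values and the prior are illustrative assumptions.

```python
import math

fields = {
    #             m = P(agree | match), u = P(agree | non-match)
    "surname":   (0.95, 0.01),
    "birthyear": (0.99, 0.05),
    "postcode":  (0.90, 0.02),
}
agreement = {"surname": True, "birthyear": True, "postcode": False}

total = 0.0
for f, (m, u) in fields.items():
    total += math.log2(m / u) if agreement[f] else math.log2((1 - m) / (1 - u))

prior = 1e-4                              # assumed P(match) for a random pair
odds = (prior / (1 - prior)) * 2**total   # posterior odds via Bayes' theorem
print(f"match weight = {total:.2f}, P(match | data) = {odds / (1 + odds):.4f}")
```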

  9. The Role of Language in Building Probabilistic Thinking

    Science.gov (United States)

    Nacarato, Adair Mendes; Grando, Regina Célia

    2014-01-01

    This paper is based on research that investigated the development of probabilistic language and thinking by students 10-12 years old. The focus was on the adequate use of probabilistic terms in social practice. A series of tasks was developed for the investigation and completed by the students working in groups. The discussions were video recorded…

  10. Bounding probabilistic safety assessment probabilities by reality

    International Nuclear Information System (INIS)

    Fragola, J.R.; Shooman, M.L.

    1991-01-01

    The investigation of failure in systems where failure is a rare event makes continual comparison between the derived probabilities and empirical evidence difficult. The comparison of the predictions of rare-event risk assessments with historical reality is essential to prevent probabilistic safety assessment (PSA) predictions from drifting into fantasy. One approach to performing such comparisons is to search out and assign probabilities to natural events which, while extremely rare, have a basis in the history of natural phenomena or human activities. For example, the Segovian aqueduct and some of the Roman fortresses in Spain have existed for several millennia and in many cases show no physical signs of earthquake damage. This evidence could be used to bound the probability of earthquakes above a certain magnitude to less than 10^-3 per year. On the other hand, there is evidence that some repetitive actions can be performed with extremely low historical probabilities when operators are properly trained and motivated, and sufficient warning indicators are provided. The point is not that low probability estimates are impossible, but that analysis assumptions require continual reassessment and that analysis predictions should be bounded by historical reality. This paper reviews the probabilistic predictions of PSA in this light, attempts to develop, in a general way, the limits which can be historically established and the consequent bounds that these limits place upon the predictions, and illustrates the methodology used in computing such limits. Further, the paper discusses the use of empirical evidence and the requirement for disciplined systematic approaches within the bounds of reality and the associated impact on PSA probabilistic estimates.

  11. GUI program to compute probabilistic seismic hazard analysis

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Chi, H. C.; Cho, J. C.; Park, J. H.; Kim, K. G.; Im, I. S.

    2005-12-01

    The first stage of the development of a program to compute probabilistic seismic hazard based on a Graphic User Interface (GUI) has been completed. The main program consists of three parts: the data input processes, the probabilistic seismic hazard analysis, and the result output processes. The first part has been developed, and the others are now being developed in this term. The probabilistic seismic hazard analysis needs various input data representing attenuation formulae, a seismic zoning map, and an earthquake event catalog. The input procedures of previous programs, based on a text interface, take much time to prepare the data, and in existing methods the data cannot be checked directly on screen to prevent erroneous input. The new program simplifies the input process and enables the data to be checked graphically in order to minimize artificial errors as far as possible.

  12. High‐resolution stock discrimination of Atlantic herring (Clupea harengus) based on otolith shape, microstructure, and genetic markers

    DEFF Research Database (Denmark)

    Mosegaard, Henrik; Worsøe Clausen, Lotte; Bekkevold, Dorte

    2012-01-01

    One of the most rapidly developing applications of otolith research is shape analysis, often used for population discrimination as well as for species identification. Otolith shape is influenced by the environment through physiology, but also shows consistent and temporally stable differences between populations, which suggest genetic control as well. Thus otolith shape serves as a population marker, suitable for individual assignment. Here we use otolith morphological characteristics (otolith shape and larval otolith microstructure) combined with genetic markers to discriminate between ... otolith shape characteristics as separation parameters. Otolith shape was found to clearly discriminate between individuals at all ages from different spawning populations. The identified distances between populations based on otolith shape matched previously obtained genetic distances and were, when ...

  13. Do probabilistic forecasts lead to better decisions?

    Directory of Open Access Journals (Sweden)

    M. H. Ramos

    2013-06-01

    Full Text Available The last decade has seen growing research in producing probabilistic hydro-meteorological forecasts and increasing their reliability. This followed the promise that, supplied with information about uncertainty, people would take better risk-based decisions. In recent years, therefore, research and operational developments have also started focusing attention on ways of communicating probabilistic forecasts to decision-makers. Communicating probabilistic forecasts includes preparing tools and products for visualisation, but also requires understanding how decision-makers perceive and use uncertainty information in real time. At the EGU General Assembly 2012, we conducted a laboratory-style experiment in which several cases of flood forecasts and a choice of actions to take were presented, as part of a game, to participants who acted as decision-makers. Answers were collected and analysed. In this paper, we present the results of this exercise and discuss whether we indeed make better decisions on the basis of probabilistic forecasts.

  14. Genetic algorithm approach to thin film optical parameters determination

    International Nuclear Information System (INIS)

    Jurecka, S.; Jureckova, M.; Muellerova, J.

    2003-01-01

    Optical parameters of thin films are important for several optical and optoelectronic applications. In this work a genetic algorithm is proposed to determine the values of the optical parameters of a thin film. The experimental reflectance is modelled by the Forouhi-Bloomer dispersion relations. The refractive index, the extinction coefficient and the film thickness are the unknown parameters in this model. Genetic algorithms use probabilistic examination of promising areas of the parameter space. The algorithm creates a population of solutions based on the reflectance model and then operates on the population to evolve the best solution by using selection, crossover and mutation operators on the population individuals. The implementation of the genetic algorithm method and the experimental results are also described. (Authors)
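
    A compact GA sketch in the spirit of the approach above: candidate parameter vectors evolve to minimize the misfit between measured and modeled reflectance. For brevity, the model here is a simple normal-incidence reflectance of a free-standing, non-dispersive film of index n, a placeholder assumption standing in for the full Forouhi-Bloomer dispersion model used in the paper; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
wl = np.linspace(400e-9, 800e-9, 81)                  # wavelengths [m]

def model(params, wl):
    n, d = params                                     # index, thickness
    r = (1 - n) / (1 + n)                             # interface amplitude
    phase = 4 * np.pi * n * d / wl                    # round-trip phase
    return 2 * r**2 * (1 - np.cos(phase)) / (1 + r**4 - 2 * r**2 * np.cos(phase))

true = np.array([1.46, 550e-9])                       # synthetic "measurement"
data = model(true, wl) + rng.normal(0, 1e-3, wl.size)

lo, hi = np.array([1.2, 100e-9]), np.array([2.5, 1000e-9])
pop = rng.uniform(lo, hi, size=(60, 2))
fitness = lambda p: -np.mean((model(p, wl) - data) ** 2)

for _ in range(150):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-30:]           # truncation selection
    pairs = rng.integers(0, 30, size=(60, 2))
    alpha = rng.random((60, 1))                       # blend crossover
    pop = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
    pop += rng.normal(0, 0.01, pop.shape) * (hi - lo) # mutation
    pop = np.clip(pop, lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"n = {best[0]:.3f}, d = {best[1] * 1e9:.0f} nm")
```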

  15. Documentation design for probabilistic risk assessment

    International Nuclear Information System (INIS)

    Parkinson, W.J.; von Herrmann, J.L.

    1985-01-01

    This paper describes a framework for the documentation design of probabilistic risk assessment (PRA) and is based on the EPRI document NP-3470, ''Documentation Design for Probabilistic Risk Assessment''. The goals for PRA documentation are stated. Four audiences are identified which PRA documentation must satisfy, and the documentation consistent with the needs of these audiences is discussed, i.e., the Summary Report, the Executive Summary, the Main Report, and the Appendices. The authors recommend the documentation specifications discussed herein as guides rather than rigid definitions.

  16. Modifications of Probabilistic Safety Assessment-1 Nuclear Power Plant Dukovany based upon new version of Emergency Operating Procedures

    International Nuclear Information System (INIS)

    Aldorf, R.

    1997-01-01

    Older results of Probabilistic Safety Assessment-1 (PSA-1) for Nuclear Power Plant Dukovany revealed that human behaviour during accident progression scenarios represents one of the most important aspects of plant safety. The current effort of the Dukovany (Czech Republic) and Bohunice (Slovak Republic) nuclear power plants is focused on the development of qualitatively new, symptom-based Emergency Operating Procedures called Emergency Response Guidelines. The supplier, Westinghouse Energy Systems Europe, Brussels, works in cooperation with teams of specialists from both nuclear power plants. In the frame of the 'Living PSA-1 Nuclear Power Plant Dukovany Project' being performed by the Nuclear Research Institute Rez during 1997, it is planned to demonstrate on a PSA-1 basis the expected positive impact of the Emergency Response Guidelines on plant safety, as one particular event from the list of other modifications. Since this contract is currently still in progress, it is possible to release only preliminary conclusions and observations. Compared to the original Emergency Operating Procedures, the Emergency Response Guidelines substantially reduce the uncertainty of general human behaviour during the plant response to an accident. It is possible to conclude that, from the point of view of the current scope of PSA Dukovany (up to core damage), the Emergency Response Guidelines represent an adequately wide basis for mitigating any initiating event.

  17. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    Science.gov (United States)

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user-contributed images with labels has provided opportunities to develop automatic tools to tag images, to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user-provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user-provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) to tackle this data sparsity issue. Also, an L1-norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
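
    A minimal sketch of the NMF-based collaborative filtering idea above: a sparse tag-image association matrix is factorized as V ≈ WH with nonnegative factors, so the dense reconstruction WH supplies smoothed association scores where raw co-occurrences are missing. The updates are the standard Lee-Seung multiplicative rules; matrix sizes and sparsity are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(11)
V = (rng.random((200, 300)) < 0.02).astype(float)  # sparse tag x image matrix
k, eps = 10, 1e-9                                  # latent rank, stabilizer

W = rng.random((V.shape[0], k))
H = rng.random((k, V.shape[1]))
for _ in range(100):                               # multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

scores = W @ H                                     # dense smoothed associations
print("reconstruction error:", np.linalg.norm(V - scores))
```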

  18. Invariant and semi-invariant probabilistic normed spaces

    Energy Technology Data Exchange (ETDEWEB)

    Ghaemi, M.B. [School of Mathematics Iran, University of Science and Technology, Narmak, Tehran (Iran, Islamic Republic of)], E-mail: mghaemi@iust.ac.ir; Lafuerza-Guillen, B. [Departamento de Estadistica y Matematica Aplicada, Universidad de Almeria, Almeria E-04120 (Spain)], E-mail: blafuerz@ual.es; Saiedinezhad, S. [School of Mathematics Iran, University of Science and Technology, Narmak, Tehran (Iran, Islamic Republic of)], E-mail: ssaiedinezhad@yahoo.com

    2009-10-15

    Probabilistic metric spaces were introduced by Karl Menger. Alsina, Schweizer and Sklar gave a general definition of probabilistic normed space based on the definition of Menger. We introduce the concept of semi-invariance among the PN spaces. In this paper we find a sufficient condition for some PN spaces to be semi-invariant. We show that PN spaces are normal spaces, and Urysohn's lemma and the Tietze extension theorem are proved for them.

  19. A probabilistic design method for LMFBR fuel rods

    International Nuclear Information System (INIS)

    Peck, S.O.; Lovejoy, W.S.

    1977-01-01

    Fuel rod performance analyses for design purposes are dependent upon material properties, dimensions, and loads that are statistical in nature. Conventional design practice accounts for the uncertainties in relevant parameters by designing to a 'safety factor', set so as to assure safe operation. Arbitrary assignment of these safety factors, based upon a number of 'worst case' assumptions, may result in costly over-design. Probabilistic design methods provide a systematic way to reflect the uncertainties in design parameters. PECS-III is a computer code which employs Monte Carlo techniques to generate the probability density and distribution functions for time-to-failure and cumulative damage for sealed-plenum LMFBR fuel rods on a single-rod or whole-core basis. In Monte Carlo analyses, a deterministic model (one that maps single-valued inputs into single-valued outputs) is coupled to a statistical 'driver'. Uncertainties in the input are reflected by assigning probability densities to the input parameters. Dependent input variables are considered multivariate normal. Independent input variables may be arbitrarily distributed. Sample values are drawn from these input densities, and a complete analysis is done by the deterministic model to generate a sample point in the output distribution. This process is repeated many times, and the number of times each output value occurs is accumulated. The probability that some measure of rod performance will fall within given limits is estimated by the relative frequency with which the Monte Carlo samples fall within those limits.
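
    A hedged sketch of the Monte Carlo loop just described: a deterministic model maps sampled inputs to a time-to-failure, and repeated sampling builds up its empirical distribution. The damage model, the choice of inputs, and all statistics below are stand-ins for illustration, not PECS-III's actual physics.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000

# Dependent inputs as multivariate normal (e.g. clad strength and thickness),
# plus one independent, arbitrarily distributed input (normalized rod power).
mean = [400.0, 0.5]                                  # strength [MPa], thickness [mm]
cov = [[25.0, 0.03], [0.03, 1e-4]]                   # assumed covariance
strength, thickness = rng.multivariate_normal(mean, cov, n).T
power = rng.uniform(0.9, 1.1, n)

# Toy deterministic model: failure comes sooner at high power, later with margin
ttf = 1000.0 * (strength * thickness) / (200.0 * power)  # time to failure [days]

for q in (0.01, 0.5, 0.99):
    print(f"{q:4.0%} quantile of time-to-failure: {np.quantile(ttf, q):.0f} days")
```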

  20. Disjunctive Probabilistic Modal Logic is Enough for Bisimilarity on Reactive Probabilistic Systems

    OpenAIRE

    Bernardo, Marco; Miculan, Marino

    2016-01-01

    Larsen and Skou characterized probabilistic bisimilarity over reactive probabilistic systems with a logic including true, negation, conjunction, and a diamond modality decorated with a probabilistic lower bound. Later on, Desharnais, Edalat, and Panangaden showed that negation is not necessary to characterize the same equivalence. In this paper, we prove that the logical characterization holds also when conjunction is replaced by disjunction, with negation still being not necessary. To this e...

  1. A Probabilistic Model of Social Working Memory for Information Retrieval in Social Interactions.

    Science.gov (United States)

    Li, Liyuan; Xu, Qianli; Gan, Tian; Tan, Cheston; Lim, Joo-Hwee

    2018-05-01

    Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR, based on social information concepts, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation, performing the personal adaptation to the consensus based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm based on Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.

  2. Probabilistic Structural Analysis Program

    Science.gov (United States)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available, such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  3. Decision making by hybrid probabilistic-possibilistic utility theory

    Directory of Open Access Journals (Sweden)

    Pap Endre

    2009-01-01

    Full Text Available An approach to decision theory based upon non-probabilistic uncertainty is presented. There is an axiomatization of the hybrid probabilistic-possibilistic mixtures based on a pair of triangular conorm and triangular norm satisfying the restricted distributivity law, and the corresponding non-additive S-measure. This is characterized by the families of operations involved in generalized mixtures, based upon a previous result on the characterization of the pair of continuous t-norm and t-conorm such that the former is restrictedly distributive over the latter. The obtained family of mixtures combines probabilistic and idempotent (possibilistic) mixtures via a threshold.

  4. Probabilistic Anomaly Detection Based On System Calls Analysis

    Directory of Open Access Journals (Sweden)

    Przemysław Maciołek

    2007-01-01

    Full Text Available We present an application of a probabilistic approach to anomaly detection (PAD). By analyzing selected system calls (and their arguments), the chosen applications are monitored in the Linux environment. This allows us to estimate the (ab)normality of their behavior (by comparison to previously collected profiles). We've attached results of threat detection in a typical computer environment.

  5. Generalized probabilistic scale space for image restoration.

    Science.gov (United States)

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  6. A high throughput single nucleotide polymorphism multiplex assay for parentage assignment in New Zealand sheep.

    Directory of Open Access Journals (Sweden)

    Shannon M Clarke

    Full Text Available Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with the development of high throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore, the superior qualities of SNPs compared to microsatellite markers allow for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development: firstly, a method of extracting quality DNA from ear-punch tissue in a high throughput, cost-efficient manner, and secondly, a SNP assay able to assign paternity to progeny resulting from mob mating. A likelihood-based approach to infer paternity was used, in which the sire with the highest LOD score (log of the ratio of the likelihood given parentage to the likelihood given non-parentage) is assigned. An 84-SNP "parentage panel" was developed that assigned, on average, 99% of progeny to a sire in a problem with 3,000 progeny from 120 mob-mated sires that included numerous half-sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore, dam information (either recorded, or by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms, where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.
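
    As a rough illustration of the likelihood-based assignment described above, the sketch below computes a per-sire LOD score over biallelic SNPs. It assumes genotypes coded as 0/1/2 copies of one allele, Hardy-Weinberg allele frequencies for the unknown dam's contribution, and no genotyping error; the function names, genotype coding, and all numbers are illustrative and not the study's actual implementation.

```python
import numpy as np

def lod_score(offspring, sire, freq_a):
    """log10(L(candidate is sire) / L(random parentage)), summed over SNPs.
    Genotypes are 0/1/2 copies of allele A; the dam is unknown, so the
    maternal allele is drawn from the population allele frequency."""
    lod = 0.0
    for g_off, g_sire, p in zip(offspring, sire, freq_a):
        q = 1.0 - p
        t_a = g_sire / 2.0                      # P(sire transmits allele A)
        if g_off == 2:                          # offspring AA
            l_sire, l_rand = t_a * p, p * p
        elif g_off == 0:                        # offspring aa
            l_sire, l_rand = (1.0 - t_a) * q, q * q
        else:                                   # offspring Aa
            l_sire, l_rand = t_a * q + (1.0 - t_a) * p, 2.0 * p * q
        if l_sire == 0.0:                       # opposing homozygotes: excluded
            return -np.inf
        lod += np.log10(l_sire / l_rand)
    return lod

# Assign each progeny to the mob-mated candidate with the highest LOD
offspring = np.array([0, 1, 2, 1, 0])
freq_a = np.array([0.4, 0.5, 0.6, 0.3, 0.5])
candidates = {"sire1": np.array([0, 1, 2, 2, 0]),
              "sire2": np.array([2, 0, 0, 1, 2])}
best_sire = max(candidates, key=lambda s: lod_score(offspring, candidates[s], freq_a))
```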

  7. Probabilistic Damage Stability Calculations for Ships

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    1996-01-01

    The aim of these notes is to provide background material for the present probabilistic damage stability rules for dry cargo ships. The formulas for the damage statistics are derived, and shortcomings as well as possible improvements are discussed. The advantage of the definition of fictitious...... compartments in the formulation of a computer-based general procedure for probabilistic damage stability assessment is shown. Some comments are given on the current state of knowledge on ship survivability in damaged conditions. Finally, problems regarding proper account of water ingress through openings...

  8. Quantum logic networks for probabilistic teleportation

    Institute of Scientific and Technical Information of China (English)

    Liu Jinming; Zhang Yongsheng; et al.

    2003-01-01

    By means of primitive operations consisting of single-qubit gates, two-qubit controlled-not gates, von Neumann measurement, and classically controlled operations, we construct efficient quantum logic networks for implementing probabilistic teleportation of a single qubit, a two-particle entangled state, and an N-particle entanglement. Based on the quantum networks, we show that after the partially entangled states are concentrated into maximal entanglement, the above three kinds of probabilistic teleportation are the same as the standard teleportation using the corresponding maximally entangled states as the quantum channels.

  9. On the progress towards probabilistic basis for deterministic codes

    International Nuclear Information System (INIS)

    Ellyin, F.

    1975-01-01

    Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of uncertainty of design variables are incorporated. The format looks very much like present (deterministic) codes except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences

  10. A Probabilistic Analysis of the Sacco and Vanzetti Evidence

    CERN Document Server

    Kadane, Joseph B

    2011-01-01

    A Probabilistic Analysis of the Sacco and Vanzetti Evidence is a Bayesian analysis of the trial and post-trial evidence in the Sacco and Vanzetti case, based on subjectively determined probabilities and assumed relationships among evidential events. It applies the ideas of charting evidence and probabilistic assessment to this case, which is perhaps the ranking cause célèbre in all of American legal history. Modern computation methods applied to inference networks are used to show how the inferential force of evidence in a complicated case can be graded. The authors employ probabilistic assess

  11. Probabilistic Tsunami Hazard Analysis

    Science.gov (United States)

    Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.

    2006-12-01

    The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become a standard practice in the evaluation and mitigation of seismic hazard to populations, in particular with respect to structures, infrastructure, and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates the design of effective seismic-resistant buildings as well as the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunami. There are great advantages to implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on the traditional PSHA and is therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use full tsunami waveform computation in lieu of the attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip). This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes
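
    The Green's function summation described above is linear superposition, which a few lines of NumPy make concrete. This is a minimal sketch in which placeholder random data stands in for the precomputed unit-slip subfault waveforms; the array shapes and values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subfaults, n_samples = 40, 2048

# Unit-slip tsunami waveforms ("Green's functions") for each subfault at one
# coastal site; in practice these are precomputed with a full waveform solver.
greens = 0.01 * rng.standard_normal((n_subfaults, n_samples))
slip = rng.uniform(0.0, 5.0, n_subfaults)       # slip on each subfault (m)

# Synthesize the site waveform for this slip distribution by weighted summation
waveform = slip @ greens
peak_amplitude = np.abs(waveform).max()
```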

  12. Probabilistic cellular automata.

    Science.gov (United States)

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case, which connect the probability of a configuration in the stationary distribution to its number of zero-one borders, the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.
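
    A minimal simulation makes the Markov-chain view concrete. The sketch below runs a synchronous probabilistic automaton on a ring in which each cell turns to 1 with probability equal to the fraction of ones in its three-cell neighborhood; this rule is our illustrative choice, not the one analyzed in the article.

```python
import numpy as np

def step(config, rng):
    """One synchronous update of all cells: each cell becomes 1 with
    probability equal to the fraction of ones among {left, self, right}."""
    ones = config + np.roll(config, 1) + np.roll(config, -1)
    return (rng.random(config.size) < ones / 3.0).astype(int)

rng = np.random.default_rng(1)
config = rng.integers(0, 2, 200)
for _ in range(500):
    config = step(config, rng)

# Long-run statistics over many such runs approximate the stationary
# distribution of the induced finite homogeneous Markov chain; for this
# particular rule, all-zeros and all-ones are absorbing configurations.
```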

  13. Probabilistic evaluation method for axial capacity of single pile based on pile test information. Saika shiken kekka wo koryoshita kuienchoku shijiryoku no kakuritsuronteki hyokaho

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, K.; Suzuki, M. (Shimizu Construction Co. Ltd., Tokyo (Japan)); Nakatani, S. (Ministry of Construction Tokyo (Japan)); Matsui, K. (CTI Engineering Co. Ltd., Tokyo (Japan))

    1991-12-20

    To balance safety and economy in pile design, a reasonable evaluation of estimation accuracy, based on the accuracy of the pile capacity equation and a probabilistic evaluation method, is necessary. Data analysis based on the collection and summary of results from pile load tests is therefore a powerful approach. In this study, the selection of parameters that cannot be obtained from the probabilistic model and load tests, and the combination of statistical and experimental data by means of Bayesian probability theory, were examined. The features of this study are: use of a design pile capacity equation based on a pile capacity evaluation model; consideration of the intrinsic difference between statistical data and load test results by means of Bayesian probability theory; and quantitative examination of the applicability of the proposed method against load test results. 24 refs., 5 figs., 7 tabs.

  14. Duplicate Detection in Probabilistic Data

    NARCIS (Netherlands)

    Panse, Fabian; van Keulen, Maurice; de Keijzer, Ander; Ritter, Norbert

    2009-01-01

    Collected data often contains uncertainties. Probabilistic databases have been proposed to manage uncertain data. To combine data from multiple autonomous probabilistic databases, an integration of probabilistic data has to be performed. Until now, however, data integration approaches have focused

  15. Probabilistic integrity assessment of pressure tubes in an operating pressurized heavy water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Young-Jin; Park, Heung-Bae [KEPCO E and C, 188 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-870 (Korea, Republic of); Lee, Jung-Min; Kim, Young-Jin [School of Mechanical Engineering, Sungkyunkwan University, 300 Chunchun-dong, Jangan-gu, Suwon-si, Gyeonggi-do 440-746 (Korea, Republic of); Ko, Han-Ok [Korea Institute of Nuclear Safety, 34 Gwahak-ro, Yuseong-gu, Daejeon-si 305-338 (Korea, Republic of); Chang, Yoon-Suk, E-mail: yschang@khu.ac.kr [Department of Nuclear Engineering, Kyung Hee University, 1 Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701 (Korea, Republic of)

    2012-02-15

    Even though pressure tubes are major components of a pressurized heavy water reactor (PHWR), only small proportions of pressure tubes are sampled for inspection due to limited inspection time and costs. Since the inspection scope and integrity evaluation have in general been treated by using a deterministic approach, a set of conservative data was used instead of all known information related to in-service degradation mechanisms, because of inherent uncertainties in the examination. Recently, a probabilistic approach has been introduced so that pressure tube degradations identified in a sample of inspected pressure tubes are taken into account to address the balance of the uninspected ones in the reactor core. In the present paper, probabilistic integrity assessments of PHWR pressure tubes were carried out based on accumulated operating experience and enhanced technology. Parametric analyses were conducted on key variables that are periodically measured by the in-service inspection program, such as the deuterium uptake rate, the dimensional change rate of the pressure tube, and the flaw size distribution. Subsequently, a methodology to decide the optimum statistical distribution by using a robust method adopting a genetic algorithm was proposed and applied to the most influential variable to verify the reliability of the proposed method. Finally, pros and cons of the alternative distributions compared with corresponding ones derived from the traditional method, as well as technical findings from the statistical assessment, were discussed to show applicability to the probabilistic assessment of pressure tubes.
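
    To illustrate the general idea of selecting distribution parameters with a genetic algorithm, the sketch below fits a Weibull distribution to a stand-in degradation sample by maximizing the log-likelihood with a minimal real-coded GA. The distribution choice, GA operators, and data are illustrative assumptions, not the robust method proposed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-in sample of a measured degradation variable (e.g. deuterium uptake rates)
data = stats.weibull_min.rvs(1.8, scale=3.0, size=200, random_state=1)

def log_likelihood(individual):
    """GA fitness: log-likelihood of a candidate Weibull(shape, scale)."""
    shape, scale = individual
    if shape <= 0.0 or scale <= 0.0:
        return -np.inf
    return stats.weibull_min.logpdf(data, shape, scale=scale).sum()

# Minimal real-coded GA: truncation selection, gene-swap crossover, Gaussian mutation
pop = rng.uniform(0.1, 10.0, size=(50, 2))
for _ in range(100):
    scores = np.array([log_likelihood(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-25:]]
    picks = rng.integers(0, 25, size=(25, 2))
    children = np.column_stack([parents[picks[:, 0], 0],    # shape gene
                                parents[picks[:, 1], 1]])   # scale gene
    children += rng.normal(0.0, 0.1, size=children.shape)   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([log_likelihood(ind) for ind in pop])]
```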

  16. Valid Probabilistic Predictions for Ginseng with Venn Machines Using Electronic Nose

    Directory of Open Access Journals (Sweden)

    You Wang

    2016-07-01

    Full Text Available In the application of electronic noses (E-noses), probabilistic prediction is a good way to estimate how confident we are about our predictions. In this work, a homemade E-nose system embedded with 16 metal-oxide semi-conductive gas sensors was used to discriminate nine kinds of ginseng of different species or production places. A flexible machine learning framework, the Venn machine (VM), was introduced to make probabilistic predictions for each prediction. Three Venn predictors were developed based on three classical probabilistic prediction methods (Platt's method, Softmax regression, and Naive Bayes). The three Venn predictors and the three classical probabilistic prediction methods were compared in terms of classification rate and, especially, the validity of the estimated probabilities. A best classification rate of 88.57% was achieved with Platt's method in offline mode, and the classification rate of VM-SVM (Venn machine based on Support Vector Machine) was 86.35%, just 2.22% lower. The validity of the Venn predictors was better than that of the corresponding classical probabilistic prediction methods, and the validity of VM-SVM was superior to the other methods. The results demonstrate that the Venn machine is a flexible tool for making precise and valid probabilistic predictions in E-nose applications, and that VM-SVM achieved the best performance for the probabilistic prediction of ginseng samples.
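
    The Venn-prediction idea can be sketched compactly: for each hypothetical label of a test object, place every example into a category and read off the empirical label frequencies in the test object's category; the spread across hypotheses yields a probability interval. The sketch below uses a 1-nearest-neighbour taxonomy as an illustrative choice, not the paper's Platt, Softmax, or Naive Bayes taxonomies.

```python
import numpy as np

def venn_probabilities(X, y, x_new, labels):
    """Simplified Venn predictor with a 1-NN taxonomy: the category of an
    example is the label of its nearest neighbour in the augmented set."""
    def categories(Xa, ya):
        d = np.linalg.norm(Xa[:, None, :] - Xa[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        return ya[d.argmin(axis=1)]

    estimates = {}
    for y_hyp in labels:                   # try each label for x_new in turn
        Xa = np.vstack([X, x_new])
        ya = np.append(y, y_hyp)
        cat = categories(Xa, ya)
        same = ya[cat == cat[-1]]          # examples sharing the test category
        estimates[y_hyp] = float(np.mean(same == y_hyp))
    return estimates                       # min/max give the probability interval

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(venn_probabilities(X, y, np.array([[0.9]]), labels=[0, 1]))
```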

  17. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    Directory of Open Access Journals (Sweden)

    Anxing Shan

    2017-05-01

    Full Text Available Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement that guarantees both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation to the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which characterizes the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with this analysis, we formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.
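
    A common way to compute the collaborative detection probability under the probabilistic sensing model is to assume independent sensor observations, so that a target is detected if at least one covering sensor detects it. A minimal sketch under that independence assumption (the paper's exact model may differ):

```python
import numpy as np

def collaborative_detection_prob(p_individual):
    """P(at least one covering sensor detects the target), assuming
    independent detections: 1 - prod(1 - p_i)."""
    p = np.asarray(p_individual, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Three sensors covering one target with individual detection probabilities
print(collaborative_detection_prob([0.6, 0.5, 0.3]))   # 0.86
```

    A target would then count toward ϵ-coverage when this value meets the required threshold ϵ.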

  18. Universal Generating Function Based Probabilistic Production Simulation Approach Considering Wind Speed Correlation

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-11-01

    Full Text Available Due to the volatile and correlated nature of wind speed, a high share of wind power penetration poses challenges to power system production simulation. Existing power system probabilistic production simulation approaches fall short of considering the time-varying characteristics of wind power and load, as well as the correlation between wind speeds at the same time, which causes problems in planning and analysis for power systems with high wind power penetration. Based on the universal generating function (UGF), this paper proposes a novel probabilistic production simulation approach considering wind speed correlation. UGF is utilized to develop chronological models of wind power that characterize wind speed correlation, as well as chronological models of conventional generation sources and load. Supply and demand are matched chronologically to obtain not only generation schedules but also reliability indices, both at each simulation interval and over the whole period. The proposed approach has been tested on the improved IEEE-RTS 79 test system and is compared with the Monte Carlo approach and the sequence operation theory approach. The results verify that the proposed approach has the merits of computational simplicity and accuracy.
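
    The UGF machinery itself is compact: a discrete random output is a set of (value, probability) pairs, and independent units are combined through a composition operator. A minimal sketch with made-up capacity levels (the dependence between correlated wind units, which the paper handles chronologically, is not modeled here):

```python
from collections import defaultdict

def ugf_combine(u1, u2, op):
    """Compose two UGFs (dicts value -> probability) under a structure
    function `op`, e.g. addition for total available generation."""
    out = defaultdict(float)
    for v1, p1 in u1.items():
        for v2, p2 in u2.items():
            out[op(v1, v2)] += p1 * p2
    return dict(out)

# Illustrative generation units with discrete output levels (MW)
unit_a = {0: 0.3, 50: 0.5, 100: 0.2}
unit_b = {0: 0.4, 80: 0.6}
total = ugf_combine(unit_a, unit_b, lambda a, b: a + b)

# Reliability index at one interval: loss-of-load probability vs. 120 MW demand
lolp = sum(p for v, p in total.items() if v < 120)
```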

  19. Probabilistic Design of Wave Energy Devices

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kofoed, Jens Peter; Ferreira, C.B.

    2011-01-01

    Wave energy has a large potential for contributing significantly to the production of renewable energy. However, the wave energy sector is still not able to deliver cost-competitive and reliable solutions, although it has already demonstrated several proofs of concept. The design of wave energy...... devices is a new and expanding technical area with no tradition for probabilistic design; in fact, very few full-scale devices have been built to date, so it can be said that no design tradition really exists in this area. For this reason it is considered to be of great importance to develop...... and advocate for a probabilistic design approach, as it is assumed (in other areas this has been demonstrated) that this leads to more economical designs compared to designs based on deterministic methods. In the present paper a general framework for probabilistic design and reliability analysis of wave energy...

  20. Impairment of probabilistic reward-based learning in schizophrenia.

    Science.gov (United States)

    Weiler, Julia A; Bellebaum, Christian; Brüne, Martin; Juckel, Georg; Daum, Irene

    2009-09-01

    Recent models assume that some symptoms of schizophrenia originate from defective reward processing mechanisms. Understanding the precise nature of reward-based learning impairments might thus make an important contribution to the understanding of schizophrenia and the development of treatment strategies. The present study investigated several features of probabilistic reward-based stimulus association learning, namely the acquisition of initial contingencies, reversal learning, generalization abilities, and the effects of reward magnitude. Compared to healthy controls, individuals with schizophrenia exhibited attenuated overall performance during acquisition, whereas learning rates across blocks were similar to the rates of controls. On the group level, persons with schizophrenia were, however, unable to learn the reversal of the initial reward contingencies. Exploratory analysis of only the subgroup of individuals with schizophrenia who showed significant learning during acquisition yielded deficits in reversal learning with low reward magnitudes only. There was further evidence of a mild generalization impairment of the persons with schizophrenia in an acquired equivalence task. In summary, although there was evidence of intact basic processing of reward magnitudes, individuals with schizophrenia were impaired at using this feedback for the adaptive guidance of behavior.

  1. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2017-11-01

    Full Text Available This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimates for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with a square root version of the cubature Kalman filter (SRCKF) is utilized to estimate the targets' state. With the measurements from all sensors processed, CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea for solving multi-sensor tracking problems.

  2. Solving stochastic multiobjective vehicle routing problem using probabilistic metaheuristic

    Directory of Open Access Journals (Sweden)

    Gannouni Asmae

    2017-01-01

    closed form expression. This novel approach is based on combinatorial probability and can be incorporated in a multiobjective evolutionary algorithm. (ii) Provide probabilistic approaches to elitism and diversification in multiobjective evolutionary algorithms. Finally, the behavior of the resulting Probabilistic Multi-objective Evolutionary Algorithms (PrMOEAs) is empirically investigated on the multi-objective stochastic VRP problem.

  3. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting

    KAUST Repository

    Wang, Jingyan

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.

  4. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting

    KAUST Repository

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-01-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.

  5. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    Science.gov (United States)

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable correlation analysis assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing the surgical plan influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  6. Probabilistic systems coalgebraically: A survey

    Science.gov (United States)

    Sokolova, Ana

    2011-01-01

    We survey the work on both discrete and continuous-space probabilistic systems as coalgebras, starting with how probabilistic systems are modeled as coalgebras and followed by a discussion of their bisimilarity and behavioral equivalence, mentioning results that follow from the coalgebraic treatment of probabilistic systems. It is interesting to note that, for different reasons, for both discrete and continuous probabilistic systems it may be more convenient to work with behavioral equivalence than with bisimilarity. PMID:21998490

  7. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2018-03-01

    Full Text Available Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, whose outcomes are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with the heavy rainfall of 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model operates in a highly efficient way and yields more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
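
    The Monte Carlo step described above can be sketched in a few lines: sample the uncertain soil parameters from their defined intervals, evaluate Fs for each draw, and take the failing fraction as the landslide probability of the pixel. The infinite-slope formula and all parameter ranges below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
# Uncertain soil parameters, sampled uniformly over their assumed intervals
cohesion = rng.uniform(5e3, 15e3, n)              # effective cohesion (Pa)
phi = np.radians(rng.uniform(25.0, 35.0, n))      # internal friction angle

gamma, depth, beta = 18e3, 2.0, np.radians(35.0)  # unit weight (N/m^3), depth (m), slope

# Infinite-slope safety factor (pore pressure omitted for brevity)
fs = (cohesion + gamma * depth * np.cos(beta)**2 * np.tan(phi)) / (
     gamma * depth * np.sin(beta) * np.cos(beta))

p_landslide = np.mean(fs < 1.0)   # fraction of simulations with Fs < 1
```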

  8. Probabilistic evaluation of design S-N curve and reliability assessment of ASME code-based evaluation

    International Nuclear Information System (INIS)

    Zhao Yongxiang

    1999-01-01

    A probabilistic approach for evaluating the design S-N curve, and a reliability assessment of the ASME code-based evaluation, are presented on the basis of P-S-N curves built on the Langer S-N model. The P-S-N curves are estimated by a so-called general maximum likelihood method. This method can deal with virtual stress amplitude versus crack-initiation life data, which have the character of double random variables. Investigation of a set of virtual stress amplitude versus crack-initiation life (S-N) data for a 1Cr18Ni9Ti austenitic stainless steel welded joint reveals that the P-S-N curves give a good prediction of the scatter regularity of the S-N data. The probabilistic evaluation of the design S-N curve with 0.9999 survival probability considers various uncertainties, beyond the scatter of the S-N data, to an appropriate extent. The ASME code-based evaluation with a reduction factor of 20 on the mean life is much more conservative than that with a reduction factor of 2 on the stress amplitude. Evaluation of the latter at a virtual stress amplitude of 666.61 MPa is equivalent to a survival probability of 0.999522, and at 2092.18 MPa to a survival probability of 0.9999999995. This means that the evaluation may be non-conservative at low loading levels and, in contrast, too conservative at high loading levels. The cause is that the reduction factors are constants and cannot take into account the general observation that the scatter of the N data increases as the loading level decreases. This indicates that it is necessary to apply the probabilistic approach to the evaluation of the design S-N curve

  9. Probabilistic full waveform inversion based on tectonic regionalization - development and application to the Australian upper mantle

    NARCIS (Netherlands)

    Käufl, P.; Fichtner, A.; Igel, H.

    2013-01-01

    We present a first study to investigate the feasibility of a probabilistic 3-D full waveform inversion based on spectral-element simulations of seismic wave propagation and Monte Carlo exploration of the model space. Through a tectonic regionalization we reduce the dimension of the model space to

  10. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  11. Conditional Probabilistic Population Forecasting

    OpenAIRE

    Sanderson, Warren C.; Scherbov, Sergei; O'Neill, Brian C.; Lutz, Wolfgang

    2004-01-01

    Since policy-makers often prefer to think in terms of alternative scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy-makers because...

  12. Genetic map of Triticum turgidum based on a hexaploid wheat population without genetic recombination for D genome

    Directory of Open Access Journals (Sweden)

    Zhang Li

    2012-08-01

    Full Text Available Background: A synthetic doubled-haploid hexaploid wheat population, SynDH1, derived from the spontaneous chromosome doubling of triploid F1 hybrid plants obtained from the cross of hybrids Triticum turgidum ssp. durum line Langdon (LDN) and ssp. turgidum line AS313, with Aegilops tauschii ssp. tauschii accession AS60, was previously constructed. SynDH1 is a tetraploidization-hexaploid doubled haploid (DH) population because it contains recombinant A and B chromosomes from two different T. turgidum genotypes, while all the D chromosomes from Ae. tauschii are homogenous across the whole population. This paper reports the construction of a genetic map using this population. Results: Of the 606 markers used to assemble the genetic map, 588 (97%) were assigned to linkage groups. These included 513 Diversity Arrays Technology (DArT) markers, 72 simple sequence repeat (SSR), one insertion site-based polymorphism (ISBP), and two high-molecular-weight glutenin subunit (HMW-GS) markers. These markers were assigned to the 14 chromosomes, covering 2048.79 cM, with a mean distance of 3.48 cM between adjacent markers. This map showed good coverage of the A and B genome chromosomes, apart from 3A, 5A, 6A, and 4B. Compared with previously reported maps, most shared markers showed highly consistent orders. This map was successfully used to identify five quantitative trait loci (QTL), including two for spikelet number on chromosomes 7A and 5B, two for spike length on 7A and 3B, and one for 1000-grain weight on 4B. However, differences in crossability QTL between the two T. turgidum parents may explain the segregation distortion regions on chromosomes 1A, 3B, and 6B. Conclusions: A genetic map of T. turgidum including 588 markers was constructed using a synthetic doubled haploid (SynDH) hexaploid wheat population. Five QTLs for three agronomic traits were identified from this population. However, more markers are needed to increase the density and resolution of

  13. Genetic map of Triticum turgidum based on a hexaploid wheat population without genetic recombination for D genome.

    Science.gov (United States)

    Zhang, Li; Luo, Jiang-Tao; Hao, Ming; Zhang, Lian-Quan; Yuan, Zhong-Wei; Yan, Ze-Hong; Liu, Ya-Xi; Zhang, Bo; Liu, Bao-Long; Liu, Chun-Ji; Zhang, Huai-Gang; Zheng, You-Liang; Liu, Deng-Cai

    2012-08-13

    A synthetic doubled-haploid hexaploid wheat population, SynDH1, derived from the spontaneous chromosome doubling of triploid F1 hybrid plants obtained from the cross of hybrids Triticum turgidum ssp. durum line Langdon (LDN) and ssp. turgidum line AS313, with Aegilops tauschii ssp. tauschii accession AS60, was previously constructed. SynDH1 is a tetraploidization-hexaploid doubled haploid (DH) population because it contains recombinant A and B chromosomes from two different T. turgidum genotypes, while all the D chromosomes from Ae. tauschii are homogenous across the whole population. This paper reports the construction of a genetic map using this population. Of the 606 markers used to assemble the genetic map, 588 (97%) were assigned to linkage groups. These included 513 Diversity Arrays Technology (DArT) markers, 72 simple sequence repeat (SSR), one insertion site-based polymorphism (ISBP), and two high-molecular-weight glutenin subunit (HMW-GS) markers. These markers were assigned to the 14 chromosomes, covering 2048.79 cM, with a mean distance of 3.48 cM between adjacent markers. This map showed good coverage of the A and B genome chromosomes, apart from 3A, 5A, 6A, and 4B. Compared with previously reported maps, most shared markers showed highly consistent orders. This map was successfully used to identify five quantitative trait loci (QTL), including two for spikelet number on chromosomes 7A and 5B, two for spike length on 7A and 3B, and one for 1000-grain weight on 4B. However, differences in crossability QTL between the two T. turgidum parents may explain the segregation distortion regions on chromosomes 1A, 3B, and 6B. A genetic map of T. turgidum including 588 markers was constructed using a synthetic doubled haploid (SynDH) hexaploid wheat population. Five QTLs for three agronomic traits were identified from this population. However, more markers are needed to increase the density and resolution of this map in the future study.

  14. Probabilistic risk assessment in nuclear power plant regulation

    Energy Technology Data Exchange (ETDEWEB)

    Wall, J B

    1980-09-01

    A specific program is recommended to utilize probabilistic risk assessment more effectively in nuclear power plant regulation. It is based upon the engineering insights from the Reactor Safety Study (WASH-1400) and some follow-on risk assessment research by the USNRC. The Three Mile Island accident is briefly discussed from a risk viewpoint to illustrate a weakness in current practice. The development of a probabilistic safety goal is recommended, with some suggestions on underlying principles. Some ongoing work on risk perception and the draft probabilistic safety goal being reviewed in Canada are described. Some suggestions are offered on further risk assessment research. Finally, some recent U.S. Nuclear Regulatory Commission actions are described.

  15. Bus Timetabling as a Fuzzy Multiobjective Optimization Problem Using Preference-based Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-05-01

    Full Text Available Transportation plays a vital role in the development of a country, and the car is the most commonly used means. However, in third world countries long waiting times for public buses are a common problem, especially when people need to switch buses. The problem becomes critical when one considers buses joining different villages and cities. Theoretically this problem could be solved by assigning more buses to the route, which is not possible for economic reasons. Another option is to schedule the buses so that customers who want to switch buses at junction cities do not have to wait long. This paper discusses how to model the timetabling of single-frequency bus routes as a fuzzy multiobjective optimization problem and how to solve it using a preference-based genetic algorithm, by assigning appropriate fuzzy preferences to the needs of the customers. The idea is elaborated with an example.
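
    One simple way to express such fuzzy preferences is a membership function that scores each transfer waiting time, aggregated with preference weights into a single fitness that a genetic algorithm maximizes over departure-time offsets. A minimal sketch (the membership shape and all numbers are illustrative, not the paper's formulation):

```python
def waiting_satisfaction(wait_minutes, ideal=5.0, tolerable=30.0):
    """Fuzzy membership of a transfer wait: 1 at or below the ideal wait,
    falling linearly to 0 at the tolerable limit."""
    if wait_minutes <= ideal:
        return 1.0
    if wait_minutes >= tolerable:
        return 0.0
    return (tolerable - wait_minutes) / (tolerable - ideal)

def fitness(waits, preferences):
    """Preference-weighted scalarization of the per-junction objectives."""
    return sum(w * waiting_satisfaction(t) for w, t in zip(preferences, waits))

# e.g. transfer waits at three junction cities under a candidate timetable
print(fitness(waits=[4.0, 12.0, 28.0], preferences=[0.5, 0.3, 0.2]))
```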

  16. FLEET ASSIGNMENT MODELLING

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available The article is devoted to the airline scheduling process and methods of modeling it. It describes the main stages of the airline scheduling process (scheduling, fleet assignment, revenue management, operations), their features, and their interactions. The main part of the scheduling process is fleet assignment. The optimal solution of the fleet assignment problem enables airlines to increase their incomes by up to 3% due to improved quality of connections and execution of the planned number of flights with fewer aircraft than usual or than planned earlier. The fleet assignment stage of the scheduling process is examined and the conventional leg-based fleet assignment model (FAM) is analyzed. Finally, strong and weak aspects of the model (SWOT) are identified and discussed. The article gives a critical analysis of the FAM model, with the purpose of identifying possible options and constraints of its use (for example, in cases of short-term and long-term planning, changing the schedule or replacing aircraft), as well as possible ways to improve the model.

  17. Application of probabilistic risk based optimization approaches in environmental restoration

    International Nuclear Information System (INIS)

    Goldammer, W.

    1995-01-01

    The paper presents a general approach to site-specific risk assessments and optimization procedures. In order to account for uncertainties in the assessment of the current situation and of future developments, optimization parameters are treated as probabilistic distributions. The assessments are performed within the framework of a cost-benefit analysis. Radiation hazards and conventional risks are treated within an integrated approach. Special consideration is given to the consequences of low-probability events such as earthquakes or major floods. Risks and financial costs are combined into an overall figure of detriment, allowing one to distinguish between the benefits of available reclamation options. The probabilistic analysis uses a Monte Carlo simulation technique. The paper demonstrates the applicability of this approach in aiding reclamation planning, using an example from the German reclamation program for uranium mining and milling sites

  18. Conditional Probabilistic Population Forecasting

    OpenAIRE

    Sanderson, W.C.; Scherbov, S.; O'Neill, B.C.; Lutz, W.

    2003-01-01

    Since policy makers often prefer to think in terms of scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy makers because it allows them to answer "what if"...

  19. Conditional probabilistic population forecasting

    OpenAIRE

    Sanderson, Warren; Scherbov, Sergei; O'Neill, Brian; Lutz, Wolfgang

    2003-01-01

    Since policy-makers often prefer to think in terms of alternative scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy-makers because it allows them...

  20. Solving probability reasoning based on DNA strand displacement and probability modules.

    Science.gov (United States)

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems; it is only rarely used for probabilistic reasoning. To perform probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind" and have been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
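
    The two probability modules correspond to textbook identities: the total probability model computes P(B) as the sum of P(B|Aᵢ)P(Aᵢ), and the conditional derivation follows Bayes' rule. A plain-Python sketch of the arithmetic that the strand-displacement circuits emulate (the numbers are illustrative):

```python
def total_probability(priors, conditionals):
    """Law of total probability: P(B) = sum_i P(B|A_i) * P(A_i)."""
    return sum(p * c for p, c in zip(priors, conditionals))

def conditional(prior_a, likelihood, p_b):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior_a / p_b

priors = [0.5, 0.3, 0.2]          # P(A_i), e.g. competing genetic causes
likelihoods = [0.9, 0.4, 0.1]     # P(symptom | A_i)
p_b = total_probability(priors, likelihoods)              # 0.59
posterior = conditional(priors[0], likelihoods[0], p_b)   # ~0.763
```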

  1. Cancer Screening and Genetics: A Tale of Two Paradigms

    OpenAIRE

    Hamilton, Jada G.; Edwards, Heather M.; Khoury, Muin J.; Taplin, Stephen H.

    2014-01-01

    The long-standing medical tradition to “first do no harm” is reflected in population-wide evidence-based recommendations for cancer screening tests that focus primarily on reducing morbidity and mortality. The conventional cancer screening process is predicated on finding early-stage disease that can be treated effectively; yet emerging genetic and genomic testing technologies have moved the target earlier in the disease development process to identify a probabilistic predisposition to diseas...

  2. Optimisation of timetable-based, stochastic transit assignment models based on MSA

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr

    2006-01-01

    (CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...
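
    For readers unfamiliar with MSA, the core of the method fits in a few lines: repeatedly compute an auxiliary network loading at the current flows and average it in with step size 1/k, which under the usual conditions converges to the stochastic user equilibrium. A toy two-route sketch follows; the logit loading and cost function are illustrative stand-ins, not the model described above.

```python
import numpy as np

def msa(loading, x0, n_iter=200):
    """Method of Successive Averages: x_{k+1} = x_k + (y_k - x_k) / k,
    where y_k is the auxiliary (e.g. stochastic) loading at x_k."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        y = loading(x)
        x += (y - x) / k
    return x

def loading(flows):
    """Toy stochastic loading: logit split of 100 passengers over two
    routes whose costs rise with their current flows (congestion)."""
    costs = np.array([10.0, 12.0]) + 0.05 * flows
    p = np.exp(-0.5 * costs)
    return 100.0 * p / p.sum()

print(msa(loading, x0=[50.0, 50.0]))
```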

  3. Feature selection for disruption prediction from scratch in JET by using genetic algorithms and probabilistic predictors

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Augusto, E-mail: augusto.pereira@ciemat.es [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Vega, Jesús; Moreno, Raúl [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Dormido-Canto, Sebastián [Dpto. Informática y Automática – UNED, Madrid (Spain); Rattá, Giuseppe A. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Pavón, Fernando [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2015-10-15

    Recently, a probabilistic classifier has been developed at JET to be used as a predictor from scratch. It has been applied to a database of 1237 JET ITER-like wall (ILW) discharges (of which 201 disrupted) with good results: a success rate of 94% and a false alarm rate of 4.21%. A combinatorial analysis of 14 features was performed to ensure the selection of the best ones for achieving good enough results in terms of success rate and false alarm rate. All possible combinations with between 2 and 7 features were tested, and 9893 different predictors were analyzed. An important drawback in this analysis was the time required to compute the results, estimated at 1731 h (∼2.4 months). Genetic algorithms (GA) are search algorithms that simulate the process of natural selection. In this article, the GA and the Venn predictors are combined with the objective not only of finding good enough features within the 14 available ones but also of reducing the computational time requirements. Five different performance metrics were evaluated as measures of the GA fitness function. The best metric was the measure called Informedness, with just 6 generations (168 predictors in 29.4 h).

  4. Feature selection for disruption prediction from scratch in JET by using genetic algorithms and probabilistic predictors

    International Nuclear Information System (INIS)

    Pereira, Augusto; Vega, Jesús; Moreno, Raúl; Dormido-Canto, Sebastián; Rattá, Giuseppe A.; Pavón, Fernando

    2015-01-01

    Recently, a probabilistic classifier has been developed at JET to be used as a predictor from scratch. It has been applied to a database of 1237 JET ITER-like wall (ILW) discharges (of which 201 disrupted) with good results: a success rate of 94% and a false alarm rate of 4.21%. A combinatorial analysis of 14 features was performed to ensure the selection of the best ones for achieving good enough results in terms of success rate and false alarm rate. All possible combinations with between 2 and 7 features were tested, and 9893 different predictors were analyzed. An important drawback in this analysis was the time required to compute the results, estimated at 1731 h (∼2.4 months). Genetic algorithms (GA) are search algorithms that simulate the process of natural selection. In this article, the GA and the Venn predictors are combined with the objective not only of finding good enough features within the 14 available ones but also of reducing the computational time requirements. Five different performance metrics were evaluated as measures of the GA fitness function. The best metric was the measure called Informedness, with just 6 generations (168 predictors in 29.4 h).
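
    The GA search over feature subsets, with Informedness (Youden's J = sensitivity + specificity - 1) as the fitness, can be sketched as a bitmask GA. Everything below is a toy stand-in: random data replaces the JET discharge database, and a nearest-class-mean classifier replaces the Venn predictor.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy stand-in data: 300 "discharges", 14 candidate features, binary outcome
X = rng.standard_normal((300, 14))
y = (X[:, 0] + 0.8 * X[:, 3] + 0.3 * rng.standard_normal(300) > 0).astype(int)

def informedness(y_true, y_pred):
    """Youden's J = sensitivity + specificity - 1 (the winning GA metric)."""
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return sens + spec - 1.0

def fitness(mask):
    if not mask.any():
        return -1.0
    Xs = X[:, mask]
    # Nearest-class-mean classifier as a cheap stand-in for the Venn predictor
    m1, m0 = Xs[y == 1].mean(axis=0), Xs[y == 0].mean(axis=0)
    y_pred = (np.linalg.norm(Xs - m1, axis=1) <
              np.linalg.norm(Xs - m0, axis=1)).astype(int)
    return informedness(y, y_pred)

pop = rng.integers(0, 2, (20, 14)).astype(bool)     # feature bitmasks
for _ in range(6):                                  # a handful of generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]
    cut = rng.integers(1, 14)                       # one-point crossover
    pairs = rng.integers(0, 10, (10, 2))
    children = np.hstack([parents[pairs[:, 0], :cut],
                          parents[pairs[:, 1], cut:]])
    children ^= rng.random(children.shape) < 0.05   # bit-flip mutation
    pop = np.vstack([parents, children])

best_mask = pop[np.argmax([fitness(m) for m in pop])]
```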

  5. Probabilistic Flood Defence Assessment Tools

    Directory of Open Access Journals (Sweden)

    Slomp Robert

    2016-01-01

    institutions managing the flood defences, and not by just a small number of experts in probabilistic assessment. Therefore, data management and use of software are main issues that have been covered in courses and training in 2016 and 2017. All in all, this is the largest change in the assessment of Dutch flood defences since 1996. In 1996 probabilistic techniques were first introduced to determine hydraulic boundary conditions (water levels and waves (wave height, wave period and direction)) for different return periods. To simplify the process, the assessment continues to consist of a three-step approach, moving from simple decision rules, to the methods for semi-probabilistic assessment, and finally to a fully probabilistic analysis to compare the strength of flood defences with the hydraulic loads. The formal assessment results are thus mainly based on the fully probabilistic analysis and the ultimate limit state of the strength of a flood defence. For complex flood defences, additional models and software were developed. The current Hydra software suite (for policy analysis, formal flood defence assessment and design) will be replaced by the model Ringtoets. New stand-alone software has been developed for revetments, geotechnical analysis and slope stability of the foreshore. Design software and policy analysis software, including the Delta model, will be updated in 2018. A fully probabilistic method results in more precise assessments and more transparency in the process of assessment and reconstruction of flood defences. This is of increasing importance, as large-scale infrastructural projects in a highly urbanized environment are increasingly subject to political and societal pressure to add additional features. For this reason, it is increasingly important to be able to determine which new feature really adds to flood protection, to quantify how much it adds to the level of flood protection, and to evaluate whether it is really worthwhile. Please note: The Netherlands

  6. The probabilistic approach in the licensing process and the development of probabilistic risk assessment methodology in Japan

    International Nuclear Information System (INIS)

    Togo, Y.; Sato, K.

    1981-01-01

    The probabilistic approach has long seemed to be one of the most comprehensive methods for evaluating the safety of nuclear plants. So far, most of the guidelines and criteria for licensing are based on the deterministic concept. However, there have been a few examples to which the probabilistic approach was directly applied, such as the evaluation of aircraft crashes and turbine missiles. One may find other examples of such applications. However, a much more important role is now to be played by this concept in implementing the 52 recommendations drawn from the lessons learned from the TMI accident. To develop the probabilistic risk assessment methodology most relevant to Japanese conditions, a five-year programme plan has been adopted, to be conducted by the Japan Atomic Energy Research Institute from fiscal 1980. Various problems have been identified and are to be solved through this programme plan. The current status of developments is described together with activities outside the government programme. (author)

  7. A General Framework for Probabilistic Characterizing Formulae

    DEFF Research Database (Denmark)

    Sack, Joshua; Zhang, Lijun

    2012-01-01

    Recently, a general framework on characteristic formulae was proposed by Aceto et al. It offers a simple theory that allows one to easily obtain characteristic formulae of many non-probabilistic behavioral relations. Our paper studies their techniques in a probabilistic setting. We provide...... a general method for determining characteristic formulae of behavioral relations for probabilistic automata using fixed-point probability logics. We consider such behavioral relations as simulations and bisimulations, probabilistic bisimulations, probabilistic weak simulations, and probabilistic forward...

  8. A probabilistic Hu-Washizu variational principle

    Science.gov (United States)

    Liu, W. K.; Belytschko, T.; Besterfield, G. H.

    1987-01-01

    A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.

  9. Probabilistic modeling of discourse-aware sentence processing.

    Science.gov (United States)

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  10. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.

  11. Making Probabilistic Relational Categories Learnable

    Science.gov (United States)

    Jung, Wookyoung; Hummel, John E.

    2015-01-01

    Theories of relational concept acquisition (e.g., schema induction) based on structured intersection discovery predict that relational concepts with a probabilistic (i.e., family resemblance) structure ought to be extremely difficult to learn. We report four experiments testing this prediction by investigating conditions hypothesized to facilitate…

  12. Biological sequence analysis: probabilistic models of proteins and nucleic acids

    National Research Council Canada - National Science Library

    Durbin, Richard

    1998-01-01

    ... analysis methods are now based on principles of probabilistic modelling. Examples of such methods include the use of probabilistically derived score matrices to determine the significance of sequence alignments, the use of hidden Markov models as the basis for profile searches to identify distant members of sequence families, and the inference...

  13. Systematic evaluations of probabilistic floor response spectrum generation

    International Nuclear Information System (INIS)

    Lilhanand, K.; Wing, D.W.; Tseng, W.S.

    1985-01-01

    The relative merits of the current methods for direct generation of probabilistic floor response spectra (FRS) from the prescribed design response spectra (DRS) are evaluated. The explicit probabilistic methods, which explicitly use the relationship between the power spectral density function (PSDF) and response spectra (RS), i.e., the PSDF-RS relationship, are found to have advantages for practical applications over the implicit methods. To evaluate the accuracy of the explicit methods, the root-mean-square (rms) response and the peak factor contained in the PSDF-RS relationship are systematically evaluated, especially for the narrow-band floor spectral response, by comparing the analytical results with simulation results. Based on the evaluation results, a method is recommended for practical use for the direct generation of probabilistic FRS. (orig.)

  14. Advances in probabilistic risk analysis

    International Nuclear Information System (INIS)

    Hardung von Hardung, H.

    1982-01-01

    Probabilistic risk analysis can now look back upon almost a quarter century of intensive development. The early studies, whose methods and results are still referred to occasionally, however, only permitted rough estimates to be made of the probabilities of recognizable accident scenarios, failing to provide a method which could have served as a reference base in calculating the overall risk associated with nuclear power plants. The first truly solid attempt was the Rasmussen Study and, partly based on it, the German Risk Study. In those studies, probabilistic risk analysis has been given a much more precise basis. However, new methodologies have been developed in the meantime, which allow much more informative risk studies to be carried out. They have been found to be valuable tools for management decisions with respect to backfitting, reinforcement and risk limitation. Today they are mainly applied by specialized private consultants and have already found widespread application especially in the USA. (orig.) [de

  15. Probabilistic safety goals. Phase 3 - Status report

    Energy Technology Data Exchange (ETDEWEB)

    Holmberg, J.-E. (VTT (Finland)); Knochenhauer, M. (Relcon Scandpower AB, Sundbyberg (Sweden))

    2009-07-15

    The first phase of the project (2006) described the status, concepts and history of probabilistic safety goals for nuclear power plants. The second and third phases (2007-2008) have provided guidance related to the resolution of some of the problems identified, and resulted in a common understanding regarding the definition of safety goals. The basic aim of phase 3 (2009) has been to increase the scope and level of detail of the project, and to start preparations of a guidance document. Based on the conclusions from the previous project phases, the following issues have been covered: 1) Extension of international overview. Analysis of results from the questionnaire performed within the ongoing OECD/NEA WGRISK activity on probabilistic safety criteria, including participation in the preparation of the working report for OECD/NEA/WGRISK (to be finalised in phase 4). 2) Use of subsidiary criteria and relations between these (to be finalised in phase 4). 3) Numerical criteria when using probabilistic analyses in support of deterministic safety analysis (to be finalised in phase 4). 4) Guidance for the formulation, application and interpretation of probabilistic safety criteria (to be finalised in phase 4). (LN)

  16. Probabilistic safety goals. Phase 3 - Status report

    International Nuclear Information System (INIS)

    Holmberg, J.-E.; Knochenhauer, M.

    2009-07-01

    The first phase of the project (2006) described the status, concepts and history of probabilistic safety goals for nuclear power plants. The second and third phases (2007-2008) have provided guidance related to the resolution of some of the problems identified, and resulted in a common understanding regarding the definition of safety goals. The basic aim of phase 3 (2009) has been to increase the scope and level of detail of the project, and to start preparations of a guidance document. Based on the conclusions from the previous project phases, the following issues have been covered: 1) Extension of international overview. Analysis of results from the questionnaire performed within the ongoing OECD/NEA WGRISK activity on probabilistic safety criteria, including participation in the preparation of the working report for OECD/NEA/WGRISK (to be finalised in phase 4). 2) Use of subsidiary criteria and relations between these (to be finalised in phase 4). 3) Numerical criteria when using probabilistic analyses in support of deterministic safety analysis (to be finalised in phase 4). 4) Guidance for the formulation, application and interpretation of probabilistic safety criteria (to be finalised in phase 4). (LN)

  17. Probabilistic Location-based Routing Protocol for Mobile Wireless Sensor Networks with Intermittent Communication

    Directory of Open Access Journals (Sweden)

    Sho KUMAGAI

    2015-02-01

    Full Text Available In a sensor network, sensor data messages reach the nearest stationary sink node connected to the Internet by wireless multihop transmissions. Recently, various mobile sensors have become available due to advances in robotics and communication technologies. A location-based message-by-message routing protocol, such as Geographic Distance Routing (GEDIR), is suitable for such mobile wireless networks; however, each mobile wireless sensor node is required to know the current locations of all its neighbor nodes. On the other hand, various intermittent communication methods for low power consumption have been proposed for wireless sensor networks. Intermittent Receiver-driven Data Transmission (IRDT) is one of the most efficient methods; however, it is difficult to combine location-based routing with intermittent communication. In order to solve this problem, this paper proposes a probabilistic approach, IRDT-GEDIR, with the help of one of the solutions of the secretary problem. Here, each time a neighbor sensor node wakes up from its sleep mode, an intermediate sensor node determines whether or not to forward its buffered sensor data messages to it, based on an estimation of the achieved pseudo speed of the messages. Simulation experiments show that IRDT-GEDIR achieves higher pseudo speed of sensor data message transmissions and shorter transmission delay than the two naive combinations of IRDT and GEDIR in sensor networks with mobile sensor nodes and a stationary sink node. In addition, a guideline on the estimated number of neighbor nodes of each intermediate sensor node, needed to apply the probabilistic approach IRDT-GEDIR, is provided based on the results of the simulation experiments.
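
    The forwarding rule described above maps naturally onto the classic secretary-problem stopping rule. A minimal sketch (the wake-up order, speed values and expected node count are hypothetical, and the real protocol estimates pseudo speed from geography and latency rather than observing it directly):

```python
import math
import random

def forward_decision(pseudo_speeds, n_expected):
    """Secretary-style rule: observe the first n/e wake-ups, then
    forward to the first neighbour whose achievable pseudo speed
    beats everything seen so far."""
    threshold_idx = max(1, round(n_expected / math.e))
    best_seen = -float("inf")
    for i, speed in enumerate(pseudo_speeds):
        if i < threshold_idx:
            best_seen = max(best_seen, speed)   # observation phase
        elif speed > best_seen:
            return i                            # forward on this wake-up
    return len(pseudo_speeds) - 1               # fall back to the last one

random.seed(6)
speeds = [random.uniform(0.1, 2.0) for _ in range(10)]  # hypothetical
print(forward_decision(speeds, n_expected=10))
```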

  18. Probabilistic Logic and Probabilistic Networks

    NARCIS (Netherlands)

    Haenni, R.; Romeijn, J.-W.; Wheeler, G.; Williamson, J.

    2009-01-01

    While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches

  19. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    Science.gov (United States)

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data without the necessity to assume a noise model. These methods have been previously combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of

  20. Accuracy of the Bethe approximation for hyperparameter estimation in probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Shouno, Hayaru; Okada, Masato; Titterington, D M

    2004-01-01

    We investigate the accuracy of statistical-mechanical approximations for the estimation of hyperparameters from observable data in probabilistic image processing, which is based on Bayesian statistics and maximum likelihood estimation. Hyperparameters in statistical science correspond to interactions or external fields in the statistical-mechanics context. In this paper, hyperparameters in the probabilistic model are determined so as to maximize a marginal likelihood. A practical algorithm is described for grey-level image restoration based on a Gaussian graphical model and the Bethe approximation. The algorithm corresponds to loopy belief propagation in artificial intelligence. We examine the accuracy of hyperparameter estimation when we use the Bethe approximation. It is well known that a practical algorithm for probabilistic image processing can be prescribed analytically when a Gaussian graphical model is adopted as a prior probabilistic model in Bayes' formula. We are therefore able to compare, in a numerical study, results obtained through mean-field-type approximations with those based on exact calculation

  1. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    Science.gov (United States)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations we relate susceptibility index to an empirically-derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
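
    The frequency-ratio computation is straightforward to reproduce. A minimal sketch (the grids, class labels and landslide mask below are hypothetical, not the authors' data):

```python
import numpy as np

def frequency_ratios(attr_class, landslide_mask):
    """Frequency ratio per attribute class: (share of landslide cells
    falling in the class) / (share of all cells falling in the class)."""
    ratios = {}
    total_cells = attr_class.size
    total_slides = landslide_mask.sum()
    for c in np.unique(attr_class):
        in_class = attr_class == c
        slide_share = landslide_mask[in_class].sum() / total_slides
        class_share = in_class.sum() / total_cells
        ratios[c] = slide_share / class_share
    return ratios

# Hypothetical 100x100 grids: one site attribute (e.g. slope class)
# and observed landslide cells.
rng = np.random.default_rng(0)
slope_class = rng.integers(0, 5, size=(100, 100))   # 5 slope classes
landslides = rng.random((100, 100)) < 0.02          # ~2% of cells

fr = frequency_ratios(slope_class, landslides)
# Susceptibility index: add the FR of each attribute on a cell basis.
# With one attribute this is a lookup; with seven attributes the
# per-attribute FR maps are summed.
susceptibility = np.vectorize(fr.get)(slope_class)
```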

  2. Predictive control for stochastic systems based on multi-layer probabilistic sets

    Directory of Open Access Journals (Sweden)

    Huaqing LIANG

    2016-04-01

    Full Text Available Aiming at a class of discrete-time stochastic systems with Markov jump features, the state-feedback predictive control problem under probabilistic constraints of input variables is researched. On the basis of the concept and method of the multi-layer probabilistic sets, the predictive controller design algorithm with the soft constraints of different probabilities is presented. Under the control of the multi-step feedback laws, the system state moves to different ellipses with specified probabilities. The stability of the system is guaranteed, the feasible region of the control problem is enlarged, and the system performance is improved. Finally, a simulation example is given to prove the effectiveness of the proposed method.

  3. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
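
    To make the three-chromosome encoding concrete, here is a heavily simplified, hypothetical sketch: a 1D "dose" profile, segments defined by left/right leaf positions plus weights, evolved by chromosome-wise crossover and mutation. The real problem is 2D and uses a clinical dose engine; this only illustrates the encoding and operators.

```python
import numpy as np

rng = np.random.default_rng(1)
N_LEAF_POS, N_SEGMENTS, POP = 20, 3, 40
target = np.ones(N_LEAF_POS)          # hypothetical 1D target profile

def random_plan():
    left = rng.integers(0, N_LEAF_POS // 2, N_SEGMENTS)            # chromosome 1
    right = rng.integers(N_LEAF_POS // 2, N_LEAF_POS, N_SEGMENTS)  # chromosome 2
    weight = rng.random(N_SEGMENTS)                                # chromosome 3
    return [left, right, weight]

def dose(plan):
    left, right, weight = plan
    d = np.zeros(N_LEAF_POS)
    for l, r, w in zip(left, right, weight):
        d[l:r] += w                    # each segment irradiates its aperture
    return d

def fitness(plan):
    return -np.sum((dose(plan) - target) ** 2)   # negative squared error

def crossover(a, b):
    # exchange information chromosome-wise between two parents
    return [np.where(rng.random(N_SEGMENTS) < 0.5, ca, cb)
            for ca, cb in zip(a, b)]

def mutate(plan):
    left, right, weight = (c.copy() for c in plan)
    i = rng.integers(N_SEGMENTS)
    left[i] = rng.integers(0, N_LEAF_POS // 2)
    right[i] = rng.integers(N_LEAF_POS // 2, N_LEAF_POS)
    weight[i] = rng.random()
    return [left, right, weight]

pop = [random_plan() for _ in range(POP)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]
    children = []
    while len(children) < POP - len(survivors):
        i, j = rng.choice(len(survivors), 2, replace=False)
        children.append(mutate(crossover(survivors[i], survivors[j])))
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))
```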

  4. EVALUATION OF MILITARY ACTIVITY IMPACT ON HUMANS THROUGH A PROBABILISTIC ECOLOGICAL RISK ASSESSMENT. EXAMPLE OF A FORMER MISSILE BASE.

    Directory of Open Access Journals (Sweden)

    Sergiy OREL

    2015-10-01

    Full Text Available The current article provides a methodology for assessing environmental factors after the termination of military activity, using a former missile base as an example. The assessment of environmental conditions is performed through an evaluation of the risks posed to human health by the hazardous chemicals contained in underground and surface water sources and in soil. Moreover, by conducting both deterministic and probabilistic risk assessments, the article determines that the probabilistic assessment provides more accurate and informative input for decision-making on the use of environmental protection measures, which often saves the financial and material resources needed for their implementation.

  5. Probabilistic programmable quantum processors

    International Nuclear Information System (INIS)

    Buzek, V.; Ziman, M.; Hillery, M.

    2004-01-01

    We analyze how to improve the performance of probabilistic programmable quantum processors. We show how the probability of success of the probabilistic processor can be enhanced by using the processor in loops. In addition, we show that arbitrary SU(2) transformations of qubits can be encoded in the program state of a universal programmable probabilistic quantum processor. The probability of success of this processor can be enhanced by a systematic correction of errors via conditional loops. Finally, we show that all our results can be generalized to qudits. (Abstract Copyright [2004], Wiley Periodicals, Inc.)
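
    The loop-based enhancement admits a one-line sanity check: if a single pass of the processor succeeds with probability p and failures are detected and retried, k conditional passes succeed with probability 1 - (1 - p)^k. This repeat-until-success arithmetic is generic, not the paper's specific gate construction:

```python
def success_after_loops(p: float, k: int) -> float:
    """Repeat-until-success: probability that at least one of k
    conditional passes of the probabilistic processor succeeds."""
    return 1.0 - (1.0 - p) ** k

# e.g. a 25% single-shot program reaches ~90% success within 8 loops
print(success_after_loops(0.25, 8))  # ~0.8999
```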

  6. Probabilistic safety assessment goals in Canada

    International Nuclear Information System (INIS)

    Snell, V.G.

    1986-01-01

    CANDU safety philosophy, both in design and in licensing, has always had a strong bias towards quantitative probabilistically-based goals derived from comparative safety. Formal probabilistic safety assessment began in Canada as a design tool. The influence of this carried over later into the definition of the deterministic safety guidelines used in CANDU licensing. Design goals were further developed which extended the consequence/frequency spectrum of 'acceptable' events, from the two points defined by the deterministic single/dual failure analysis, to a line passing through lower and higher frequencies. Since these were design tools, a complete risk summation was not necessary, allowing a cutoff at low event frequencies while preserving the identification of the most significant safety-related events. These goals gave a logical framework for making decisions on implementing design changes proposed as a result of the Probabilistic Safety Analysis. Performing this analysis became a regulatory requirement, and the design goals remained the framework under which it was submitted. Recently, there have been initiatives to incorporate more detailed probabilistic safety goals into the regulatory process in Canada. These range from far-reaching safety optimization across society, to initiatives aimed at the nuclear industry only. The effectiveness of the latter is minor at very low and very high event frequencies; at medium frequencies, a justification against expenditures per life saved in other industries should be part of the goal setting.

  7. Probabilistic commodity-flow-based focusing of monitoring activities to facilitate early detection of Phytophthora ramorum outbreaks

    Science.gov (United States)

    Steven C. McKelvey; William D. Smith; Frank Koch

    2012-01-01

    This project summary describes a probabilistic model developed with funding support from the Forest Health Monitoring Program of the Forest Service, U.S. Department of Agriculture (BaseEM Project SO-R-08-01). The model has been implemented in SODBuster, a standalone software package developed using the Java software development kit from Sun Microsystems.

  8. Probabilistic Seismic Hazard Assessment Method for Nonlinear Soil Sites based on the Hazard Spectrum of Bedrock Sites

    International Nuclear Information System (INIS)

    Hahm, Dae Gi; Seo, Jeong Moon; Choi, In Kil

    2011-01-01

    For the probabilistic safety assessment of nuclear power plants (NPP) under seismic events, a rational probabilistic seismic hazard estimation should be performed. Generally, the probabilistic seismic hazard of an NPP site is represented by the uniform hazard spectrum (UHS) for a specific annual frequency. In most cases, since the attenuation equations were defined for bedrock sites, the standard attenuation laws cannot be applied to general soft soil sites. Hence, for the probabilistic estimation of the seismic hazard of soft soil sites, a methodology of probabilistic seismic hazard analysis (PSHA) coupled with nonlinear dynamic analyses of the soil column is required. Two methods are commonly used for site response analysis considering the nonlinearity of sites: one deterministic, the other probabilistic. In the analysis of site response, there exist many uncertainty factors, such as the variation of the magnitude and frequency contents of the input ground motion, and the material properties of the soil deposits. Hence, nowadays the adoption of the probabilistic method is recommended for the PSHA of soft soil deposits to account for such uncertainty factors. In this study, we estimated the amplification factor of the surface of soft soil deposits considering the uncertainties of the input ground motions and the soil material properties. Then, we proposed a probabilistic methodology to evaluate the UHS of a soft soil site by multiplying the amplification factor by the UHS of the bedrock site. The proposed method was applied to four typical target sites of the KNGR and APR1400 NPP site categories.
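
    The final multiplication step lends itself to a compact Monte Carlo sketch. Everything below is hypothetical; in particular the stand-in site-response function replaces a real nonlinear soil-column analysis. It only illustrates propagating uncertain soil stiffness and input-motion scaling into an amplification factor that then scales the bedrock UHS:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bedrock uniform hazard spectrum (UHS) at a few periods.
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])    # s
uhs_rock = np.array([0.8, 1.0, 0.6, 0.3, 0.15])  # g

def site_response(vs30, motion_scale):
    """Stand-in for a nonlinear soil-column analysis: returns the
    spectral amplification at each period. A real study would run a
    1D wave-propagation code here."""
    return 1.0 + motion_scale * np.exp(-periods * vs30 / 400.0)

n = 5000
vs30 = rng.normal(250.0, 40.0, n)     # uncertain soil stiffness
scale = rng.lognormal(0.0, 0.3, n)    # uncertain input motions

af = np.array([site_response(v, s) for v, s in zip(vs30, scale)])
af_mean = af.mean(axis=0)             # or a chosen fractile

uhs_soil = af_mean * uhs_rock         # the proposed multiplication step
print(uhs_soil)
```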

  9. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
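
    A toy version of the state-based predictor fits in a few lines: a transition matrix over the three breathing states, a per-state mean velocity (as delivered by the k-means step), and propagation over the system latency. All numbers below are hypothetical:

```python
import numpy as np

STATES = ["inhale", "exhale", "end_of_exhale"]

# Hypothetical transition probabilities learned from breathing traces.
A = np.array([[0.90, 0.08, 0.02],    # inhale -> ...
              [0.05, 0.85, 0.10],    # exhale -> ...
              [0.30, 0.05, 0.65]])   # end_of_exhale -> ...

# Mean tumor velocity per state (mm/s), e.g. from k-means clustering
# of observed velocities.
state_velocity = np.array([4.0, -3.0, 0.2])

def predict(state_idx, position, dt, horizon):
    """Propagate the most likely state over the system latency
    `horizon` and integrate the per-state velocity."""
    p = np.zeros(len(STATES))
    p[state_idx] = 1.0
    t = 0.0
    while t < horizon:
        p = p @ A                                  # one HMM step
        position += state_velocity[np.argmax(p)] * dt
        t += dt
    return position

# Predict tumor position 200 ms ahead from the exhale state at 10 mm.
print(predict(1, 10.0, dt=0.033, horizon=0.200))
```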

  10. Probabilistic Physics of Failure-based framework for fatigue life prediction of aircraft gas turbine discs under uncertainty

    International Nuclear Information System (INIS)

    Zhu, Shun-Peng; Huang, Hong-Zhong; Peng, Weiwen; Wang, Hai-Kun; Mahadevan, Sankaran

    2016-01-01

    A probabilistic Physics of Failure-based framework for fatigue life prediction of aircraft gas turbine discs operating under uncertainty is developed. The framework incorporates the overall uncertainties appearing in a structural integrity assessment. A comprehensive uncertainty quantification (UQ) procedure is presented to quantify multiple types of uncertainty using multiplicative and additive UQ methods. In addition, the factors that contribute the most to the resulting output uncertainty are investigated and identified for uncertainty reduction in decision-making. A high prediction accuracy of the proposed framework is validated through a comparison of model predictions to the experimental results of GH4133 superalloy and full-scale tests of aero engine high-pressure turbine discs. - Highlights: • A probabilistic PoF-based framework for fatigue life prediction is proposed. • A comprehensive procedure for quantifying multiple types of uncertainty is presented. • The factors that contribute most to the resulting output uncertainty are identified. • The proposed framework demonstrates high prediction accuracy by full-scale tests.

  11. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    Science.gov (United States)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  12. Development of probabilistic evaluation methodology for structural integrity of nuclear components

    International Nuclear Information System (INIS)

    Lee, Gang Yong; Yang, Jee Hyeok; Shin, Jeong Woo; Hong, Soon Won; Lee, Won Gyu; Kim, Goo Yeong

    1999-03-01

    Since structural integrity is very important in nuclear power plants, much research has been conducted and several rules have been provided. However, these are mostly based on the concept of deterministic fracture mechanics, and in many cases those rules are unrealistic or overly conservative. Therefore, the concept of probabilistic fracture mechanics, which considers the realistic failure of the structure and a quantitative failure probability, is being introduced in many fields. There has been much research on probabilistic fracture mechanics worldwide, but little in Korea. The final objective of our research is to develop such a code over a period of several years. In the first-year study, we established the concept of probabilistic fracture mechanics by reviewing papers on the integrity evaluation of nuclear pressure vessels based on probabilistic fracture mechanics, and selected the important random variables by comparing the effects of the random variables on the failure probability using an existing code.

  13. A Probabilistic Approach for Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    A probabilistic based robustness analysis has been performed for a glulam frame structure supporting the roof over the main court in a Norwegian sports centre. The robustness analysis is based on the framework for robustness analysis introduced in the Danish Code of Practice for the Safety...... of Structures and a probabilistic modelling of the timber material proposed in the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS). Due to the framework in the Danish Code the timber structure has to be evaluated with respect to the following criteria, where at least one shall...... With respect to criteria a) and b) the timber frame structure has one column with a reliability index a bit lower than an assumed target level. By removing three columns one by one, no significant extensive failure of the entire structure or significant parts of it is obtained. Therefore the structure can be considered......

  14. Next-generation probabilistic seismicity forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Hiemer, S.

    2014-07-01

    The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a
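
    The kernel-smoothing step described here is easy to reproduce in miniature. The sketch below smooths a hypothetical epicenter catalog into a spatially continuous rate density with an isotropic Gaussian kernel; the thesis additionally smooths slip rates on mapped faults, which is omitted here:

```python
import numpy as np

def kernel_rate(eq_xy, grid_x, grid_y, bandwidth_km):
    """Smooth past epicenters into a continuous rate density with an
    isotropic Gaussian kernel (normalised to integrate to one)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx, dtype=float)
    for x, y in eq_xy:
        d2 = (gx - x) ** 2 + (gy - y) ** 2
        rate += np.exp(-d2 / (2 * bandwidth_km ** 2))
    return rate / (2 * np.pi * bandwidth_km ** 2 * len(eq_xy))

rng = np.random.default_rng(3)
catalog = rng.uniform(0, 100, size=(200, 2))   # hypothetical epicenters, km
grid = np.linspace(0, 100, 101)
density = kernel_rate(catalog, grid, grid, bandwidth_km=5.0)
```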

  15. Next-generation probabilistic seismicity forecasting

    International Nuclear Information System (INIS)

    Hiemer, S.

    2014-01-01

    The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a

  16. PROBABILISTIC RELATIONAL MODELS OF COMPLETE IL-SEMIRINGS

    OpenAIRE

    Tsumagari, Norihiro

    2012-01-01

    This paper studies basic properties of probabilistic multirelations, which generalize the semantic domain of probabilistic systems, and then provides two probabilistic models of complete IL-semirings using probabilistic multirelations. It is also shown that these models need not be models of complete idempotent semirings.

  17. A perspective of PC-based probabilistic risk assessment

    International Nuclear Information System (INIS)

    Sattison, M.B.; Rasmuson, D.M.; Robinson, R.C.; Russell, K.D.; Van Siclen, V.S.

    1987-01-01

    Probabilistic risk assessment (PRA) information has been under-utilized in the past due to the large effort required to input the PRA data and the large expense of the computers needed to run PRA codes. The microcomputer-based Integrated Reliability and Risk Analysis System (IRRAS) and the System Analysis and Risk Assessment (SARA) System, under development at the Idaho National Engineering Laboratory, have greatly enhanced the ability of managers to use PRA techniques in their decision-making. IRRAS is a tool that allows an analyst to create, modify, update, and reanalyze a plant PRA to keep the risk assessment current with the plant's configuration and operation. The SARA system is used to perform sensitivity studies on the results of a PRA. This type of analysis can be used to evaluate proposed changes to a plant or its operation. The success of these two software projects demonstrates that risk information can be made readily available to those who need it. This is the first step in the development of a true risk management capability.

  18. Protein secondary structure assignment revisited: a detailed analysis of different assignment methods

    Directory of Open Access Journals (Sweden)

    de Brevern Alexandre G

    2005-09-01

    Full Text Available Abstract Background A number of methods are now available to perform automatic assignment of periodic secondary structures from atomic coordinates, based on different characteristics of the secondary structures. In general these methods exhibit a broad consensus as to the location of most helix and strand core segments in protein structures. However the termini of the segments are often ill-defined and it is difficult to decide unambiguously which residues at the edge of the segments have to be included. In addition, there is a "twilight zone" where secondary structure segments depart significantly from the idealized models of Pauling and Corey. For these segments, one has to decide whether the observed structural variations are merely distortions or whether they constitute a break in the secondary structure. Methods To address these problems, we have developed a method for secondary structure assignment, called KAKSI. Assignments made by KAKSI are compared with assignments given by DSSP, STRIDE, XTLSSTR, PSEA and SECSTR, as well as secondary structures found in PDB files, on 4 datasets (X-ray structures with different resolution ranges, NMR structures). Results A detailed comparison of KAKSI assignments with those of STRIDE and PSEA reveals that KAKSI assigns slightly longer helices and strands than STRIDE in cases of one-to-one correspondence between the segments. However, KAKSI also tends to favor the assignment of several short helices when STRIDE and PSEA assign longer, kinked helices. Helices assigned by KAKSI have geometrical characteristics close to those described in the PDB. They are more linear than helices assigned by other methods. The same tendency to split long segments is observed for strands, although less systematically. We present a number of cases of secondary structure assignments that illustrate this behavior. Conclusion Our method provides valuable assignments which favor the regularity of secondary structure segments.

  19. Probabilistic Forecasting of Photovoltaic Generation: An Efficient Statistical Approach

    DEFF Research Database (Denmark)

    Wan, Can; Lin, Jin; Song, Yonghua

    2017-01-01

    This letter proposes a novel efficient probabilistic forecasting approach to accurately quantify the variability and uncertainty of the power production from photovoltaic (PV) systems. Distinguished from most existing models, a linear programming based prediction interval construction model for PV power generation is proposed based on extreme learning machine and quantile regression, featuring high reliability and computational efficiency. The proposed approach is validated through the numerical studies on PV data from Denmark.

  20. Probabilistic Model for Fatigue Crack Growth in Welded Bridge Details

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard; Yalamas, Thierry

    2013-01-01

    In the present paper a probabilistic model for fatigue crack growth in welded steel details in road bridges is presented. The probabilistic model takes the influence of bending stresses in the joints into account. The bending stresses can either be introduced by e.g. misalignment or redistribution of stresses in the structure. The fatigue stress ranges are estimated from traffic measurements and a generic bridge model. Based on the probabilistic models for the resistance and load, the reliability is estimated for a typical welded steel detail. The results show that large misalignments in the joints can...

  1. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  2. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    Science.gov (United States)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
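
    Any such 1D MT inversion repeatedly evaluates the layered-earth forward response. The standard impedance recursion is short enough to sketch; the model values below are hypothetical, and a sampler such as NUTS would then explore layer resistivities and thicknesses under a likelihood built on these responses:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def mt_forward(resistivities, thicknesses, freqs):
    """1D magnetotelluric forward response of a layered earth
    (impedance recursion): apparent resistivity and phase."""
    rho_a, phase = [], []
    for f in freqs:
        w = 2 * np.pi * f
        # intrinsic impedance of the bottom half-space
        Z = np.sqrt(1j * w * MU0 * resistivities[-1])
        # propagate the impedance up through the finite layers
        for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
            zj = np.sqrt(1j * w * MU0 * rho)   # layer intrinsic impedance
            kj = np.sqrt(1j * w * MU0 / rho)   # layer wavenumber
            t = np.tanh(kj * h)
            Z = zj * (Z + zj * t) / (zj + Z * t)
        rho_a.append(abs(Z) ** 2 / (w * MU0))
        phase.append(np.degrees(np.angle(Z)))
    return np.array(rho_a), np.array(phase)

# Hypothetical 3-layer model: 100 / 10 / 1000 ohm-m, 500 m and 2000 m thick.
freqs = np.logspace(-3, 2, 30)
ra, ph = mt_forward([100.0, 10.0, 1000.0], [500.0, 2000.0], freqs)
```

    As a sanity check, a single homogeneous half-space returns its own resistivity as apparent resistivity and a 45-degree phase at all frequencies.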

  3. PRECIS -- A probabilistic risk assessment system

    International Nuclear Information System (INIS)

    Peterson, D.M.; Knowlton, R.G. Jr.

    1996-01-01

    A series of computer tools has been developed to conduct the exposure assessment and risk characterization phases of human health risk assessments within a probabilistic framework. The tools are collectively referred to as the Probabilistic Risk Evaluation and Characterization Investigation System (PRECIS). With this system, a risk assessor can calculate the doses and risks associated with multiple environmental and exposure pathways, for both chemicals and radioactive contaminants. Exposure assessment models in the system account for transport of contaminants to receptor points from a source zone originating in unsaturated soils above the water table. In addition to performing calculations of dose and risk based on initial concentrations, PRECIS can also be used in an inverse manner to compute soil concentrations in the source area that must not be exceeded if prescribed limits on dose or risk are to be met. Such soil contaminant levels, referred to as soil guidelines, are computed for both single contaminants and chemical mixtures and can be used as action levels or cleanup levels. Probabilistic estimates of risk, dose and soil guidelines are derived using Monte Carlo techniques

  4. A robust probabilistic collaborative representation based classification for multimodal biometrics

    Science.gov (United States)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

    Most of the traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Due to these inherent problems, attempts to enhance the performance of unimodal biometric systems based on single features have limited effect. Thus, multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.
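
    The deterministic core of a collaborative representation classifier is a ridge-regularised coding of the query over all training samples, followed by class-wise reconstruction residuals; the probabilistic variant refines how those residuals are interpreted. A minimal sketch of the plain-CRC core, with hypothetical fused feature vectors:

```python
import numpy as np

def crc_train(X, lam):
    """Precompute the ridge projection P = (X^T X + lam I)^-1 X^T
    used by collaborative representation classifiers."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)

def crc_classify(y, X, labels, P):
    """Code the query over ALL training samples jointly, then assign
    the class whose samples best reconstruct it."""
    alpha = P @ y
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(4)
# Hypothetical fused feature vectors (e.g. concatenated block-LBP
# histograms of face and fingerprint): 60-dim, 5 samples x 4 classes.
labels = np.repeat(np.arange(4), 5)
X = rng.normal(size=(60, 20)) + labels[None, :]   # columns = samples
query = X[:, 7] + 0.1 * rng.normal(size=60)       # near a class-1 sample

P = crc_train(X, lam=0.5)
print(crc_classify(query, X, labels, P))          # should recover class 1
```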

  5. Class Schedule Assignment Based on Students Learning Rhythms Using A Genetic Algorithm Asignación de horarios de clase basado en los ritmos de aprendizaje de los estudiantes usando un algoritmo genético

    Directory of Open Access Journals (Sweden)

    Victor F. Suarez Chilma

    2013-03-01

    Full Text Available The objective of this proposal is to implement a school day agenda focused on the learning rhythms of students of elementary and secondary schools using a genetic algorithm. The methodology of this proposal takes into account legal requirements and constraints on the assignment of teachers and classrooms in public educational institutions in Colombia. In addition, this proposal provides a set of constraints focused on cognitive rhythms, and subjects are scheduled at the most convenient times according to the area of knowledge. The genetic algorithm evolves through a process of mutation and selection and builds a total solution based on the best solutions for each group. Sixteen groups in a school are tested and the results of class schedule assignments are presented. The quality of the solution obtained through the established approach is validated by comparing the results to the solutions obtained using another algorithm.

  6. Multiobjective Order Assignment Optimization in a Global Multiple-Factory Environment

    Directory of Open Access Journals (Sweden)

    Rong-Chang Chen

    2014-01-01

    Full Text Available In response to radically increasing competition, many manufacturers who produce time-sensitive products have expanded their production plants to worldwide sites. Given this environment, how to aggregate customer orders from around the globe and assign them quickly to the most appropriate plants is currently a crucial issue. This study proposes an effective method to solve the order assignment problem of companies with multiple plants distributed worldwide. A multiobjective genetic algorithm (MOGA is used to find solutions. To validate the effectiveness of the proposed approach, this study employs some real data, provided by a famous garment company in Taiwan, as a base to perform some experiments. In addition, the influences of orders with a wide range of quantities demanded are discussed. The results show that feasible solutions can be obtained effectively and efficiently. Moreover, if managers aim at lower total costs, they can divide a big customer order into more small manufacturing ones.

  7. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments and even more for flood loss modelling. State of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provide a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show, that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
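
    The key practical point — that each tree of a bagging ensemble supplies one loss estimate, so the ensemble yields a predictive distribution rather than a point value — can be reproduced with scikit-learn on synthetic data. The predictor variables and loss relation below are invented for illustration; this is not the BT-FLEMO model itself:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
# Hypothetical multi-variate flood loss data: water depth (m),
# inundation duration (h), building value (kEUR) -> relative loss.
n = 500
Xf = np.column_stack([rng.uniform(0, 3, n),
                      rng.uniform(1, 72, n),
                      rng.uniform(50, 500, n)])
loss = np.clip(0.2 * Xf[:, 0] + 0.001 * Xf[:, 1]
               + rng.normal(0, 0.05, n), 0, 1)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                         random_state=0).fit(Xf, loss)

# One prediction per tree yields a loss *distribution*, not a point value.
x_new = np.array([[1.5, 24.0, 120.0]])
per_tree = np.array([t.predict(x_new)[0] for t in model.estimators_])
print(per_tree.mean(), np.percentile(per_tree, [5, 95]))
```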

  8. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models; in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time, space, and processor bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  9. Probabilistic Structural Analysis Theory Development

    Science.gov (United States)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive than the finite element approach.

  10. ISSUES ASSOCIATED WITH PROBABILISTIC FAILURE MODELING OF DIGITAL SYSTEMS

    International Nuclear Information System (INIS)

    CHU, T.L.; MARTINEZ-GURIDI, G.; LIHNER, J.; OVERLAND, D.

    2004-01-01

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process of instrumentation and control (I and C) systems is based on deterministic requirements, e.g., single failure criteria, and defense in depth and diversity. Probabilistic considerations can be used as supplements to the deterministic process. The National Research Council has recommended development of methods for estimating failure probabilities of digital systems, including commercial off-the-shelf (COTS) equipment, for use in probabilistic risk assessment (PRA). NRC staff has developed informal qualitative and quantitative requirements for PRA modeling of digital systems. Brookhaven National Laboratory (BNL) has performed a review of the-state-of-the-art of the methods and tools that can potentially be used to model digital systems. The objectives of this paper are to summarize the review, discuss the issues associated with probabilistic modeling of digital systems, and identify potential areas of research that would enhance the state of the art toward a satisfactory modeling method that could be integrated with a typical probabilistic risk assessment

  11. Paternity assignment in the polyploid Acipenser dabryanus based on a novel microsatellite marker system.

    Directory of Open Access Journals (Sweden)

    Ya Liu

    Full Text Available Acipenser dabryanus is listed as Critically Endangered on the IUCN Red List and is a first-class protected animal in China. Fortunately, A. dabryanus specimens are being successfully bred in captivity for conservation. However, for effective ex situ conservation, we should be aware of the genetic diversity and the degree of relatedness of the individuals selected for breeding. In this study, we aimed to develop novel and reliable microsatellites for the genetic study of A. dabryanus. A total of 14,321 simple sequence repeats (SSRs) were detected by transcriptome sequencing and screening. We selected 20 novel and polymorphic microsatellites (non-dinucleotide) with good repeatability from the 100 tested loci for a subsequent genetic and paternity study. A set of captive broodstock (F1 stock, n = 43) and their offspring (F2 stock, n = 96) were used to examine the efficiency of the 20 SSRs for assigning parentage to offspring, with an allocation success of 91.7%. We also found that only a few families predominantly contributed to the progeny produced by the 43 breeders. In addition, mitochondrial DNA data showed that the captive broodstock (F1) individuals had a high probability of belonging to the same lineage, implying that a high level of inbreeding may have occurred in these individuals. Our research provides useful information on the genetic diversity and reproductive pattern of A. dabryanus, and the 20 SSRs developed in this study can be applied to future breeding programs to avoid inbreeding in this stock or other related species of Acipenseriformes.

  12. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically increased combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify the unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high-resolution sequences, for which video coding efficiency is crucial in real applications.
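
The decision logic of such an early-termination scheme can be sketched compactly: predict each partition mode's RD cost with an uncertainty interval and prune modes whose interval lies entirely above the best mode's upper bound. The predictor values and interval width below are invented for illustration; they stand in for the paper's trained RD model.

```python
# Confidence-interval early termination for partition mode decision --
# a sketch with invented numbers, not the paper's trained RD model.
import numpy as np

def rd_interval(cost, sigma, z=1.96):
    """Approximate confidence interval for a predicted RD cost."""
    return cost - z * sigma, cost + z * sigma

def surviving_modes(predictions):
    """predictions: dict mode -> (predicted RD cost, prediction std).
    Modes whose lower bound exceeds the best upper bound are terminated
    early; the survivors proceed to full rate-distortion optimization."""
    best_upper = min(rd_interval(c, s)[1] for c, s in predictions.values())
    return [m for m, (c, s) in predictions.items()
            if rd_interval(c, s)[0] <= best_upper]

modes = {"no_split": (100.0, 4.0), "quad": (92.0, 5.0),
         "bin_horizontal": (120.0, 6.0), "bin_vertical": (96.0, 5.5)}
print(surviving_modes(modes))   # "bin_horizontal" is pruned
```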

  13. The dialectical thinking about deterministic and probabilistic safety analysis

    International Nuclear Information System (INIS)

    Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong

    2005-01-01

    There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and practice has shown the deterministic method to be effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and structures of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are briefly reviewed and summarized. Based on the discussion of two application cases - one concerning changes to specific design provisions of the general design criteria (GDC), the other concerning the risk-informed categorization of structures, systems and components - it is concluded that the deterministic and probabilistic methods are dialectically unified: they are gradually merging into each other and are being used in coordination. (authors)

  14. Sequential backbone assignment based on dipolar amide-to-amide correlation experiments

    Energy Technology Data Exchange (ETDEWEB)

    Xiang, ShengQi; Grohe, Kristof; Rovó, Petra; Vasa, Suresh Kumar; Giller, Karin; Becker, Stefan; Linser, Rasmus, E-mail: rali@nmr.mpibpc.mpg.de [Max Planck Institute for Biophysical Chemistry, Department for NMR-Based Structural Biology (Germany)

    2015-07-15

    Proton detection in solid-state NMR has seen a tremendous increase in popularity in recent years. New experimental techniques allow one to exploit protons as an additional source of information on structure, dynamics, and protein interactions with their surroundings. In addition, sensitivity is generally improved and ambiguity in assignment experiments is reduced. We show here that, in the solid state, sequential amide-to-amide correlations turn out to be an excellent, complementary way to exploit amide shifts for unambiguous backbone assignment. For a general assessment, we compare amide-to-amide experiments with the more common {sup 13}C-shift-based methods. Exploiting efficient CP magnetization transfers rather than less efficient INEPT periods, our results suggest that the approach is very feasible for solid-state NMR.

  15. Sequential backbone assignment based on dipolar amide-to-amide correlation experiments

    International Nuclear Information System (INIS)

    Xiang, ShengQi; Grohe, Kristof; Rovó, Petra; Vasa, Suresh Kumar; Giller, Karin; Becker, Stefan; Linser, Rasmus

    2015-01-01

    Proton detection in solid-state NMR has seen a tremendous increase in popularity in recent years. New experimental techniques allow one to exploit protons as an additional source of information on structure, dynamics, and protein interactions with their surroundings. In addition, sensitivity is generally improved and ambiguity in assignment experiments is reduced. We show here that, in the solid state, sequential amide-to-amide correlations turn out to be an excellent, complementary way to exploit amide shifts for unambiguous backbone assignment. For a general assessment, we compare amide-to-amide experiments with the more common 13C-shift-based methods. Exploiting efficient CP magnetization transfers rather than less efficient INEPT periods, our results suggest that the approach is very feasible for solid-state NMR.

  16. Probabilistic Graph Layout for Uncertain Network Visualization.

    Science.gov (United States)

    Schulz, Christoph; Nocaj, Arlind; Goertler, Jochen; Deussen, Oliver; Brandes, Ulrik; Weiskopf, Daniel

    2017-01-01

    We present a novel uncertain network visualization technique based on node-link diagrams. Nodes expand spatially in our probabilistic graph layout, depending on the underlying probability distributions of edges. The visualization is created by computing a two-dimensional graph embedding that combines samples from the probabilistic graph. A Monte Carlo process is used to decompose a probabilistic graph into its possible instances and to continue with our graph layout technique. Splatting and edge bundling are used to visualize point clouds and network topology. The results provide insights into probability distributions for the entire network, not only for individual nodes and edges. We validate our approach using three data sets that represent a wide range of network types: synthetic data, protein-protein interactions from the STRING database, and travel times extracted from Google Maps. Our approach reveals general limitations of the force-directed layout and allows the user to recognize that some nodes of the graph are at a specific position just by chance.
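
A toy version of the Monte Carlo step can be written directly with networkx: sample concrete graph instances according to the edge probabilities, lay each one out, and inspect the resulting per-node position clouds (the paper combines the samples into a single embedding; here they are only aggregated).

```python
# Monte Carlo decomposition of a probabilistic graph into instances,
# each laid out with a force-directed algorithm -- a toy illustration.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
prob_edges = {("a", "b"): 0.9, ("b", "c"): 0.5,
              ("a", "c"): 0.2, ("c", "d"): 0.8}
nodes = ["a", "b", "c", "d"]

clouds = {v: [] for v in nodes}
for _ in range(200):                       # Monte Carlo instances
    G = nx.Graph()
    G.add_nodes_from(nodes)
    G.add_edges_from(e for e, p in prob_edges.items() if rng.random() < p)
    pos = nx.spring_layout(G, seed=42)     # force-directed layout
    for v, xy in pos.items():
        clouds[v].append(xy)

for v in nodes:                            # spread reflects edge uncertainty
    pts = np.array(clouds[v])
    print(v, "mean", pts.mean(axis=0).round(2), "std", pts.std(axis=0).round(2))
```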

  17. Some probabilistic aspects of fracture

    International Nuclear Information System (INIS)

    Thomas, J.M.

    1982-01-01

    Some probabilistic aspects of fracture in structural and mechanical components are examined. The principles of fracture mechanics, material quality and inspection uncertainty are formulated into a conceptual and analytical framework for prediction of failure probability. The role of probabilistic fracture mechanics in a more global context of risk and optimization of decisions is illustrated. An example, where Monte Carlo simulation was used to implement a probabilistic fracture mechanics analysis, is discussed. (orig.)
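
The Monte Carlo implementation referred to above follows a standard pattern: sample the uncertain inputs, evaluate the linear-elastic fracture criterion, and count exceedances. The distributions and parameter values in this sketch are illustrative assumptions, not those of the original analysis.

```python
# Monte Carlo probabilistic fracture mechanics: failure occurs when the
# stress intensity factor K exceeds the toughness K_Ic (LEFM).
# All distributions and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

a = rng.lognormal(np.log(1e-2), 0.4, n)      # crack depth [m]
sigma = rng.normal(200e6, 20e6, n)           # applied stress [Pa]
k_ic = rng.normal(60e6, 6e6, n)              # fracture toughness [Pa*sqrt(m)]

Y = 1.12                                     # geometry factor (surface crack)
K = Y * sigma * np.sqrt(np.pi * a)           # stress intensity factor
print(f"estimated failure probability: {np.mean(K > k_ic):.2e}")
```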

  18. OCA-P, PWR Vessel Probabilistic Fracture Mechanics

    International Nuclear Information System (INIS)

    Cheverton, R.D.; Ball, D.G.

    2001-01-01

    1 - Description of program or function: OCA-P is a probabilistic fracture-mechanics code prepared specifically for evaluating the integrity of pressurized-water reactor vessels subjected to overcooling-accident loading conditions. Based on linear-elastic fracture mechanics, it has two- and limited three-dimensional flaw capability, and can treat cladding as a discrete region. Both deterministic and probabilistic analyses can be performed. For deterministic analysis, it is possible to conduct a search for critical values of the fluence and the nil-ductility reference temperature corresponding to incipient initiation of the initial flaw. The probabilistic portion of OCA-P is based on Monte Carlo techniques, and simulated parameters include fluence, flaw depth, fracture toughness, nil-ductility reference temperature, and concentrations of copper, nickel, and phosphorus. Plotting capabilities include the construction of critical-crack-depth diagrams (deterministic analysis) and a variety of histograms (probabilistic analysis). 2 - Method of solution: OCA-P accepts as input the reactor primary-system pressure and the reactor pressure-vessel downcomer coolant temperature, as functions of time in the specified transient. Then, the wall temperatures and stresses are calculated as a function of time and radial position in the wall, and the fracture-mechanics analysis is performed to obtain the stress intensity factors as a function of crack depth and time in the transient. In a deterministic analysis, values of the static crack initiation toughness and the crack arrest toughness are also calculated for all crack depths and times in the transient. A comparison of these values permits an evaluation of flaw behavior. For a probabilistic analysis, OCA-P generates a large number of reactor pressure vessels, each with a different combination of the various values of the parameters involved in the analysis of flaw behavior. For each of these vessels, a deterministic fracture

  19. Genetic screening and testing in an episode-based payment model: preserving patient autonomy.

    Science.gov (United States)

    Sutherland, Sharon; Farrell, Ruth M; Lockwood, Charles

    2014-11-01

    The State of Ohio is implementing an episode-based payment model for perinatal care. All costs of care will be tabulated for each live birth and assigned to the delivering provider, creating a three-tiered model for reimbursement for care. Providers will be reimbursed as usual for care that is average in cost and quality, while instituting rewards or penalties for those outside the expected range in either domain. There are few exclusions, and all methods of genetic screening and diagnostic testing are included in the episode cost calculation as proposed. Prenatal ultrasonography, genetic screening, and diagnostic testing are critical components of the delivery of high-quality, evidence-based prenatal care. These tests provide pregnant women with key information about the pregnancy, which, in turn, allows them to work closely with their health care provider to determine optimal prenatal care. The concepts of informed consent and decision-making, cornerstones of the ethical practice of medicine, are founded on the principles of autonomy and respect for persons. These principles recognize that patients' rights to make choices and take actions are based on their personal beliefs and values. Given the personal nature of such decisions, it is critical that patients have unbarred access to prenatal genetic tests if they elect to use them as part of their prenatal care. The proposed restructuring of reimbursement creates a clear conflict between patient autonomy and physician financial incentives.

  20. Risk-Based Predictive Maintenance for Safety-Critical Systems by Using Probabilistic Inference

    Directory of Open Access Journals (Sweden)

    Tianhua Xu

    2013-01-01

    Full Text Available Risk-based maintenance (RBM) aims to improve maintenance planning and decision making by reducing the probability and consequences of equipment failure. A new predictive maintenance strategy that integrates a dynamic evolution model and risk assessment is proposed, which can be used to calculate the optimal maintenance time with minimal cost under safety constraints. The dynamic evolution model quantifies risks by using probabilistic inference with bucket elimination and gives the prospective degradation trend of a complex system. Based on the degradation trend, an optimal maintenance time can be determined by minimizing the expected maintenance cost per time unit. The effectiveness of the proposed method is validated and demonstrated on a collision accident of high-speed trains with obstacles in the presence of safety and cost constraints.
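
The last step, minimizing the expected maintenance cost per time unit over the candidate maintenance time, can be illustrated with a generic age-replacement formulation. A Weibull degradation model and relative costs are assumed here; the paper's train-specific evolution model is not reproduced.

```python
# Optimal maintenance time by minimizing expected cost per unit time --
# a generic age-replacement sketch with assumed Weibull degradation.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

beta, eta = 2.5, 1000.0          # assumed Weibull shape / scale [h]
c_p, c_f = 1.0, 12.0             # preventive vs. failure cost (relative)

def R(t):
    """Reliability (survival) function of the degradation model."""
    return np.exp(-(t / eta) ** beta)

def cost_rate(T):
    """Expected cost per unit time if maintenance is scheduled at age T."""
    expected_cycle_length = quad(R, 0.0, T)[0]
    return (c_p * R(T) + c_f * (1.0 - R(T))) / expected_cycle_length

opt = minimize_scalar(cost_rate, bounds=(1.0, 3000.0), method="bounded")
print(f"optimal maintenance interval: {opt.x:.0f} h "
      f"(cost rate {opt.fun:.4f}/h)")
```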

  1. Probabilistic Capacity of a Grid connected Wind Farm

    DEFF Research Database (Denmark)

    Zhao, Menghua; Chen, Zhe; Blaabjerg, Frede

    2005-01-01

    This paper proposes a method to find the maximum acceptable wind power injection with regard to the thermal limits, steady state stability limits and voltage limits of the grid system. The probabilistic wind power is introduced based on the probability distribution of wind speed. Based on Power Transfer Distribution Factor (PTDF) and voltage sensitivities, a predictor-corrector method is suggested to calculate the acceptable active power injection. Then this method is combined with the probabilistic model of wind power to compute the allowable capacity of the wind farm. Finally, an example is presented to test this method. It is concluded that the proposed method is a feasible, fast, and accurate approach to finding the size of a wind farm.
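
The probabilistic wind power model can be sketched by pushing a Weibull wind-speed distribution through a turbine power curve and reading off exceedance probabilities for a given injection limit. The power-curve shape, Weibull parameters, and grid limit below are assumptions; the paper's PTDF and voltage-sensitivity machinery is not reproduced.

```python
# Probabilistic wind power injection from a Weibull wind-speed model and
# a simplified turbine power curve -- parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
k, c = 2.0, 8.0                           # assumed Weibull shape/scale [m/s]
v = c * rng.weibull(k, size=100_000)      # sampled wind speeds

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Simplified power curve [MW]: cubic ramp between cut-in and rated,
    constant up to cut-out, zero elsewhere."""
    ramp = p_rated * ((v - v_in) / (v_rated - v_in)) ** 3
    p = np.where((v >= v_in) & (v < v_rated), ramp, 0.0)
    return np.where((v >= v_rated) & (v < v_out), p_rated, p)

p_farm = 40 * turbine_power(v)            # 40 identical turbines [MW]
limit = 60.0                              # hypothetical acceptable injection
print(f"P(injection > {limit} MW) = {np.mean(p_farm > limit):.3f}")
```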

  2. A fuzzy-based particle swarm optimisation approach for task assignment in home healthcare

    Directory of Open Access Journals (Sweden)

    Mutingi, Michael

    2014-11-01

    Full Text Available Home healthcare (HHC) organisations provide coordinated healthcare services to patients at their homes. Given the ever-increasing need for home-based care, the assignment of tasks to available healthcare staff is a common and complex problem in homecare organisations. Designing high quality task schedules is critical for improving worker morale, job satisfaction, service efficiency, service quality, and competitiveness over the long term. The desire is to provide high quality task assignment schedules that satisfy the patient, the care worker, and the management. This translates to maximising schedule fairness in terms of workload assignments, avoiding task time window violations, and meeting management goals as much as possible. However, in practice, these desires are often subjective, as they involve imprecise human perceptions. This paper develops a fuzzy multi-criteria particle swarm optimisation (FPSO) approach for task assignment in a home healthcare setting in a fuzzy environment. The proposed approach uses a fuzzy evaluation method from a multi-criteria point of view. Results from illustrative computational experiments show that the approach is promising.

  3. Probabilistic Output Analysis by Program Manipulation

    DEFF Research Database (Denmark)

    Rosendahl, Mads; Kirkeby, Maja Hanne

    2015-01-01

    The aim of a probabilistic output analysis is to derive a probability distribution of possible output values for a program from a probability distribution of its input. We present a method for performing static output analysis, based on program transformation techniques. It generates a probability...

  4. Ambient Surveillance by Probabilistic-Possibilistic Perception

    NARCIS (Netherlands)

    Bittermann, M.S.; Ciftcioglu, O.

    2013-01-01

    A method for quantifying ambient surveillance is presented, which is based on probabilistic-possibilistic perception. The human surveillance of a scene through observing camera-sensed images on a monitor is modeled in three steps. First, immersion of the observer is simulated by modeling perception

  5. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method

    DEFF Research Database (Denmark)

    Valentin, Jan B.; Andreetta, Christian; Boomsma, Wouter

    2014-01-01

    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length s... The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications. © 2013 Wiley Periodicals, Inc.

  6. Quantum probabilistic logic programming

    Science.gov (United States)

    Balu, Radhakrishnan

    2015-05-01

    We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature to capture relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case, H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples that combine statistical ensembles and predicates of first order logic to reason about situations involving uncertainty.

  7. Characteristics of the evolution of cooperation by the probabilistic peer-punishment based on the difference of payoff

    International Nuclear Information System (INIS)

    Ohdaira, Tetsushi

    2017-01-01

    Highlights: • The probabilistic peer-punishment based on the difference of payoff is introduced. • The characteristics of the evolution of cooperation are studied. • Those characteristics present a significant contribution to knowledge. - Abstract: Of the two types of costly punishment, peer-punishment in particular is considered to decrease the average payoff of all players, as pool-punishment does, and to facilitate antisocial punishment as a result of natural selection. To address these problems, the author has proposed probabilistic peer-punishment based on the difference of payoff. Under limited conditions, the proposed peer-punishment has shown positive effects on the evolution of cooperation and has increased the average payoff of all players. Building on those findings, this study exhibits the characteristics of the evolution of cooperation under the proposed peer-punishment. These characteristics yield the significant insight that, for the evolution of cooperation, a limited number of players should cause severe damage to defectors at large expense to their own payoff when connections between players are sparse, whereas a greater number of players should share the responsibility of punishing defectors at relatively small expense to their payoff when connections between players are dense.

  8. Probabilistic predictive modelling of carbon nanocomposites for medical implants design.

    Science.gov (United States)

    Chua, Matthew; Chui, Chee-Kong

    2015-04-01

    Modelling the mechanical properties of carbon nanocomposites based on input variables such as the percentage weight of Carbon Nanotube (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagent properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of carbon nanocomposite samples with different CNT weight fractions displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probabilistic density function of the experimental data, a two-factor Central Composite Design (CCD) experimental matrix, based on strain and CNT weight fraction inputs with their corresponding stress distributions, was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method were subsequently validated with further tensile experiments and Finite Element (FE) studies. The method was then demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties of implants of various compositions based on experimental data from samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
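
The response-surface step can be sketched as follows: fit a quadratic surface for mean stress over the two CCD factors (strain, CNT weight fraction), then Monte Carlo sample a normal scatter around the surface prediction. The synthetic data and the 5% coefficient of variation are placeholders for the paper's tensile measurements.

```python
# Quadratic response surface over (strain, CNT wt%) plus Monte Carlo
# sampling of the stress scatter -- synthetic stand-in data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Synthetic "experiments": design points of strain [-] and CNT wt [%].
X = np.array([[e, w] for e in (0.01, 0.02, 0.03) for w in (1.0, 3.0, 5.0)])
stress_mean = 2000.0 * X[:, 0] * (1.0 + 0.1 * X[:, 1])   # fake data [MPa]

poly = PolynomialFeatures(degree=2)
surface = LinearRegression().fit(poly.fit_transform(X), stress_mean)

# Predictive distribution at a new design point (5% CoV assumed).
x_new = poly.transform([[0.025, 4.0]])
mu = surface.predict(x_new)[0]
draws = rng.normal(mu, 0.05 * mu, size=10_000)
print(f"stress ~ {mu:.1f} MPa, 95% range "
      f"[{np.quantile(draws, 0.025):.1f}, {np.quantile(draws, 0.975):.1f}]")
```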

  9. Towards fully automated structure-based NMR resonance assignment of 15N-labeled proteins from automatically picked peaks

    KAUST Repository

    Jang, Richard; Gao, Xin; Li, Ming

    2011-01-01

    In NMR resonance assignment, an indispensable step in NMR protein studies, manually processed peaks from both N-labeled and C-labeled spectra are typically used as inputs. However, the use of homologous structures can allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data. We propose a novel integer programming framework for structure-based backbone resonance assignment using N-labeled data. The core consists of a pair of integer programming models: one for spin system forming and amino acid typing, and the other for backbone resonance assignment. The goal is to perform the assignment directly from spectra without any manual intervention via automatically picked peaks, which are much noisier than manually picked peaks, so methods must be error-tolerant. In the case of semi-automated/manually processed peak data, we compare our system with the Xiong-Pandurangan-Bailey-Kellogg contact replacement (CR) method, which is the most error-tolerant method for structure-based resonance assignment. Our system, on average, reduces the error rate of the CR method fivefold on their data set. In addition, by using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for human ubiquitin, where the typing accuracy is 83%, we achieve 91% accuracy, compared to the 59% accuracy obtained without correcting for such errors. In the case of automatically picked peaks, using assignment information from yeast ubiquitin, we achieve a fully automatic assignment with 97% accuracy. To our knowledge, this is the first system that can achieve fully automatic structure-based assignment directly from spectra. This has implications in NMR protein mutant studies, where the assignment step is repeated for each mutant. © Copyright 2011, Mary Ann Liebert, Inc.

  10. Towards fully automated structure-based NMR resonance assignment of 15N-labeled proteins from automatically picked peaks

    KAUST Repository

    Jang, Richard

    2011-03-01

    In NMR resonance assignment, an indispensable step in NMR protein studies, manually processed peaks from both N-labeled and C-labeled spectra are typically used as inputs. However, the use of homologous structures can allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data. We propose a novel integer programming framework for structure-based backbone resonance assignment using N-labeled data. The core consists of a pair of integer programming models: one for spin system forming and amino acid typing, and the other for backbone resonance assignment. The goal is to perform the assignment directly from spectra without any manual intervention via automatically picked peaks, which are much noisier than manually picked peaks, so methods must be error-tolerant. In the case of semi-automated/manually processed peak data, we compare our system with the Xiong-Pandurangan-Bailey-Kellogg contact replacement (CR) method, which is the most error-tolerant method for structure-based resonance assignment. Our system, on average, reduces the error rate of the CR method fivefold on their data set. In addition, by using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for human ubiquitin, where the typing accuracy is 83%, we achieve 91% accuracy, compared to the 59% accuracy obtained without correcting for such errors. In the case of automatically picked peaks, using assignment information from yeast ubiquitin, we achieve a fully automatic assignment with 97% accuracy. To our knowledge, this is the first system that can achieve fully automatic structure-based assignment directly from spectra. This has implications in NMR protein mutant studies, where the assignment step is repeated for each mutant. © Copyright 2011, Mary Ann Liebert, Inc.

  11. Probabilistic methods used in NUSS

    International Nuclear Information System (INIS)

    Fischer, J.; Giuliani, P.

    1985-01-01

    Probabilistic considerations are used implicitly or explicitly in all technical areas. In the NUSS codes and guides, the two areas of design and siting are those where most use is made of these concepts. A brief review of the relevant documents in these two areas is made in this paper. It covers the documents where probabilistic considerations are either implied or where probabilistic approaches are recommended in the evaluation of situations and of events. In the siting guides, the review mainly covers the areas of seismic, hydrological and external man-made events analysis, as well as some aspects of meteorological extreme events analysis. Probabilistic methods are recommended in the design guides, but they are not made a requirement. There are several reasons for this, mainly the lack of reliable data and the absence of quantitative safety limits or goals against which to judge the design analysis. As far as practical, engineering judgement should be backed up by quantitative probabilistic analysis. Examples are given and the concept of design basis as used in NUSS design guides is explained. (author)

  12. Probabilistic thinking to support early evaluation of system quality: through requirement analysis

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Bonnema, Gerrit Maarten

    2014-01-01

    This paper focuses on coping with system quality in the early phases of design, where there is a lack of knowledge about a system, its functions, or its architecture. The paper encourages knowledge-based evaluation of system quality and promotes probabilistic thinking. It states that probabilistic thinking

  13. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical design of the bridge. Model and resistance uncertainties were taken into account using the LHS simulation method.
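
The LHS step mentioned at the end works as in the following minimal sketch: stratified uniform samples are mapped through the marginal distributions of the uncertain variables and pushed through the limit state. The two normal marginals and the resistance-minus-load limit state are generic placeholders, not the bridge model.

```python
# Latin Hypercube sampling for propagating model and resistance
# uncertainties through a limit state -- a generic placeholder model.
import numpy as np
from scipy.stats import norm, qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)                    # stratified samples in [0,1)^2

R = norm(loc=500.0, scale=50.0).ppf(u[:, 0])  # resistance [kN], assumed
E = norm(loc=350.0, scale=70.0).ppf(u[:, 1])  # load effect [kN], assumed

g = R - E                                     # limit-state margin
print(f"P(failure) = {np.mean(g < 0):.4f}")
```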

  14. Constant Jacobian Matrix-Based Stochastic Galerkin Method for Probabilistic Load Flow

    Directory of Open Access Journals (Sweden)

    Yingyun Sun

    2016-03-01

    Full Text Available An intrusive spectral method for probabilistic load flow (PLF) is proposed in this paper, which can handle the uncertainties arising from renewable energy integration. Generalized polynomial chaos (gPC) expansions of dependent random variables are utilized to build a spectral stochastic representation of the PLF model. Instead of solving the coupled PLF model with a traditional, cumbersome method, a modified stochastic Galerkin (SG) method is proposed based on the P-Q decoupling properties of load flow in power systems. By introducing two pre-calculated constant sparse Jacobian matrices, the computational burden of the SG method is significantly reduced. Two cases, the IEEE 14-bus and IEEE 118-bus systems, are used to verify the computation speed and efficiency of the proposed method.

  15. Optimization of linear consecutive-k-out-of-n system with a Birnbaum importance-based genetic algorithm

    International Nuclear Information System (INIS)

    Cai, Zhiqiang; Si, Shubin; Sun, Shudong; Li, Caitao

    2016-01-01

    The optimization of a linear consecutive-k-out-of-n (Lin/Con/k/n) system is to find an optimal component arrangement in which n components are assigned to n positions so as to maximize the system reliability. Given the interchangeability of components in practical systems, the optimization of Lin/Con/k/n systems is becoming widely applied in engineering practice; it is also a typical component assignment problem studied by many researchers. This paper proposes a Birnbaum importance-based genetic algorithm (BIGA) to search for a near-globally-optimal solution for Lin/Con/k/n systems. First, the operation procedures and corresponding execution methods of BIGA are described in detail. Then, comprehensive simulation experiments are conducted on both small and large systems to evaluate the performance of BIGA in comparison with the Birnbaum importance-based two-stage approach and the Birnbaum importance-based genetic local search algorithm. Thirdly, further experiments are provided to discuss the applicability of BIGA to Lin/Con/k/n systems with different k and n. Finally, a case study on an oil transportation system demonstrates the application of BIGA to the optimization of Lin/Con/k/n systems. - Highlights: • BIGA integrates BI and GA to solve the Lin/Con/k/n systems optimization problems. • The experiment results show that the BIGA performs well in most conditions. • Suggestions are given for the application of BIGA and BITA with different k and n. • The application procedure of BIGA is demonstrated by the oil transportation system.
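
The two evaluation routines that any Birnbaum importance-based optimizer needs (the system reliability of an arrangement, and the Birnbaum importance of each position) can be written compactly; the GA wrapper itself is omitted here. This is a generic sketch, not the BIGA code.

```python
# Fitness core for Birnbaum importance-based optimization of a linear
# consecutive-k-out-of-n:F system -- a generic sketch, not the BIGA code.

def lin_con_reliability(p, k):
    """Reliability of a Lin/Con/k/n:F system with independent component
    reliabilities p (the system fails iff k consecutive components fail).
    DP state: probability the system is alive with a trailing failure
    run of length r (r = 0..k-1)."""
    state = [1.0] + [0.0] * (k - 1)
    for pi in p:
        new = [0.0] * k
        new[0] = pi * sum(state)              # component works: run resets
        for r in range(1, k):                 # component fails: run grows
            new[r] = (1.0 - pi) * state[r - 1]
        state = new                           # mass reaching run k is lost
    return sum(state)

def birnbaum_importance(p, k, i):
    """I_B(i) = R(system | p_i = 1) - R(system | p_i = 0)."""
    hi, lo = list(p), list(p)
    hi[i], lo[i] = 1.0, 0.0
    return lin_con_reliability(hi, k) - lin_con_reliability(lo, k)

comps = [0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
print("reliability:", round(lin_con_reliability(comps, k=2), 4))
print("importances:", [round(birnbaum_importance(comps, 2, i), 4)
                       for i in range(len(comps))])
```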

  16. Probabilistic Resource Analysis by Program Transformation

    DEFF Research Database (Denmark)

    Kirkeby, Maja Hanne; Rosendahl, Mads

    2016-01-01

    The aim of a probabilistic resource analysis is to derive a probability distribution of possible resource usage for a program from a probability distribution of its input. We present an automated multi-phase rewriting based method to analyze programs written in a subset of C. It generates...

  17. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    International Nuclear Information System (INIS)

    Elkhoraibi, T.; Hashemi, A.; Ostadan, F.

    2014-01-01

    Soil-structure interaction (SSI) is a major step in the seismic design of the massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly the In-Structure-Response Spectra (ISRS), with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process, whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  18. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    Energy Technology Data Exchange (ETDEWEB)

    Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.

    2014-04-01

    Soil-structure interaction (SSI) is a major step in the seismic design of the massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly the In-Structure-Response Spectra (ISRS), with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process, whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  19. Optimization of the test intervals of a nuclear safety system by genetic algorithms, solution clustering and fuzzy preference assignment

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a procedure is developed for identifying a number of representative solutions manageable for decision-making in a multiobjective optimization problem concerning the test intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are identified by a genetic algorithm and then clustered by subtractive clustering into 'families'. On the basis of the decision maker's preferences, each family is then synthetically represented by a 'head of the family' solution. This is done by introducing a scoring system that ranks the solutions with respect to the different objectives: a fuzzy preference assignment is employed for this purpose. Level Diagrams are then used to represent, analyze and interpret the Pareto Fronts reduced to the head-of-the-family solutions.

  20. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimisation (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.

  1. Probabilistic simulation applications to reliability assessments

    International Nuclear Information System (INIS)

    Miller, Ian; Nutt, Mark W.; Hill, Ralph S. III

    2003-01-01

    Probabilistic risk/reliability (PRA) analyses for engineered systems are conventionally based on fault-tree methods. These methods are mature and efficient, and are well suited to systems consisting of interacting components with known, low probabilities of failure. Even complex systems, such as nuclear power plants or aircraft, are modeled by the careful application of these approaches. However, for systems that may evolve in complex and nonlinear ways, and where the performance of components may be a sensitive function of the history of their working environments, fault-tree methods can be very demanding. This paper proposes an alternative method of evaluating such systems, based on probabilistic simulation using intelligent software objects to represent the components of such systems. Using a Monte Carlo approach, simulation models can be constructed from relatively simple interacting objects that capture the essential behavior of the components that they represent. Such models are capable of reflecting the complex behaviors of the systems that they represent in a natural and realistic way. (author)

  2. Evaluation of Probabilistic Disease Forecasts.

    Science.gov (United States)

    Hughes, Gareth; Burnett, Fiona J

    2017-10-01

    The statistical evaluation of probabilistic disease forecasts often involves calculation of metrics defined conditionally on disease status, such as sensitivity and specificity. However, for the purpose of disease management decision making, metrics defined conditionally on the result of the forecast (predictive values) are also important, although less frequently reported. In this context, the application of scoring rules in the evaluation of probabilistic disease forecasts is discussed. An index of separation with application in the evaluation of probabilistic disease forecasts, described in the clinical literature, is also considered and its relation to scoring rules illustrated. Scoring rules provide a principled basis for the evaluation of probabilistic forecasts used in plant disease management. In particular, the decomposition of scoring rules into interpretable components is an advantageous feature of their application in the evaluation of disease forecasts.
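
For a binary disease forecast the most common proper scoring rule is the Brier score, whose Murphy decomposition separates the interpretable components mentioned above (reliability, resolution, uncertainty). A small sketch, with an illustrative binning scheme and simulated forecasts:

```python
# Brier score and its Murphy decomposition for probabilistic binary
# forecasts; the binning scheme and simulated data are illustrative.
import numpy as np

def brier_decomposition(p, y, n_bins=10):
    """p: forecast probabilities, y: observed outcomes (0/1).
    Returns (Brier, reliability, resolution, uncertainty);
    Brier ~= reliability - resolution + uncertainty."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    brier = np.mean((p - y) ** 2)
    base_rate = y.mean()
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    rel = res = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            w = m.mean()
            rel += w * (p[m].mean() - y[m].mean()) ** 2   # calibration
            res += w * (y[m].mean() - base_rate) ** 2     # discrimination
    return brier, rel, res, base_rate * (1.0 - base_rate)

rng = np.random.default_rng(0)
p = rng.uniform(size=500)
y = (rng.uniform(size=500) < p).astype(int)   # well-calibrated forecaster
print([round(x, 4) for x in brier_decomposition(p, y)])
```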

  3. Epidemiological tracking and population assignment of the non-clonal bacterium, Burkholderia pseudomallei.

    Science.gov (United States)

    Dale, Julia; Price, Erin P; Hornstra, Heidie; Busch, Joseph D; Mayo, Mark; Godoy, Daniel; Wuthiekanun, Vanaporn; Baker, Anthony; Foster, Jeffrey T; Wagner, David M; Tuanyok, Apichai; Warner, Jeffrey; Spratt, Brian G; Peacock, Sharon J; Currie, Bart J; Keim, Paul; Pearson, Talima

    2011-12-01

    Rapid assignment of bacterial pathogens into predefined populations is an important first step for epidemiological tracking. For clonal species, a single allele can theoretically define a population. For non-clonal species such as Burkholderia pseudomallei, however, shared allelic states between distantly related isolates make it more difficult to identify population defining characteristics. Two distinct B. pseudomallei populations have been previously identified using multilocus sequence typing (MLST). These populations correlate with the major foci of endemicity (Australia and Southeast Asia). Here, we use multiple Bayesian approaches to evaluate the compositional robustness of these populations, and provide assignment results for MLST sequence types (STs). Our goal was to provide a reference for assigning STs to an established population without the need for further computational analyses. We also provide allele frequency results for each population to enable estimation of population assignment even when novel STs are discovered. The ability for humans and potentially contaminated goods to move rapidly across the globe complicates the task of identifying the source of an infection or outbreak. Population genetic dynamics of B. pseudomallei are particularly complicated relative to other bacterial pathogens, but the work here provides the ability for broad scale population assignment. As there is currently no independent empirical measure of successful population assignment, we provide comprehensive analytical details of our comparisons to enable the reader to evaluate the robustness of population designations and assignments as they pertain to individual research questions. Finer scale subdivision and verification of current population compositions will likely be possible with genotyping data that more comprehensively samples the genome. The approach used here may be valuable for other non-clonal pathogens that lack simple group-defining genetic characteristics.

  4. Epidemiological tracking and population assignment of the non-clonal bacterium, Burkholderia pseudomallei.

    Directory of Open Access Journals (Sweden)

    Julia Dale

    2011-12-01

    Full Text Available Rapid assignment of bacterial pathogens into predefined populations is an important first step for epidemiological tracking. For clonal species, a single allele can theoretically define a population. For non-clonal species such as Burkholderia pseudomallei, however, shared allelic states between distantly related isolates make it more difficult to identify population defining characteristics. Two distinct B. pseudomallei populations have been previously identified using multilocus sequence typing (MLST). These populations correlate with the major foci of endemicity (Australia and Southeast Asia). Here, we use multiple Bayesian approaches to evaluate the compositional robustness of these populations, and provide assignment results for MLST sequence types (STs). Our goal was to provide a reference for assigning STs to an established population without the need for further computational analyses. We also provide allele frequency results for each population to enable estimation of population assignment even when novel STs are discovered. The ability for humans and potentially contaminated goods to move rapidly across the globe complicates the task of identifying the source of an infection or outbreak. Population genetic dynamics of B. pseudomallei are particularly complicated relative to other bacterial pathogens, but the work here provides the ability for broad scale population assignment. As there is currently no independent empirical measure of successful population assignment, we provide comprehensive analytical details of our comparisons to enable the reader to evaluate the robustness of population designations and assignments as they pertain to individual research questions. Finer scale subdivision and verification of current population compositions will likely be possible with genotyping data that more comprehensively samples the genome. The approach used here may be valuable for other non-clonal pathogens that lack simple group-defining genetic
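
The allele-frequency tables provided by the study support exactly this kind of calculation: given a new ST's alleles, compute its likelihood under each population's frequencies and convert to posterior assignment probabilities. A toy sketch with invented frequencies for two loci (the real analysis uses seven MLST loci and Bayesian clustering):

```python
# Toy Bayesian population assignment of a multilocus sequence type from
# per-population allele frequencies -- all numbers invented.
import numpy as np

# freq[population][locus] maps allele -> frequency in that population.
freq = {
    "Australia":      [{1: 0.70, 2: 0.30}, {1: 0.10, 3: 0.90}],
    "Southeast Asia": [{1: 0.20, 2: 0.80}, {1: 0.85, 3: 0.15}],
}
prior = {"Australia": 0.5, "Southeast Asia": 0.5}
EPS = 1e-3                          # floor for alleles unseen in a population

def assign(st):
    """st: tuple of observed alleles, one per locus."""
    logpost = {pop: np.log(prior[pop])
                    + sum(np.log(loci.get(a, EPS))
                          for loci, a in zip(freq[pop], st))
               for pop in freq}
    z = np.logaddexp.reduce(list(logpost.values()))
    return {pop: float(np.exp(lp - z)) for pop, lp in logpost.items()}

print(assign((1, 3)))   # strongly favors Australia
print(assign((2, 1)))   # strongly favors Southeast Asia
```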

  5. Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair.

    Science.gov (United States)

    Lee, Young-Joo; Kim, Robin E; Suh, Wonho; Park, Kiwon

    2017-04-24

    Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, bridge maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because there are various sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a railway bridge and update the life information based on the results of local inspections and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information in various inspection scenarios. However, this approach can handle only a constant-amplitude load and has limitations in considering a cyclic load with varying amplitude levels, which is the major loading pattern generated by train traffic. In addition, it is not feasible to update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed.
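
The effect of a "no crack found" inspection on a probabilistic fatigue life can be illustrated with a simple grid-based Bayes update; the prior, detection model, and numbers below are assumptions for illustration, not the system-reliability formulation of the paper.

```python
# Grid-based Bayesian update of a fatigue-life distribution after an
# inspection that found no damage -- illustrative assumptions throughout.
import numpy as np

life = np.linspace(1.0, 200.0, 2000)            # fatigue life N [years]
prior = np.exp(-0.5 * ((np.log(life) - np.log(60.0)) / 0.5) ** 2) / life
prior /= prior.sum()                            # lognormal-shaped prior

t_insp = 30.0                                   # inspection at year 30
# Assumed detection model: damage is likely to be found if the true life
# N is close to (or below) the inspection time.
p_detect = 1.0 / (1.0 + np.exp(-(t_insp - 0.7 * life) / 5.0))
likelihood_no_find = 1.0 - p_detect

posterior = prior * likelihood_no_find
posterior /= posterior.sum()

mean = lambda w: float((life * w).sum())
print(f"prior mean life {mean(prior):.1f} y -> "
      f"posterior mean {mean(posterior):.1f} y after a clean inspection")
```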

  6. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Mustafa Sacit; Flanagan, George F. [ORNL]; Poore III, Willis P. [ORNL]; Muhlheim, Michael David [ORNL]

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  7. Probabilistic Models for Solar Particle Events

    Science.gov (United States)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  8. Probabilistic design framework for sustainable repair and rehabilitation of civil infrastructure

    DEFF Research Database (Denmark)

    Lepech, Michael; Geiker, Mette Rica; Stang, Henrik

    2011-01-01

    This paper presents a probabilistic-based framework for the design of civil infrastructure repair and rehabilitation to achieve targeted improvements in sustainability indicators. The framework consists of two types of models: (i) service life prediction models combining one or several deteriorat...

  9. Probabilistic Tractography of the Cranial Nerves in Vestibular Schwannoma.

    Science.gov (United States)

    Zolal, Amir; Juratli, Tareq A; Podlesek, Dino; Rieger, Bernhard; Kitzler, Hagen H; Linn, Jennifer; Schackert, Gabriele; Sobottka, Stephan B

    2017-11-01

    Multiple recent studies have reported on diffusion tensor-based fiber tracking of cranial nerves in vestibular schwannoma, with conflicting results as to the accuracy of the method and the occurrence of cochlear nerve depiction. Probabilistic nontensor-based tractography might offer advantages in terms of better extraction of directional information from the underlying data in cranial nerves, which are of subvoxel size. Twenty-one patients with large vestibular schwannomas were recruited. The probabilistic tracking was run preoperatively and the position of the potential depictions of the facial and cochlear nerves was estimated postoperatively by 3 independent observers in a blinded fashion. The true position of the nerve was determined intraoperatively by the surgeon. Thereafter, the imaging-based estimated position was compared with the intraoperatively determined position. Tumor size, cystic appearance, and postoperative House-Brackmann score were analyzed with regard to the accuracy of the depiction of the nerves. The probabilistic tracking showed a connection that correlated to the position of the facial nerve in 81% of the cases and to the position of the cochlear nerve in 33% of the cases. Altogether, the resulting depiction did not correspond to the intraoperative position of any of the nerves in 3 cases. In a majority of cases, the position of the facial nerve, but not of the cochlear nerve, could be estimated by evaluation of the probabilistic tracking results. However, false depictions not corresponding to any nerve do occur and cannot be discerned as such from the image only. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. A Unified Probabilistic Framework for Dose-Response Assessment of Human Health Effects.

    Science.gov (United States)

    Chiu, Weihsueh A; Slob, Wout

    2015-12-01

    When chemical health hazards have been identified, probabilistic dose-response assessment ("hazard characterization") quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. We developed a unified framework for probabilistic dose-response assessment. We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose-response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, "effect metrics" can be specified to define "toxicologically equivalent" sizes for this underlying individual response; and d) dose-response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose-response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Probabilistically derived exposure limits are based on estimating a "target human dose" (HD_M^I), which requires risk management-informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HD_M^I values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%-10% effect sizes. Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions.

  11. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device

    Directory of Open Access Journals (Sweden)

    Xiang He

    2015-12-01

    Full Text Available Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer, wireless signal strength indicators (WiFi, Bluetooth, Zigbee, and visual sensors (LiDAR, camera. People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design.
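    A minimal sketch of the kind of multimodal particle-filter update described above, assuming a 2-D position state, a motion-sensor displacement for the predict step, and an invented WiFi-fingerprint likelihood for the fusion step; none of these models are taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def pf_step(particles, weights, motion_delta, sensor_likelihoods):
            """One predict/update cycle of a simplified multimodal particle
            filter: propagate 2-D position particles with a motion-sensor
            estimate, then multiply in the likelihood of each observation."""
            # Predict: dead-reckoning displacement plus diffusion noise.
            particles = particles + motion_delta + rng.normal(0, 0.1, particles.shape)
            # Update: fuse sensors by multiplying their likelihoods per particle.
            for lik in sensor_likelihoods:
                weights = weights * lik(particles)
            weights = weights / weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
                idx = rng.choice(len(weights), size=len(weights), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights

        # Hypothetical WiFi fingerprint likelihood centred on position (2, 3).
        wifi = lambda p: np.exp(-np.sum((p - np.array([2.0, 3.0]))**2, axis=1))
        parts = rng.uniform(0, 5, size=(500, 2))
        w = np.full(500, 1 / 500)
        parts, w = pf_step(parts, w, np.array([0.1, 0.0]), [wifi])
        print("position estimate:", np.average(parts, weights=w, axis=0))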

  12. The evolution of the mitochondrial genetic code in arthropods revisited.

    Science.gov (United States)

    Abascal, Federico; Posada, David; Zardoya, Rafael

    2012-04-01

    A variant of the invertebrate mitochondrial genetic code was previously identified in arthropods (Abascal et al. 2006a, PLoS Biol 4:e127) in which, instead of translating the AGG codon as serine, as in other invertebrates, some arthropods translate AGG as lysine. Here, we revisit the evolution of the genetic code in arthropods taking into account that (1) the number of arthropod mitochondrial genomes sequenced has tripled since the original findings were published; (2) the phylogeny of arthropods has recently been resolved with confidence for many groups; and (3) sophisticated probabilistic methods can be applied to analyze the evolution of the genetic code in arthropod mitochondria. According to our analyses, evolutionary shifts in the genetic code have been more common than previously inferred, with many taxonomic groups displaying two alternative codes. Ancestral character-state reconstruction using probabilistic methods confirmed that the arthropod ancestor most likely translated AGG as lysine. Point mutations at tRNA-Lys and tRNA-Ser correlated with the meaning of the AGG codon. In addition, we identified three variables (GC content, number of AGG codons, and taxonomic information) that best explain the use of each of the two alternative genetic codes.

  13. The Genetic Privacy Act and commentary

    Energy Technology Data Exchange (ETDEWEB)

    Annas, G.J.; Glantz, L.H.; Roche, P.A.

    1995-02-28

    The Genetic Privacy Act is a proposal for federal legislation. The Act is based on the premise that genetic information is different from other types of personal information in ways that require special protection. The DNA molecule holds an extensive amount of currently indecipherable information. The major goal of the Human Genome Project is to decipher this code so that the information it contains is accessible. The privacy question is, accessible to whom? The highly personal nature of the information contained in DNA can be illustrated by thinking of DNA as containing an individual's "future diary." A diary is perhaps the most personal and private document a person can create. It contains a person's innermost thoughts and perceptions, and is usually hidden and locked to assure its secrecy. Diaries describe the past. The information in one's genetic code can be thought of as a coded probabilistic future diary because it describes an important part of a unique and personal future. This document presents an introduction to the proposal for federal legislation, the Genetic Privacy Act; a copy of the proposed act; and commentary.

  14. An automated framework for NMR resonance assignment through simultaneous slice picking and spin system forming

    KAUST Repository

    Abbas, Ahmed

    2014-04-19

    Despite significant advances in automated nuclear magnetic resonance-based protein structure determination, the high numbers of false positives and false negatives among the peaks selected by fully automated methods remain a problem. These false positives and negatives impair the performance of resonance assignment methods. One of the main reasons for this problem is that the computational research community often considers peak picking and resonance assignment to be two separate problems, whereas spectroscopists use expert knowledge to pick peaks and assign their resonances at the same time. We propose a novel framework that simultaneously conducts slice picking and spin system forming, an essential step in resonance assignment. Our framework then employs a genetic algorithm, directed by both connectivity information and amino acid typing information from the spin systems, to assign the spin systems to residues. The inputs to our framework can be as few as two commonly used spectra, i.e., CBCA(CO)NH and HNCACB. Different from the existing peak picking and resonance assignment methods that treat peaks as the units, our method is based on 'slices', which are one-dimensional vectors in three-dimensional spectra that correspond to certain (N, H) values. Experimental results on both benchmark simulated data sets and four real protein data sets demonstrate that our method significantly outperforms the state-of-the-art methods while using fewer spectra than those methods. Our method is freely available at http://sfb.kaust.edu.sa/Pages/Software.aspx. © 2014 Springer Science+Business Media.
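    To make the assignment step concrete, here is a toy genetic-algorithm sketch that assigns spin systems to residues by maximizing a fitness combining amino-acid typing scores with sequential-connectivity scores; the score matrices and GA settings are invented for illustration, not the paper's.

        import random
        random.seed(0)

        # Toy scores: type_score[s][r] = how well spin system s matches residue r;
        # link[s1][s2] = sequential-connectivity evidence between spin systems.
        S, R = 6, 6
        type_score = [[random.random() for _ in range(R)] for _ in range(S)]
        link = [[random.random() for _ in range(S)] for _ in range(S)]

        def fitness(perm):
            # perm[r] = spin system assigned to residue r
            typing = sum(type_score[perm[r]][r] for r in range(R))
            connectivity = sum(link[perm[r]][perm[r + 1]] for r in range(R - 1))
            return typing + connectivity

        pop = [random.sample(range(S), R) for _ in range(40)]
        for _ in range(200):
            pop.sort(key=fitness, reverse=True)
            pop = pop[:20]                       # keep the fittest half
            while len(pop) < 40:                 # mutate survivors: swap two residues
                child = random.choice(pop[:20])[:]
                i, j = random.sample(range(R), 2)
                child[i], child[j] = child[j], child[i]
                pop.append(child)
        print("best assignment:", max(pop, key=fitness))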

  15. Probabilistic broadcasting of mixed states

    International Nuclear Information System (INIS)

    Li Lvjun; Li Lvzhou; Wu Lihua; Zou Xiangfu; Qiu Daowen

    2009-01-01

    It is well known that the non-broadcasting theorem proved by Barnum et al is a fundamental principle of quantum communication. As far as we are aware, optimal broadcasting (OB) is the only method to broadcast noncommuting mixed states approximately. In this paper, motivated by the probabilistic cloning of quantum states proposed by Duan and Guo, we propose a new way of broadcasting noncommuting mixed states: probabilistic broadcasting (PB), and we present a sufficient condition for PB of mixed states. To a certain extent, we generalize the probabilistic cloning theorem from pure states to mixed states, and in particular, we generalize the non-broadcasting theorem, since the case that commuting mixed states can be exactly broadcast can be thought of as a special instance of PB where the success ratio is 1. Moreover, we discuss probabilistic local broadcasting (PLB) of separable bipartite states.

  16. Solving multiconstraint assignment problems using learning automata.

    Science.gov (United States)

    Horn, Geir; Oommen, B John

    2010-02-01

    This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the
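    As a concrete example of the variable-structure stochastic automata mentioned above, the sketch below implements the classical linear reward-inaction (L_RI) update in a toy two-action environment; the environment's reward probabilities and the learning rate are invented for illustration.

        import random
        random.seed(2)

        def lri_step(p, chosen, rewarded, lam=0.1):
            """Linear reward-inaction update for a variable-structure
            stochastic automaton: on reward, move probability mass toward
            the chosen action; on penalty, leave the vector unchanged."""
            if rewarded:
                p = [pi * (1 - lam) for pi in p]
                p[chosen] += lam
            return p

        # Two actions (e.g., two candidate host nodes); action 0 is rewarded
        # more often in this toy environment.
        p = [0.5, 0.5]
        for _ in range(500):
            a = random.choices([0, 1], weights=p)[0]
            rewarded = random.random() < (0.8 if a == 0 else 0.3)
            p = lri_step(p, a, rewarded)
        print("converged action probabilities:", [round(x, 3) for x in p])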

  17. Probabilistic peak detection in CE-LIF for STR DNA typing.

    Science.gov (United States)

    Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel

    2017-07-01

    In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at an exhaustive use of the raw electropherogram data from a laser-induced fluorescence multi-CE system. As the raw data are informative down to the single data point, conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our proposed method assigns each data point a posterior probability reflecting its relevance with respect to the peak detection criteria. Peaks of low intensity generated from a truly existing allele can thus retain evidential value instead of being fully discarded and treated as a potential allele drop-out. This way of working utilizes the information available within each individual data point and avoids making early (binary) decisions in the data analysis that can lead to error propagation. The proposed method was tested and compared to the application of a set threshold, as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of peak height and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
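    A minimal sketch of the per-data-point idea: each intensity value is scored under a noise-only hypothesis and a peak hypothesis, and Bayes' rule yields a posterior probability instead of a binary threshold decision. The Gaussian models, peak height, and prior below are simplified placeholders, not the paper's actual model.

        import numpy as np

        def peak_posterior(y, sigma=1.0, peak_height=5.0, prior_peak=0.01):
            """Posterior probability that each data point stems from a peak,
            under two equal-variance Gaussian hypotheses (baseline noise vs.
            noise plus a peak of roughly known height)."""
            l_noise = np.exp(-0.5 * (y / sigma) ** 2)
            l_peak = np.exp(-0.5 * ((y - peak_height) / sigma) ** 2)
            num = prior_peak * l_peak          # shared normalizers cancel
            return num / (num + (1.0 - prior_peak) * l_noise)

        signal = np.array([0.1, -0.3, 0.2, 2.9, 4.8, 5.2, 3.1, 0.4, -0.1])
        print(np.round(peak_posterior(signal), 3))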

  18. Standardized approach for developing probabilistic exposure factor distributions

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, Randy L.; McKone, Thomas E.; Sohn, Michael D.

    2003-03-01

    The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (1) identify the most robust demographic variables within the population for a given exposure factor, (2) partition the population data into subsets based on these variables, and (3) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable, so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors, body weight (BW) and exposure duration (ED), using data for the U.S. population. For these factors we provide a first set of subpopulation-based archetypal distributions along with a methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.
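    The two-step structure could look like the following sketch: archetypal lognormal body-weight distributions fitted per subpopulation offline, then mixed according to a scenario's demographics to give the scenario-specific input distribution. All parameters are invented placeholders, not the paper's fitted values.

        import numpy as np
        rng = np.random.default_rng(3)

        # Step 1 (offline): archetypal lognormal body-weight distributions
        # per demographic subpopulation (parameters invented, in kg).
        archetypes = {
            ("female", "adult"): dict(mean=np.log(65.0), sigma=0.18),
            ("male",   "adult"): dict(mean=np.log(78.0), sigma=0.17),
            ("child",  "6-11"):  dict(mean=np.log(32.0), sigma=0.22),
        }

        # Step 2 (per assessment): mix archetypes according to the scenario's
        # demographic make-up to build the scenario-specific distribution.
        scenario = {("female", "adult"): 0.5, ("male", "adult"): 0.5}
        n = 10_000
        samples = np.concatenate([
            rng.lognormal(size=int(n * frac), **archetypes[key])
            for key, frac in scenario.items()
        ])
        print(f"scenario body weight: median {np.median(samples):.1f} kg, "
              f"95th pct {np.percentile(samples, 95):.1f} kg")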

  19. Evaluation of seismic reliability of steel moment resisting frames rehabilitated by concentric braces with probabilistic models

    Directory of Open Access Journals (Sweden)

    Fateme Rezaei

    2017-08-01

    Full Text Available The probability of failure of a structure designed by "deterministic methods" can be higher than that of a structure designed in a similar situation using probabilistic methods and models that consider "uncertainties". The main purpose of this research was to evaluate the seismic reliability of steel moment resisting frames rehabilitated with concentric braces by means of probabilistic models. To do so, three-story and nine-story steel moment resisting frames were designed based on the resistance criteria of the Iranian code and then rehabilitated with concentric braces based on controlling drift limitations. The probability of frame failure was evaluated using probabilistic models of the magnitude, the earthquake location, the ground shaking intensity in the area of the structure, a probabilistic model of building response (based on maximum lateral roof displacement), and probabilistic methods. These frames were analyzed under a subcrustal source using the probabilistic sampling method "Risk Tools" (RT). Comparing the exceedance probability of the building response curves (or selected points on them) of the three-story and nine-story model frames before and after rehabilitation shows that the seismic response of the rehabilitated frames was reduced and their reliability improved. The main variables affecting the probability of frame failure were also determined using sensitivity analysis with the FORM probabilistic method; the most effective were found in the magnitude model, the ground shaking intensity model error, and the magnitude model error.

  20. An approximate methods approach to probabilistic structural analysis

    Science.gov (United States)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.

  1. Probabilistic methods for rotordynamics analysis

    Science.gov (United States)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
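    For intuition, the sketch below estimates the probability of instability of a single-degree-of-freedom system by plain Monte Carlo over random mass, damping, and stiffness, using the eigenvalue criterion; the fast probability integration and adaptive importance sampling of the paper are replaced by brute force, and the parameter distributions are invented.

        import numpy as np
        rng = np.random.default_rng(4)

        def unstable(m, c, k):
            """Eigenvalue criterion for m*x'' + c*x' + k*x = 0 recast as a
            first-order system: any eigenvalue with positive real part
            means instability."""
            A = np.array([[0.0, 1.0], [-k / m, -c / m]])
            return np.real(np.linalg.eigvals(A)).max() > 0.0

        trials = 20_000
        hits = sum(
            unstable(m=rng.normal(1.0, 0.05),
                     c=rng.normal(0.02, 0.03),   # damping may go negative
                     k=rng.normal(4.0, 0.2))
            for _ in range(trials)
        )
        print("estimated probability of instability:", hits / trials)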

  2. Automating 3D reconstruction using a probabilistic grammar

    Science.gov (United States)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2015-10-01

    3D reconstruction of objects from point clouds acquired with a laser scanner is still a laborious task in many applications. Automating the 3D reconstruction process is an ongoing research topic and suffers from the complex structure of the data. The main difficulty is due to the lack of knowledge of the structure of real-world objects. In this paper, we accumulate such structure knowledge in a probabilistic grammar learned from examples in the same category. The rules of the grammar capture compositional structures at different levels, and a feature-dependent probability function is attached to every rule. The learned grammar can be used to parse new 3D point clouds, organize segment patches in a hierarchical way, and assign them meaningful labels. The parsed semantics can then be used to guide the reconstruction algorithms automatically. Some examples are given to explain the method.

  3. Growing hierarchical probabilistic self-organizing graphs.

    Science.gov (United States)

    López-Rubio, Ezequiel; Palomo, Esteban José

    2011-07-01

    Since the introduction of the growing hierarchical self-organizing map, much work has been done on self-organizing neural models with a dynamic structure. These models allow adjusting the layers of the model to the features of the input dataset. Here we propose a new self-organizing model which is based on a probabilistic mixture of multivariate Gaussian components. The learning rule is derived from the stochastic approximation framework, and a probabilistic criterion is used to control the growth of the model. Moreover, the model is able to adapt to the topology of each layer, so that a hierarchy of dynamic graphs is built. This overcomes the limitations of the self-organizing maps with a fixed topology, and gives rise to a faithful visualization method for high-dimensional data.

  4. A multi-objective approach to the assignment of stock keeping units to unidirectional picking lines

    Directory of Open Access Journals (Sweden)

    Le Roux, G. J.

    2017-05-01

    Full Text Available An order picking system in a distribution centre consisting of parallel unidirectional picking lines is considered. The objectives are to minimise the walking distance of the pickers, the largest volume of stock on a picking line over all picking lines, the number of small packages, and the total penalty incurred for late distributions. The problem is formulated as a multi-objective multiple knapsack problem that cannot be solved exactly in realistic time. Population-based algorithms, including the artificial bee colony algorithm and the genetic algorithm, are therefore implemented. The results obtained from all algorithms indicate a substantial improvement on all objectives relative to historical assignments. The genetic algorithm delivers the best performance.

  5. Probabilistic finite elements for fracture mechanics

    Science.gov (United States)

    Besterfield, Glen

    1988-01-01

    The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near-crack-tip singular strain embedded in the element is used. Probabilistic descriptions of the stress intensity factors, such as their expectation, covariance, and correlation, are calculated for random load, random material properties, and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.

  6. Bisimulations Meet PCTL Equivalences for Probabilistic Automata

    DEFF Research Database (Denmark)

    Song, Lei; Zhang, Lijun; Godskesen, Jens Chr.

    2011-01-01

    Probabilistic automata (PA) [20] have been successfully applied in the formal verification of concurrent and stochastic systems. Efficient model checking algorithms have been studied, where the most often used logics for expressing properties are based on PCTL [11] and its extension PCTL∗ [4...

  7. Probabilistic Mu-Calculus

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Mardare, Radu Iulian; Xue, Bingtian

    2016-01-01

    We introduce a version of the probabilistic µ-calculus (PMC) built on top of a probabilistic modal logic that allows encoding n-ary inequational conditions on transition probabilities. PMC extends previously studied calculi and we prove that, despite its expressiveness, it enjoys a series of good metaproperties. Firstly, we prove the decidability of satisfiability checking by establishing the small model property. An algorithm for deciding the satisfiability problem is developed. As a second major result, we provide a complete axiomatization for the alternation-free fragment of PMC. The completeness proof

  8. A Unified Probabilistic Framework for Dose–Response Assessment of Human Health Effects

    Science.gov (United States)

    Slob, Wout

    2015-01-01

    Background When chemical health hazards have been identified, probabilistic dose–response assessment (“hazard characterization”) quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. Objectives We developed a unified framework for probabilistic dose–response assessment. Methods We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose–response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, “effect metrics” can be specified to define “toxicologically equivalent” sizes for this underlying individual response; and d) dose–response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose–response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Results Probabilistically derived exposure limits are based on estimating a “target human dose” (HDMI), which requires risk management–informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HDMI values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%–10% effect sizes. Conclusions Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk

  9. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2015-01-01

    Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards

  10. Fast algorithm for probabilistic bone edge detection (FAPBED)

    Science.gov (United States)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  11. Aggregated wind power generation probabilistic forecasting based on particle filter

    International Nuclear Information System (INIS)

    Li, Pai; Guan, Xiaohong; Wu, Jiang

    2015-01-01

    Highlights: • A new method for probabilistic forecasting of aggregated wind power generation. • A dynamic system is established based on a numerical weather prediction model. • The new method handles the non-Gaussian and time-varying wind power uncertainties. • Particle filter is applied to forecast predictive densities of wind generation. - Abstract: The probability distribution of the aggregated wind power generation in a region is an important issue for daily power system operation. This paper presents a novel method to forecast the predictive densities of the aggregated wind power generation from several geographically distributed wind farms, considering the non-Gaussian and non-stationary characteristics of wind power uncertainties. Based on a mesoscale numerical weather prediction model, a dynamic system is established to formulate the relationship between the atmospheric and near-surface wind fields of geographically distributed wind farms. A recursively backtracking framework based on the particle filter is applied to estimate the atmospheric state from the near-surface wind power generation measurements, and to forecast possible samples of the aggregated wind power generation. The predictive densities of the aggregated wind power generation are then estimated from these predicted samples by a kernel density estimator. In case studies, the new method is tested on a system of 9 wind farms in the Midwestern United States. The testing results show that the new method provides competitive interval forecasts for the aggregated wind power generation compared with conventional statistics-based models, which validates its effectiveness.

  12. Formalizing Probabilistic Safety Claims

    Science.gov (United States)

    Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.

    2011-01-01

    A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.

  13. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document, and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more.

  14. ENSO-based probabilistic forecasts of March-May U.S. tornado and hail activity

    Science.gov (United States)

    Lepore, Chiara; Tippett, Michael K.; Allen, John T.

    2017-09-01

    Extended logistic regression is used to predict March-May severe convective storm (SCS) activity based on the preceding December-February (DJF) El Niño-Southern Oscillation (ENSO) state. The spatially resolved probabilistic forecasts are verified against U.S. tornado counts, hail events, and two environmental indices for severe convection. The cross-validated skill is positive for roughly a quarter of the U.S. Overall, indices are predicted with more skill than are storm reports, and hail events are predicted with more skill than tornado counts. Skill is higher in the cool phase of ENSO (La Niña like) when overall SCS activity is higher. SCS forecasts based on the predicted DJF ENSO state from coupled dynamical models initialized in October of the previous year extend the lead time with only a modest reduction in skill compared to forecasts based on the observed DJF ENSO state.
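    Extended logistic regression augments an ordinary logistic model with a (transformed) threshold as an extra predictor, so a single fitted equation yields the full predictive distribution. A sketch with invented coefficients follows; the square-root threshold transform is a common choice in the literature, not necessarily the study's.

        import numpy as np

        def ext_logistic(enso, q, a=-1.0, b=-0.8, c=0.9):
            """Extended logistic regression: P(count <= q) as a function of
            the DJF ENSO index and the threshold q itself. Coefficients are
            illustrative, not fitted values from the study."""
            return 1.0 / (1.0 + np.exp(-(a + b * enso + c * np.sqrt(q))))

        # P(March-May count <= q) for a La Nina-like winter (enso = -1).
        for q in (5, 10, 20):
            print(q, round(ext_logistic(-1.0, q), 3))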

  15. Probabilistic fuzzy systems as additive fuzzy systems

    NARCIS (Netherlands)

    Almeida, R.J.; Verbeek, N.; Kaymak, U.; Costa Sousa, da J.M.; Laurent, A.; Strauss, O.; Bouchon-Meunier, B.; Yager, R.

    2014-01-01

    Probabilistic fuzzy systems combine a linguistic description of the system behaviour with statistical properties of data. The approach was originally derived based on Zadeh's concept of the probability of a fuzzy event. Two possible and equivalent additive reasoning schemes were proposed.

  16. Empirical study of self-configuring genetic programming algorithm performance and behaviour

    International Nuclear Information System (INIS)

    Semenkin, E; Semenkina, M (Siberian State Aerospace University named after Academician M.F. Reshetnev, 31 Krasnoyarskiy Rabochiy prospect, Krasnoyarsk, 660014, Russian Federation)

    2015-01-01

    The behaviour of the self-configuring genetic programming algorithm with a modified uniform crossover operator that implements selective pressure at the recombination stage is studied on symbolic programming problems. The interplay of the operator's probabilistic rates is studied, and the role of operator variants in algorithm performance is investigated. Algorithm modifications based on the results of these investigations are suggested. The performance improvement of the algorithm is demonstrated by a comparative analysis of the suggested algorithms on benchmark and real-world problems.

  17. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.

  18. Probabilistic Infinite Secret Sharing

    OpenAIRE

    Csirmaz, László

    2013-01-01

    The study of probabilistic secret sharing schemes using arbitrary probability spaces and possibly infinite number of participants lets us investigate abstract properties of such schemes. It highlights important properties, explains why certain definitions work better than others, connects this topic to other branches of mathematics, and might yield new design paradigms. A probabilistic secret sharing scheme is a joint probability distribution of the shares and the secret together with a colle...

  19. Probabilistic Harmonic Analysis on Distributed Photovoltaic Integration Considering Typical Weather Scenarios

    Science.gov (United States)

    Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang

    2017-05-01

    Distributed Generation (DG) integrating into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. On the other hand, due to the randomness of wind and solar irradiation, the output of DG is also random, which leads to uncertainty in the harmonics generated by the DG. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in the distribution network after distributed photovoltaic (DPV) integration under different weather conditions, mainly sunny, cloudy, rainy, and snowy days. The probabilistic distribution function of the DPV output power in the different typical weather conditions was acquired via maximum likelihood parameter estimation. The Monte-Carlo simulation method was adopted to calculate the probabilistic distribution of the harmonic voltage content at different frequency orders as well as the total harmonic distortion (THD) in typical weather conditions. The case study was based on the IEEE33 system, and the results for the harmonic voltage content probabilistic distribution as well as the THD in typical weather conditions were compared.

  20. OCA-P, a deterministic and probabilistic fracture-mechanics code for application to pressure vessels

    International Nuclear Information System (INIS)

    Cheverton, R.D.; Ball, D.G.

    1984-05-01

    The OCA-P code is a probabilistic fracture-mechanics code that was prepared specifically for evaluating the integrity of pressurized-water reactor vessels when subjected to overcooling-accident loading conditions. The code has two-dimensional- and some three-dimensional-flaw capability; it is based on linear-elastic fracture mechanics; and it can treat cladding as a discrete region. Both deterministic and probabilistic analyses can be performed. For the former analysis, it is possible to conduct a search for critical values of the fluence and the nil-ductility reference temperature corresponding to incipient initiation of the initial flaw. The probabilistic portion of OCA-P is based on Monte Carlo techniques, and simulated parameters include fluence, flaw depth, fracture toughness, nil-ductility reference temperature, and concentrations of copper, nickel, and phosphorus. Plotting capabilities include the construction of critical-crack-depth diagrams (deterministic analysis) and various histograms (probabilistic analysis).

  1. Using HL7 in hospital staff assignments.

    Science.gov (United States)

    Unluturk, Mehmet S

    2014-02-01

    Hospital staff assignments are the instructions that allocate the hospital staff members to the hospital beds. Currently, hospital administrators make the assignments without accessing the information regarding the occupancy of the hospital beds and the acuity of the patient. As a result, administrators cannot distinguish between occupied and unoccupied beds, and may therefore assign staff to unoccupied beds. This gives rise to uneven and inefficient staff assignments. In this paper, the hospital admission-discharge-transfer (ADT) system is employed both as a data source and an assignment device to create staff assignments. When the patient data is newly added or modified, the ADT system updates the assignment software client with the relevant data. Based on the relevant data, the assignment software client is able to construct staff assignments in a more efficient way. © 2013 Elsevier Ltd. All rights reserved.
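    A minimal sketch of how an assignment client might pull bed-occupancy facts out of an HL7 v2 ADT feed; the message content and field usage below are fabricated for illustration, and a production system would use a proper HL7 library and the site's own conventions.

        # Naive field splitting of a fabricated HL7 v2 ADT^A01 (admit) message.
        msg = ("MSH|^~\\&|ADT|HOSP|ASSIGN|HOSP|202401011200||ADT^A01|1|P|2.3\r"
               "PID|1||12345||DOE^JANE\r"
               "PV1|1|I|WARD1^ROOM2^BED3")

        segments = {line.split("|")[0]: line.split("|") for line in msg.split("\r")}
        event = segments["MSH"][8]              # MSH-9 message type, e.g. ADT^A01
        ward, room, bed = segments["PV1"][3].split("^")  # PV1-3 patient location
        occupied = event.startswith("ADT^A01")  # admit event marks the bed occupied

        print(f"{ward}/{room}/{bed} occupied={occupied}")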

  2. Probabilistic brains: knowns and unknowns

    Science.gov (United States)

    Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E

    2015-01-01

    There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561

  3. Probabilistic simple sticker systems

    Science.gov (United States)

    Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2017-04-01

    A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper entitled "DNA computing, sticker systems and universality" (Acta Informatica, vol. 35, pp. 401-420, 1998). A sticker system uses the Watson-Crick complementarity feature of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings for the language are selected according to some probabilistic requirement. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.
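    In LaTeX, the probability of a generated string described above can be written as follows (notation assumed for illustration, not taken from the paper):

        p(w) \;=\; \prod_{k=1}^{n} p(z_k),

    where z_1, \ldots, z_n are the occurrences of axioms used in the computation of w, and the language can then be cut by a probabilistic requirement such as L_{>\eta} = \{\, w : p(w) > \eta \,\}.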

  4. Probabilistic forecasting and Bayesian data assimilation

    CERN Document Server

    Reich, Sebastian

    2015-01-01

    In this book the authors describe the principles and methods behind probabilistic forecasting and Bayesian data assimilation. Instead of focusing on particular application areas, the authors adopt a general dynamical systems approach, with a profusion of low-dimensional, discrete-time numerical examples designed to build intuition about the subject. Part I explains the mathematical framework of ensemble-based probabilistic forecasting and uncertainty quantification. Part II is devoted to Bayesian filtering algorithms, from classical data assimilation algorithms such as the Kalman filter, variational techniques, and sequential Monte Carlo methods, through to more recent developments such as the ensemble Kalman filter and ensemble transform filters. The McKean approach to sequential filtering in combination with coupling of measures serves as a unifying mathematical framework throughout Part II. Assuming only some basic familiarity with probability, this book is an ideal introduction for graduate students in ap...

  5. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    Science.gov (United States)

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p cranial nerves. Probabilistic tracking with a gradual
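    The two label-overlap measures used above are straightforward to compute from binary masks; a sketch follows, with toy 1-D arrays standing in for the voxel volumes.

        import numpy as np

        def dice(depiction, truth):
            """Dice similarity coefficient between a binary tract depiction
            and the ground-truth nerve mask."""
            inter = np.logical_and(depiction, truth).sum()
            return 2.0 * inter / (depiction.sum() + truth.sum())

        def false_positive_error(depiction, truth):
            """Fraction of depicted voxels lying outside the ground truth."""
            return np.logical_and(depiction, ~truth).sum() / depiction.sum()

        dep   = np.array([0, 1, 1, 1, 0, 0], dtype=bool)
        truth = np.array([0, 0, 1, 1, 1, 0], dtype=bool)
        print(round(dice(dep, truth), 3), round(false_positive_error(dep, truth), 3))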

  6. Probabilistic structural damage identification based on vibration data

    International Nuclear Information System (INIS)

    Hao, H.; Xia, Y.

    2001-01-01

    Vibration-based methods have been rapidly developed and applied to detect structural damage in the civil, mechanical and aerospace engineering communities in the last two decades. But uncertainties existing in the structural model and the measured vibration data might lead to unreliable results. This paper presents some recent research results that tackle the above-mentioned uncertainty problems. By assuming each of the FE model parameters and measured vibration data to be a normally distributed random variable, a probabilistic damage detection procedure is developed based on the perturbation method and validated by the Monte Carlo simulation technique. With this technique, the damage probability of each structural element can be determined. The method developed has been verified by applying it to identify the damage of laboratory-tested structures. It was proven that, as compared to the deterministic damage identification method, the present method can not only reduce the possibility of false identification, but also give the identification results in terms of probability, which is deemed more realistic and practical in detecting possible damage in a structure. It has also been found that the modal data included in the damage identification analysis have a great influence on the identification results. With a sensitivity study, an optimal measurement set for damage detection is determined. This set includes the optimal measurement locations and the most appropriate modes that should be used in the damage identification analysis. Numerical results indicated that if the optimal set determined in a pre-analysis is used in the damage detection, better results will be achieved. (author)

  7. Probabilistic Design and Analysis Framework

    Science.gov (United States)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis process that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method, at the component level, and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine the effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness, which are varied for the disk. Probabilistic analysis is carried out using developing software packages like System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.

  8. Probabilistic Role Models and the Guarded Fragment

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2004-01-01

    We propose a uniform semantic framework for interpreting probabilistic concept subsumption and probabilistic role quantification through statistical sampling distributions. This general semantic principle serves as the foundation for the development of a probabilistic version of the guarded fragment of first-order logic. A characterization of equivalence in that logic in terms of bisimulations is given.

  9. Probabilistic role models and the guarded fragment

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    We propose a uniform semantic framework for interpreting probabilistic concept subsumption and probabilistic role quantification through statistical sampling distributions. This general semantic principle serves as the foundation for the development of a probabilistic version of the guarded fragment of first-order logic. A characterization of equivalence in that logic in terms of bisimulations is given.

  10. A tiered approach for probabilistic ecological risk assessment of contaminated sites

    International Nuclear Information System (INIS)

    Zolezzi, M.; Nicolella, C.; Tarazona, J.V.

    2005-01-01

    This paper presents a tiered methodology for probabilistic ecological risk assessment. The proposed approach starts from a deterministic comparison (ratio) of a single exposure concentration and a threshold or safe level calculated from a dose-response relationship, goes through comparison of the probabilistic distributions that describe exposure values and toxicological responses of organisms to the chemical of concern, and finally determines the so-called distribution-based quotients (DBQs). To illustrate the proposed approach, soil concentrations of 1,2,4-trichlorobenzene (1,2,4-TCB) measured at an industrial contaminated site were used for site-specific probabilistic ecological risk assessment. By using probabilistic distributions, the risk, which exceeds a level of concern for soil organisms under the deterministic approach, is associated with the presence of hot spots reaching concentrations able to acutely affect more than 50% of the soil species, while the large majority of the area presents 1,2,4-TCB concentrations below those reported as toxic.

  11. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    Science.gov (United States)

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
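    In LaTeX, the minimised objective is the Kullback-Leibler divergence between the actual and ideal closed-loop joint pdfs (notation assumed for illustration):

        \mathcal{D}_{\mathrm{KL}}\left(f \,\middle\|\, f^{I}\right)
        = \int f(x,u)\,\ln\frac{f(x,u)}{f^{I}(x,u)}\,\mathrm{d}x\,\mathrm{d}u,

    where f(x,u) is the actual joint pdf of states and control inputs of the closed loop and f^{I}(x,u) is the ideal one.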

  12. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  13. De novo clustering methods outperform reference-based methods for assigning 16S rRNA gene sequences to operational taxonomic units

    Directory of Open Access Journals (Sweden)

    Sarah L. Westcott

    2015-12-01

    Full Text Available Background. 16S rRNA gene sequences are routinely assigned to operational taxonomic units (OTUs) that are then used to analyze complex microbial communities. A number of methods have been employed to carry out the assignment of 16S rRNA gene sequences to OTUs, leading to confusion over which method is optimal. A recent study suggested that a clustering method should be selected based on its ability to generate stable OTU assignments that do not change as additional sequences are added to the dataset. In contrast, we contend that the quality of the OTU assignments, the ability of the method to properly represent the distances between the sequences, is more important. Methods. Our analysis implemented six de novo clustering algorithms, including single linkage, complete linkage, average linkage, abundance-based greedy clustering, distance-based greedy clustering, and Swarm, as well as the open- and closed-reference methods. Using two previously published datasets we used the Matthews Correlation Coefficient (MCC) to assess the stability and quality of OTU assignments. Results. The stability of OTU assignments did not reflect the quality of the assignments. Depending on the dataset being analyzed, the average linkage and the distance- and abundance-based greedy clustering methods generated OTUs that were more likely to represent the actual distances between sequences than the open- and closed-reference methods. We also demonstrated that for the greedy algorithms VSEARCH produced assignments that were comparable to those produced by USEARCH, making VSEARCH a viable free and open-source alternative to USEARCH. Further interrogation of the reference-based methods indicated that when USEARCH or VSEARCH were used to identify the closest reference, the OTU assignments were sensitive to the order of the reference sequences because the reference sequences can be identical over the region being considered. More troubling was the observation that while both USEARCH and
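    The MCC used here is computed over all sequence pairs: a pair of sequences within the distance threshold that shares an OTU counts as a true positive, a pair beyond the threshold in different OTUs as a true negative, and so on. A sketch with invented pair counts:

        import math

        def mcc(tp, tn, fp, fn):
            """Matthews correlation coefficient over sequence-pair decisions."""
            denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return (tp * tn - fp * fn) / denom if denom else 0.0

        # Invented pair counts for illustration.
        print(round(mcc(tp=9000, tn=120000, fp=700, fn=1100), 3))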

  14. Probabilistic Programming (Invited Talk)

    OpenAIRE

    Yang, Hongseok

    2017-01-01

    Probabilistic programming refers to the idea of using standard programming constructs for specifying probabilistic models from machine learning and statistics, and employing generic inference algorithms for answering various queries on these models, such as posterior inference and estimation of model evidence. Although this idea itself is not new and was, in fact, explored by several programming-language and statistics researchers in the early 2000s, it is only in the last few years that proba...

  15. Efficacy of a web-based intelligent tutoring system for communicating genetic risk of breast cancer: a fuzzy-trace theory approach.

    Science.gov (United States)

    Wolfe, Christopher R; Reyna, Valerie F; Widmer, Colin L; Cedillos, Elizabeth M; Fisher, Christopher R; Brust-Renck, Priscila G; Weil, Audrey M

    2015-01-01

    Many healthy women consider genetic testing for breast cancer risk, yet BRCA testing issues are complex. The objective was to determine whether an intelligent tutor, BRCA Gist, grounded in fuzzy-trace theory (FTT), increases gist comprehension and knowledge about genetic testing for breast cancer risk, improving decision making. In 2 experiments, 410 healthy undergraduate women were randomly assigned to 1 of 3 groups: an online module using a Web-based tutoring system (BRCA Gist) that uses artificial intelligence technology, a second group that read highly similar content from the National Cancer Institute (NCI) Web site, and a third that completed an unrelated tutorial. BRCA Gist applied FTT and was designed to help participants develop gist comprehension of topics relevant to decisions about BRCA genetic testing, including how breast cancer spreads, inherited genetic mutations, and base rates. We measured content knowledge, gist comprehension of decision-relevant information, interest in testing, and genetic risk and testing judgments. Control knowledge scores ranged from 54% to 56%, NCI improved significantly to 65% and 70%, and BRCA Gist improved significantly more, to 75% and 77%. Intelligent tutors, such as BRCA Gist, are scalable, cost-effective ways of helping people understand complex issues, improving decision making. © The Author(s) 2014.

  16. Model checking optimal finite-horizon control for probabilistic gene regulatory networks.

    Science.gov (United States)

    Wei, Ou; Guo, Zonghao; Niu, Yun; Liao, Wenyuan

    2017-12-14

    Probabilistic Boolean networks (PBNs) have been proposed for analyzing external control in gene regulatory networks with incorporation of uncertainty. A context-sensitive PBN with perturbation (CS-PBNp), extending a PBN with context-sensitivity to reflect the inherent biological stability and random perturbations to express the impact of external stimuli, is considered to be more suitable for modeling small biological systems intervened by conditions from the outside. In this paper, we apply probabilistic model checking, a formal verification technique, to optimal control for a CS-PBNp that minimizes the expected cost over a finite control horizon. We first describe a procedure for modeling a CS-PBNp using the language provided by the widely used probabilistic model checker PRISM. We then analyze the reward-based temporal properties and the computation in probabilistic model checking; based on this analysis, we provide a method to formulate the optimal control problem as minimum reachability reward properties. Furthermore, we incorporate control and state cost information into the PRISM code of a CS-PBNp such that automated model checking of a minimum reachability reward property on the code gives the solution to the optimal control problem. We conduct experiments on two examples, an apoptosis network and a WNT5A network. Preliminary experimental results show the feasibility and effectiveness of our approach. The approach based on probabilistic model checking for optimal control avoids explicit computation of the large-size state transition relations associated with PBNs. It enables a natural depiction of the dynamics of gene regulatory networks, and provides a canonical form to formulate optimal control problems using temporal properties that can be automatically solved by leveraging the analysis power of the underlying model checking engines. This work will be helpful for further utilization of the advances in formal verification techniques in systems biology.
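    The underlying optimisation is a finite-horizon expected-cost problem over a controlled Markov chain, which PRISM solves via reward properties; the sketch below instead solves a toy two-state instance directly by backward induction (dynamic programming), with invented transition matrices and costs.

        import numpy as np

        def finite_horizon_control(P, state_cost, control_cost, T):
            """Backward-induction solution of the expected-cost problem:
            P[u] is the transition matrix of a CS-PBNp-like chain under
            control u; returns values and a time-indexed policy."""
            n = P[0].shape[0]
            V = np.zeros(n)
            policy = []
            for _ in range(T):
                Q = np.stack([control_cost[u] + state_cost + P[u] @ V
                              for u in range(len(P))])
                policy.append(Q.argmin(axis=0))   # best control per state
                V = Q.min(axis=0)
            return V, policy[::-1]                # policy[t][s] = control at time t

        P0 = np.array([[0.9, 0.1], [0.4, 0.6]])   # no intervention
        P1 = np.array([[0.5, 0.5], [0.1, 0.9]])   # intervention flips dynamics
        V, pol = finite_horizon_control([P0, P1],
                                        state_cost=np.array([5.0, 0.0]),
                                        control_cost=[0.0, 1.0], T=5)
        print("expected cost from each state:", np.round(V, 2))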

  17. Probabilistic safety assessment based expert systems in support of dynamic risk assessment

    International Nuclear Information System (INIS)

    Varde, P.V.; Sharma, U.L.; Marik, S.K.; Raina, V.K.; Tikku, A.C.

    2006-01-01

    Probabilistic Safety Assessment (PSA) studies are being performed the world over as part of integrated risk assessment for Nuclear Power Plants, and in many cases PSA insight is utilized in support of decision making. Though modern plants are built with inherent safety provisions, particularly to reduce the supervisory requirements during the initial period of an accident, it is always desirable to develop an efficient, user-friendly, real-time operator advisory system for the handling of plant transients and emergencies, which would be of immense benefit for the enhancement of the operational safety of the plant. This paper discusses an integrated approach to the development of an operator support system. In this approach, PSA methodology and the insight obtained from PSA have been utilized for the development of a knowledge-based or rule-based expert system. While an Artificial Neural Network (ANN) approach has been employed for transient identification, a rule-based expert system shell environment was used for the development of the diagnostic module in this system. An attempt has been made to demonstrate that this approach offers an efficient framework for addressing requirements related to the handling of real-time/dynamic scenarios. (author)

  18. Probabilistic Harmonic Modeling of Wind Power Plants

    DEFF Research Database (Denmark)

    Guest, Emerson; Jensen, Kim H.; Rasmussen, Tonny Wederberg

    2017-01-01

    A probabilistic sequence domain (SD) harmonic model of a grid-connected voltage-source converter is used to estimate harmonic emissions in a wind power plant (WPP) comprised of Type-IV wind turbines. The SD representation naturally partitioned converter generated voltage harmonics into those...... with deterministic phase and those with probabilistic phase. A case study performed on a string of ten 3MW, Type-IV wind turbines implemented in PSCAD was used to verify the probabilistic SD harmonic model. The probabilistic SD harmonic model can be employed in the planning phase of WPP projects to assess harmonic...

  19. Topics in Probabilistic Judgment Aggregation

    Science.gov (United States)

    Wang, Guanchun

    2011-01-01

    This dissertation is a compilation of several studies that are united by their relevance to probabilistic judgment aggregation. In the face of complex and uncertain events, panels of judges are frequently consulted to provide probabilistic forecasts, and aggregation of such estimates in groups often yields better results than could have been made…
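
    As a concrete (hypothetical) illustration of the aggregation step, here are two standard pooling rules for panel probability estimates; this is not the dissertation's specific method, just the flavor of the problem.

    ```python
    # Minimal sketch: aggregating probabilistic judgments from a panel.
    import numpy as np

    forecasts = np.array([0.6, 0.7, 0.55, 0.8])   # hypothetical panel estimates

    linear_pool = forecasts.mean()                # simple average of probabilities

    # geometric pooling on the log-odds scale, then back-transform
    log_odds = np.log(forecasts / (1 - forecasts))
    log_odds_pool = 1 / (1 + np.exp(-log_odds.mean()))

    print(f"linear pool: {linear_pool:.3f}, log-odds pool: {log_odds_pool:.3f}")
    ```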

  20. A probabilistic approach to delineating functional brain regions

    DEFF Research Database (Denmark)

    Kalbitzer, Jan; Svarer, Claus; Frokjaer, Vibe G

    2009-01-01

    The purpose of this study was to develop a reliable observer-independent approach to delineating volumes of interest (VOIs) for functional brain regions that are not identifiable on structural MR images. The case is made for the raphe nuclei, a collection of nuclei situated in the brain stem known...... to be densely packed with serotonin transporters (5-hydroxytryptaminic [5-HTT] system). METHODS: A template set for the raphe nuclei, based on their high content of 5-HTT as visualized in parametric (11)C-labeled 3-amino-4-(2-dimethylaminomethyl-phenylsulfanyl)-benzonitrile PET images, was created for 10...... healthy subjects. The templates were subsequently included in the region sets used in a previously published automatic MRI-based approach to create an observer- and activity-independent probabilistic VOI map. The probabilistic map approach was tested in a different group of 10 subjects and compared...
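
    The essence of a probabilistic VOI map can be sketched in a few lines: each voxel's value is the fraction of subjects whose individually delineated VOI contains it. The shapes and masks below are toy stand-ins for the template data.

    ```python
    # Minimal sketch of a probabilistic volume-of-interest (VOI) map:
    # voxel value = fraction of subjects whose VOI covers that voxel.
    import numpy as np

    n_subjects, shape = 10, (8, 8, 8)
    rng = np.random.default_rng(0)
    # Hypothetical binary VOI masks, one per subject, already in template space.
    masks = rng.random((n_subjects, *shape)) > 0.7

    prob_map = masks.mean(axis=0)           # voxelwise probability in [0, 1]
    core = prob_map >= 0.5                  # e.g. keep voxels shared by >= 50%
    print("max overlap:", prob_map.max(), "core VOI size:", int(core.sum()))
    ```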

  1. A risk-based classification scheme for genetically modified foods. III: Evaluation using a panel of reference foods.

    Science.gov (United States)

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.
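
    The paper does not specify its concordance statistic; as a hedged illustration, inter-rater agreement on ordinal concern levels is often summarized with Cohen's kappa, computed as below on hypothetical ratings.

    ```python
    # Minimal sketch: Cohen's kappa for two raters (an assumed statistic;
    # the ratings below are hypothetical, not the study's data).
    from collections import Counter

    def cohens_kappa(r1, r2):
        n = len(r1)
        p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        p_exp = sum(c1[k] * c2[k] for k in c1) / n**2            # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical concern levels (1-4) assigned to 20 reference GM foods.
    rater1 = [1, 1, 2, 2, 3, 3, 4, 4, 1, 2, 2, 3, 3, 4, 1, 1, 2, 3, 4, 4]
    rater2 = [1, 1, 2, 3, 3, 3, 4, 4, 1, 2, 2, 3, 2, 4, 1, 2, 2, 3, 4, 4]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
    ```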

  2. Behavioral genetics and criminal responsibility at the courtroom.

    Science.gov (United States)

    Tatarelli, Roberto; Del Casale, Antonio; Tatarelli, Caterina; Serata, Daniele; Rapinesi, Chiara; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2014-04-01

    Several questions arise from the recent use of behavioral genetic research data in the courtroom. Ethical issues concerning the influence of biological factors on human free will must be considered when specific gene patterns are advocated to constrain the court's judgment, especially regarding violent crimes. Aggression genetics studies are both difficult to interpret and inconsistent; hence, in the absence of a psychiatric diagnosis, genetic data are currently difficult to prioritize in the courtroom. The judge's probabilistic considerations in formulating a sentence must take into account causality, and the latter cannot currently be ensured by genetic data. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Investigating Students' Use and Adoption of "With-Video Assignments": Lessons Learnt for Video-Based Open Educational Resources

    Science.gov (United States)

    Pappas, Ilias O.; Giannakos, Michail N.; Mikalef, Patrick

    2017-01-01

    The use of video-based open educational resources is widespread, and includes multiple approaches to implementation. In this paper, the term "with-video assignments" is introduced to portray video learning resources enhanced with assignments. The goal of this study is to examine the factors that influence students' intention to adopt…

  4. Real life working shift assignment problem

    Science.gov (United States)

    Sze, San-Nah; Kwek, Yeek-Ling; Tiong, Wei-King; Chiew, Kang-Leng

    2017-07-01

    This study concerns the working shift assignment in an outlet of Supermarket X in Eastern Mall, Kuching. The working shift assignment needs to be solved at least once every month. The current approval process for working shifts is too troublesome and time-consuming. Furthermore, the management staff cannot get an overview of manpower and the working shift schedule. Thus, the aim of this study is to develop a working shift assignment simulation and propose a working shift assignment solution. The main objective of this study is to fulfill manpower demand at minimum operation cost. Besides, the day-off and meal-break policies should be fulfilled accordingly. A demand-based heuristic is proposed to assign working shifts, and the quality of the solution is evaluated using real data.
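
    The abstract does not give the heuristic's details; the sketch below shows one plausible demand-based greedy rule (shift names, demands, and the balancing criterion are assumptions for illustration only).

    ```python
    # Minimal sketch: greedily assign each worker to the currently most
    # under-staffed shift. Day-off and meal-break policies are omitted.
    demand = {"morning": 3, "afternoon": 4, "evening": 2}   # staff needed
    staff = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]

    assignment = {s: [] for s in demand}
    for worker in staff:
        # pick the shift with the largest remaining deficit
        shift = max(demand, key=lambda s: demand[s] - len(assignment[s]))
        if demand[shift] - len(assignment[shift]) > 0:
            assignment[shift].append(worker)

    print(assignment)
    ```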

  5. Application of probabilistic precipitation forecasts from a ...

    African Journals Online (AJOL)

    Application of probabilistic precipitation forecasts from a deterministic model towards increasing the lead-time of flash flood forecasts in South Africa. ... The procedure is applied to a real flash flood event and the ensemble-based rainfall forecasts are verified against rainfall estimated by the SAFFG system. The approach ...

  6. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Given the increasingly significant energy crisis, the exploitation and utilization of new clean energy sources are gaining more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve prediction performance for short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries are reconstructed by measuring the sample entropy of the decomposed modes. Base models are then established for each subseries. On this basis, the combined model is developed based on the optimal virtual prediction scheme, whose weight matrix is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. In addition, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
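
    As a much-simplified illustration of the probabilistic-interval idea (not the paper's quantile regression averaging), prediction intervals can be read off the quantiles of an ensemble of point forecasts; all numbers below are synthetic.

    ```python
    # Minimal sketch: empirical-quantile prediction intervals from an
    # ensemble of point forecasts (a stand-in for quantile regression
    # averaging; data are random toy values).
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical wind power point forecasts from 5 base models, 100 steps.
    ensemble = 50 + 10 * rng.standard_normal((100, 5))

    combined = ensemble.mean(axis=1)                      # combined point forecast
    lo, hi = np.quantile(ensemble, [0.05, 0.95], axis=1)  # nominal 90% interval
    print(combined[:3].round(1), lo[:3].round(1), hi[:3].round(1))
    ```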

  7. Probabilistic Fatigue Analysis of Jacket Support Structures for Offshore Wind Turbines Exemplified on Tubular Joints

    OpenAIRE

    Kelma, Sebastian; Schaumann, Peter

    2015-01-01

    The design of offshore wind turbines is usually based on the semi-probabilistic safety concept. Using probabilistic methods, the aim is to find an advanced structural design of OWTs in order to improve safety and reduce costs. The probabilistic design is exemplified on tubular joints of a jacket substructure. Loads and resistance are considered by their respective probability distributions. Time series of loads are generated by fully-coupled numerical simulation of the offshore wind turbine. ...
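
    The probabilistic core of such a design check can be illustrated with a simple load-resistance Monte Carlo. This is a hedged sketch with toy lognormal parameters, not the paper's fully coupled fatigue simulation.

    ```python
    # Minimal sketch: estimate a failure probability by sampling load and
    # resistance distributions (toy parameters for illustration only).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    load = rng.lognormal(mean=5.0, sigma=0.25, size=n)        # stress demand
    resistance = rng.lognormal(mean=5.8, sigma=0.20, size=n)  # fatigue capacity

    pf = np.mean(load > resistance)       # fraction of samples that fail
    print(f"estimated probability of failure: {pf:.2e}")
    ```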

  8. Undiscovered porphyry copper resources in the Urals—A probabilistic mineral resource assessment

    Science.gov (United States)

    Hammarstrom, Jane M.; Mihalasky, Mark J.; Ludington, Stephen; Phillips, Jeffrey; Berger, Byron R.; Denning, Paul; Dicken, Connie; Mars, John; Zientek, Michael L.; Herrington, Richard J.; Seltmann, Reimar

    2017-01-01

    A probabilistic mineral resource assessment of metal resources in undiscovered porphyry copper deposits of the Ural Mountains in Russia and Kazakhstan was done using a quantitative form of mineral resource assessment. Permissive tracts were delineated on the basis of mapped and inferred subsurface distributions of igneous rocks assigned to tectonic zones that include magmatic arcs where the occurrence of porphyry copper deposits within 1 km of the Earth's surface is possible. These permissive tracts outline four north-south trending volcano-plutonic belts in major structural zones of the Urals. From west to east, these include permissive lithologies for porphyry copper deposits associated with Paleozoic subduction-related island-arc complexes preserved in the Tagil and Magnitogorsk arcs, Paleozoic island-arc fragments and associated tonalite-granodiorite intrusions in the East Uralian zone, and Carboniferous continental-margin arcs developed on the Kazakh craton in the Transuralian zone. The tracts range from about 50,000 to 130,000 km² in area. The Urals host 8 known porphyry copper deposits with total identified resources of about 6.4 million metric tons of copper, at least 20 additional porphyry copper prospect areas, and numerous copper-bearing skarns and copper occurrences. Probabilistic estimates predict a mean of 22 undiscovered porphyry copper deposits within the four permissive tracts delineated in the Urals. Combining estimates with established grade and tonnage models predicts a mean of 82 million metric tons of undiscovered copper. Application of an economic filter suggests that about half of that amount could be economically recoverable based on assumed depth distributions, availability of infrastructure, recovery rates, current metals prices, and investment environment.
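
    A minimal sketch of the Monte Carlo step that combines a probabilistic number-of-deposits estimate with a grade-tonnage model; all distributions here are toy stand-ins, not the assessment's calibrated models.

    ```python
    # Minimal sketch: propagate deposit-count uncertainty through a
    # grade-tonnage model to a total-copper distribution.
    import numpy as np

    rng = np.random.default_rng(7)
    trials = 20_000
    # hypothetical uncertainty on the number of undiscovered deposits
    n_deposits = rng.poisson(lam=22, size=trials)

    total_cu = np.empty(trials)
    for i, n in enumerate(n_deposits):
        tonnage = rng.lognormal(mean=19.0, sigma=1.2, size=n)   # tonnes of ore
        grade = rng.lognormal(mean=-5.2, sigma=0.4, size=n)     # Cu mass fraction
        total_cu[i] = (tonnage * grade).sum()

    print(f"mean undiscovered Cu: {total_cu.mean() / 1e6:.0f} Mt, "
          f"P90-P10 range: {np.quantile(total_cu, 0.1) / 1e6:.0f}-"
          f"{np.quantile(total_cu, 0.9) / 1e6:.0f} Mt")
    ```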

  9. A Web-based Peer Assessment System for Assigning Student Scores in Cooperative Learning

    Directory of Open Access Journals (Sweden)

    Anon Sukstrienwong

    2017-11-01

    Working in groups has become increasingly important for developing students' skills. However, group work is more successful when peers cooperate and are involved in the assigned tasks. Several educators have pointed out the disadvantages of all peers receiving the same reward regardless of individual contribution. Some teachers also consider peer assessment to be time- and effort-consuming, because preparation and monitoring are needed. In order to overcome these problems, we have developed a web-based peer assessment system, referred to as the ‘Scoring by Peer Assessment System’ (SPAS), that allows teachers to set up the process of peer assessment in order to assign scores that reflect the contribution of each student. Moreover, the web-based application allows students to evaluate their peers regarding their individual contribution where cooperative learning and peer assessment are used. The paper describes the system design and the implementation of our peer assessment application.

  10. Probabilistic models and machine learning in structural bioinformatics

    DEFF Research Database (Denmark)

    Hamelryck, Thomas

    2009-01-01

    Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis...

  11. A monograph assignment as an integrative application of evidence-based medicine and pharmacoeconomic principles.

    Science.gov (United States)

    Law, Anandi V; Jackevicius, Cynthia A; Bounthavong, Mark

    2011-02-10

    To describe the development and assessment of monographs as an assignment to incorporate evidence-based medicine (EBM) and pharmacoeconomic principles into a third-year pharmacoeconomic course. Eight newly FDA-approved drugs were assigned to 16 teams of students, where each drug was assigned to 2 teams. Teams had to research their drug, write a professional monograph, deliver an oral presentation, and answer questions posed by faculty judges. One team was asked to present evidence for inclusion of the drug into a formulary, while another team presented evidence against inclusion. The teams' average score on the written report was 99.1%; on the oral presentation, 92.5%, and on the online quiz given at the end of the presentations, 77%. Monographs are a successful method of incorporating and integrating learning across different concepts, as well as increasing relevance of pharmacoeconomics in the PharmD curriculum.

  12. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    Science.gov (United States)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1987-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center for the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades, and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach has been developed to quantify the effects of the random uncertainties. The results of this study indicate that only the variations in geometry have significant effects.

  13. Feedback-based probabilistic category learning is selectively impaired in attention/hyperactivity deficit disorder.

    Science.gov (United States)

    Gabay, Yafit; Goldfarb, Liat

    2017-07-01

    Although Attention-Deficit Hyperactivity Disorder (ADHD) is closely linked to executive function deficits, it has recently been attributed to procedural learning impairments that are quite distinct from the former. These observations challenge the ability of the executive function framework alone to account for the diverse range of symptoms observed in ADHD. A recent neurocomputational model emphasizes the role of striatal dopamine (DA) in explaining ADHD's broad range of deficits, but the link between this model and procedural learning impairments remains unclear. Significantly, feedback-based procedural learning is hypothesized to be disrupted in ADHD because of the involvement of striatal DA in this type of learning. In order to test this assumption, we employed two variants of a probabilistic category learning task known from the neuropsychological literature. Feedback-based (FB) and paired associate-based (PA) probabilistic category learning were employed in a non-medicated sample of ADHD participants and neurotypical participants. In the FB task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA learning task, participants viewed the cue and its associated outcome simultaneously without making an overt response or receiving corrective feedback. In both tasks, participants were trained across 150 trials. Learning was assessed in a subsequent test without a presentation of the outcome or corrective feedback. Results revealed an interesting dissociation in which ADHD participants performed as well as control participants in the PA task, but were impaired compared with the controls in the FB task. The learning curve during FB training differed between the two groups. Taken together, these results suggest that the ability to incrementally learn by feedback is selectively disrupted in ADHD participants. These results are discussed in relation to both

  14. Probabilistic studies of accident sequences

    International Nuclear Information System (INIS)

    Villemeur, A.; Berger, J.P.

    1986-01-01

    For several years, Electricite de France has carried out probabilistic assessment of accident sequences for nuclear power plants. In the framework of this program many methods were developed. As the interest in these studies was increasing and as adapted methods were developed, Electricite de France has undertaken a probabilistic safety assessment of a nuclear power plant [fr

  15. Automatic segmentation of coronary angiograms based on fuzzy inferring and probabilistic tracking

    Directory of Open Access Journals (Sweden)

    Shoujun Zhou

    2010-08-01

    Background: Segmentation of the coronary angiogram is important in computer-assisted artery motion analysis or reconstruction of 3D vascular structures from a single-plane or biplane angiographic system. Developing fully automated and accurate vessel segmentation algorithms is highly challenging, especially when extracting vascular structures with large variations in image intensities and noise, as well as with variable cross-sections or vascular lesions. Methods: This paper presents a novel tracking method for automatic segmentation of the coronary artery tree in X-ray angiographic images, based on probabilistic vessel tracking and fuzzy structure pattern inferring. The method is composed of two main steps: preprocessing and tracking. In preprocessing, multiscale Gabor filtering and Hessian matrix analysis were used to enhance and extract vessel features from the original angiographic image, leading to a vessel feature map as well as a vessel direction map. In tracking, a seed point was first automatically detected by analyzing the vessel feature map. Subsequently, two operators [a probabilistic tracking operator (PTO) and a vessel structure pattern detector (SPD)] worked together, starting from the detected seed point, to extract vessel segments or branches one at a time. The local structure pattern was inferred by a multi-feature-based fuzzy inferring function employed in the SPD. The identified structure pattern, such as a crossing or bifurcation, was used to control the tracking process, for example, to keep tracking the current segment or start tracking a new one, depending on the detected pattern. Results: By appropriate integration of these advanced preprocessing and tracking steps, our tracking algorithm is able to extract both vessel axis lines and edge points, as well as measure the arterial diameters in various complicated cases. For example, it can walk across gaps along the longitudinal vessel direction, manage varying vessel
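
    The Hessian-based vessel enhancement step mentioned in the preprocessing can be sketched with a Frangi-style 2D vesselness measure; the parameters and the bright-vessel assumption below are illustrative choices, not the paper's exact filter.

    ```python
    # Minimal sketch: Frangi-style 2D vesselness from Hessian eigenvalues.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
        # second derivatives of the Gaussian-smoothed image
        Hxx = gaussian_filter(image, sigma, order=(0, 2))
        Hyy = gaussian_filter(image, sigma, order=(2, 0))
        Hxy = gaussian_filter(image, sigma, order=(1, 1))
        # eigenvalues of the 2x2 Hessian at every pixel
        tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
        l1, l2 = (Hxx + Hyy + tmp) / 2, (Hxx + Hyy - tmp) / 2
        # sort so that |l1| <= |l2|
        swap = np.abs(l1) > np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb = np.abs(l1) / (np.abs(l2) + 1e-12)        # blob-vs-line measure
        s = np.sqrt(l1 ** 2 + l2 ** 2)                # second-order structure
        v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
        return np.where(l2 < 0, v, 0.0)               # bright tubes on dark bg

    image = np.random.default_rng(3).random((64, 64))
    print(vesselness_2d(image).max())
    ```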

  16. Convex sets in probabilistic normed spaces

    International Nuclear Information System (INIS)

    Aghajani, Asadollah; Nourouzi, Kourosh

    2008-01-01

    In this paper we obtain some results on convexity in a probabilistic normed space. We also investigate the concept of CSN-closedness and CSN-compactness in a probabilistic normed space and generalize the corresponding results of normed spaces
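
    For context, here is a hedged sketch of the Šerstnev-style axioms that commonly define a probabilistic normed space, stated from the general literature; the paper may work with a variant such as the Alsina-Schweizer-Sklar definition.

    ```latex
    % \Delta^+ is the set of distance distribution functions, \varepsilon_0
    % the unit step at 0, and \tau a triangle function on \Delta^+; the
    % probabilistic norm \nu maps the vector space V into \Delta^+.
    \nu_p = \varepsilon_0 \;\Longleftrightarrow\; p = 0, \qquad
    \nu_{\lambda p}(t) = \nu_p\!\left(\frac{t}{|\lambda|}\right)\;\;(\lambda \neq 0), \qquad
    \nu_{p+q} \geq \tau(\nu_p, \nu_q).
    ```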

  17. Probabilistic finite elements

    Science.gov (United States)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.
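
    The core of the second-order perturbation approach can be stated compactly. Below is a hedged sketch using the standard formulas for a response u of random inputs b with mean \mu and covariance C; this is the generic method, not necessarily NASA's exact formulation.

    ```latex
    % Second-order estimate of the mean response and first-order estimate of
    % its variance, from a Taylor expansion of u(b) about the mean input \mu.
    \mathbb{E}[u] \approx u(\mu)
      + \tfrac{1}{2} \sum_{i,j} \left.\frac{\partial^2 u}{\partial b_i \partial b_j}\right|_{\mu} C_{ij},
    \qquad
    \operatorname{Var}[u] \approx \sum_{i,j}
      \left.\frac{\partial u}{\partial b_i}\right|_{\mu}
      \left.\frac{\partial u}{\partial b_j}\right|_{\mu} C_{ij}.
    ```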

  18. Development of Nuclear Safety Culture evaluation method for an operation team based on the probabilistic approach

    International Nuclear Information System (INIS)

    Han, Sang Min; Lee, Seung Min; Yim, Ho Bin; Seong, Poong Hyun

    2018-01-01

    Highlights: •We proposed a Probabilistic Safety Culture Healthiness Evaluation Method. •Positive relationship between the ‘success’ states of NSC and performance was shown. •The state probability profile showed a unique ratio regardless of the scenarios. •Cutset analysis provided not only root causes but also the latent causes of failures. •Pro-SCHEMe was found to be applicable to Korea NPPs. -- Abstract: The aim of this study is to propose a new quantitative evaluation method for Nuclear Safety Culture (NSC) in Nuclear Power Plant (NPP) operation teams based on the probabilistic approach. Various NSC evaluation methods have been developed, and the Korea NPP utility company has conducted the NSC assessment according to international practice. However, most of methods are conducted by interviews, observations, and the self-assessment. Consequently, the results are often qualitative, subjective, and mainly dependent on evaluator’s judgement, so the assessment results can be interpreted from different perspectives. To resolve limitations of present evaluation methods, the concept of Safety Culture Healthiness was suggested to produce quantitative results and provide faster evaluation process. This paper presents Probabilistic Safety Culture Healthiness Evaluation Method (Pro-SCHEMe) to generate quantitative inputs for Human Reliability Assessment (HRA) in Probabilistic Safety Assessment (PSA). Evaluation items which correspond to a basic event in PSA are derived in the first part of the paper through the literature survey; mostly from nuclear-related organizations such as the International Atomic Energy Agency (IAEA), the United States Nuclear Regulatory Commission (U.S.NRC), and the Institute of Nuclear Power Operations (INPO). Event trees (ETs) and fault trees (FTs) are devised to apply evaluation items to PSA based on the relationships among such items. The Modeling Guidelines are also suggested to classify and calculate NSC characteristics of

  19. Accuracy of Assignment of Atlantic Salmon (Salmo salar L.) to Rivers and Regions in Scotland and Northeast England Based on Single Nucleotide Polymorphism (SNP) Markers

    Science.gov (United States)

    Gilbey, John; Cauwelier, Eef; Coulson, Mark W.; Stradmeyer, Lee; Sampayo, James N.; Armstrong, Anja; Verspoor, Eric; Corrigan, Laura; Shelley, Jonathan; Middlemas, Stuart

    2016-01-01

    Understanding the habitat use patterns of migratory fish, such as Atlantic salmon (Salmo salar L.), and the natural and anthropogenic impacts on them, is aided by the ability to identify individuals to their stock of origin. Presented here are the results of an analysis of informative single nucleotide polymorphic (SNP) markers for detecting genetic structuring in Atlantic salmon in Scotland and NE England and their ability to allow accurate genetic stock identification. 3,787 fish from 147 sites covering 27 rivers were screened at 5,568 SNP markers. In order to identify a cost-effective subset of SNPs, they were ranked according to their ability to differentiate between fish from different rivers. A panel of 288 SNPs was used to examine both individual assignments and mixed stock fisheries and eighteen assignment units were defined. The results improved greatly on previously available methods and, for the first time, fish caught in the marine environment can be confidently assigned to geographically coherent units within Scotland and NE England, including individual rivers. As such, this SNP panel has the potential to aid understanding of the various influences acting upon Atlantic salmon on their marine migrations, be they natural environmental variations and/or anthropogenic impacts, such as mixed stock fisheries and interactions with marine power generation installations. PMID:27723810
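
    The underlying genetic stock identification step can be sketched as a Hardy-Weinberg likelihood assignment. The allele frequencies, genotypes, and unit names below are toy assumptions, not the paper's baseline data or exact method.

    ```python
    # Minimal sketch: assign an individual to a reporting unit by comparing
    # Hardy-Weinberg genotype likelihoods under each unit's allele frequencies.
    import numpy as np

    # freq[r, m] = frequency of the reference allele for SNP m in unit r
    freq = np.array([[0.9, 0.2, 0.7],
                     [0.3, 0.8, 0.4]])          # two units, three SNPs
    units = ["River A", "River B"]

    def assign(genotype, freq):
        # genotype[m] in {0, 1, 2} = copies of the reference allele (diploid)
        g = np.asarray(genotype)
        p = freq
        like = np.where(g == 2, p**2,
                        np.where(g == 1, 2 * p * (1 - p), (1 - p)**2))
        log_like = np.log(like).sum(axis=1)
        post = np.exp(log_like - log_like.max())
        return post / post.sum()               # posterior over units (flat prior)

    print(dict(zip(units, assign([2, 0, 1], freq).round(3))))
    ```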

  20. A probabilistic-based approach to monitoring tool wear state and assessing its effect on workpiece quality in nickel-based alloys

    Science.gov (United States)

    Akhavan Niaki, Farbod

    The objective of this research is first to investigate the applicability and advantage of statistical state estimation methods, over deterministic methods, for predicting tool wear in machining nickel-based superalloys, and second to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among those classes of materials known as hard-to-machine alloys. These materials exhibit a unique combination of maintaining their strength at high temperature and having high resistance to corrosion and creep. These unique characteristics make them ideal candidates for harsh environments such as the combustion chambers of gas turbines. However, the same characteristics that make nickel-based alloys suitable for aggressive conditions introduce difficulties when machining them. High strength and low thermal conductivity accelerate cutting tool wear and increase the possibility of in-process tool breakage. A blunt tool deteriorates the surface integrity and damages the quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure or leaving a poor roughness profile behind. As a consequence, the expensive superalloy would have to be scrapped. The current dominant solution in industry is to sacrifice the productivity rate by replacing the tool in the early stages of its life or to choose conservative cutting conditions in order to lower the wear rate and preserve workpiece quality. Thus, monitoring the state of the cutting tool and estimating its effects on part quality is a critical task for increasing productivity and profitability in machining superalloys. This work aims first to introduce a probabilistic-based framework for estimating tool wear in milling and turning of superalloys and second to study the detrimental effects of the functional state of the cutting tool, in terms of wear and wear rate, on part quality. In the milling operation, the
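
    One widely used statistical state estimator for this kind of problem is a bootstrap particle filter; the sketch below runs one on a toy linear wear-growth model. The model, noise levels, and sensor feature are illustrative assumptions, not the dissertation's.

    ```python
    # Minimal sketch: bootstrap particle filter tracking tool wear from a
    # noisy wear-correlated sensor feature (all parameters are toy values).
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, steps = 2000, 30
    true_wear, wear_rate = 0.0, 0.01

    particles = np.zeros(n_particles)
    for t in range(steps):
        true_wear += wear_rate + 0.002 * rng.standard_normal()
        z = true_wear + 0.01 * rng.standard_normal()        # noisy measurement
        # propagate particles through the wear-growth model
        particles += wear_rate + 0.002 * rng.standard_normal(n_particles)
        w = np.exp(-0.5 * ((z - particles) / 0.01) ** 2)    # likelihood weights
        w /= w.sum()
        particles = rng.choice(particles, size=n_particles, p=w)  # resample

    print(f"true wear {true_wear:.3f}, estimate {particles.mean():.3f}")
    ```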

  1. Probabilistic reasoning with graphical security models

    NARCIS (Netherlands)

    Kordy, Barbara; Pouly, Marc; Schweitzer, Patrick

    This work provides a computational framework for meaningful probabilistic evaluation of attack–defense scenarios involving dependent actions. We combine the graphical security modeling technique of attack–defense trees with probabilistic information expressed in terms of Bayesian networks. In order

  2. Probabilistic safety assessment for research reactors

    International Nuclear Information System (INIS)

    1986-12-01

    Increasing interest in using Probabilistic Safety Assessment (PSA) methods for research reactor safety is being observed in many countries throughout the world. This is mainly because of the ability of this approach to achieve safe and reliable operation of research reactors. There is also a need to assist developing countries to apply Probabilistic Safety Assessment to existing nuclear facilities, which are simpler and therefore less complicated to analyse than a large Nuclear Power Plant. It may be important, therefore, to develop PSA for research reactors. This might also help to better understand the safety characteristics of the reactor and to base any backfitting on a cost-benefit analysis which would ensure that only necessary changes are made. This document touches on all the key aspects of PSA but places greater emphasis on so-called systems analysis aspects rather than on in-plant or ex-plant consequences

  3. Bayesian probabilistic network approach for managing earthquake risks of cities

    DEFF Research Database (Denmark)

    Bayraktarli, Yahya; Faber, Michael

    2011-01-01

    This paper considers the application of Bayesian probabilistic networks (BPNs) to large-scale risk based decision making in regard to earthquake risks. A recently developed risk management framework is outlined which utilises Bayesian probabilistic modelling, generic indicator based risk models...... and a fourth module on the consequences of an earthquake. Each of these modules is integrated into a BPN. Special attention is given to aggregated risk, i.e. the risk contribution from assets at multiple locations in a city subjected to the same earthquake. The application of the methodology is illustrated...... on an example considering a portfolio of reinforced concrete structures in a city located close to the western part of the North Anatolian Fault in Turkey....

  4. Modelling probabilistic fatigue crack propagation rates for a mild structural steel

    Directory of Open Access Journals (Sweden)

    J.A.F.O. Correia

    2015-01-01

    A class of fatigue crack growth models based on elastic–plastic stress–strain histories at the crack tip region and local strain-life damage models has been proposed in the literature. Fatigue crack growth is regarded as a process of continuous crack initiations over successive elementary material blocks, which may be governed by smooth strain-life damage data. Some approaches account for the residual stresses developing at the crack tip in the actual crack driving force assessment, allowing mean stress and loading sequence effects to be modelled. An extension of the fatigue crack propagation model originally proposed by Noroozi et al. (2005), aimed at deriving probabilistic fatigue crack propagation data, is proposed, in particular concerning the derivation of probabilistic da/dN-ΔK-R fields. The elastic-plastic stresses at the vicinity of the crack tip, computed using simplified formulae, are compared with the stresses computed using elastic-plastic finite element analyses for the specimens considered in the experimental program used to derive the fatigue crack propagation data. Using probabilistic strain-life data available for the S355 structural mild steel, probabilistic crack propagation fields are generated for several stress ratios and compared with experimental fatigue crack propagation data. A satisfactory agreement between the predicted probabilistic fields and experimental data is observed.
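
    For orientation, the deterministic backbone of such da/dN-ΔK fields is a Paris-type relation; in a probabilistic field the coefficients effectively become percentile-dependent curves. A hedged sketch in standard notation, not the paper's exact UniGrow-style formulation:

    ```latex
    % C and m are material parameters fitted to crack-growth data; in a
    % probabilistic da/dN-\Delta K-R field they are replaced by percentile
    % curves derived from probabilistic strain-life data.
    \frac{da}{dN} = C\,(\Delta K)^{m}.
    ```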

  5. Seismic vulnerability assessment of chemical plants through probabilistic neural networks

    International Nuclear Information System (INIS)

    Aoki, T.; Ceravolo, R.; De Stefano, A.; Genovese, C.; Sabia, D.

    2002-01-01

    A chemical industrial plant represents a sensitive presence in a region and, in case of severe damage due to earthquake actions, its impact on social life and the environment can be devastating. From the structural point of view, chemical plants contain a number of recurrent elements, which are classifiable in a discrete set of typological families (towers, chimneys, cylindrical or spherical or prismatic tanks, pipes etc.). The final aim of this work is to outline a general procedure to be followed in order to assign a seismic vulnerability estimate to each element of the various typological families. In this paper, F.E. simulations were used to create a training set, which was then used to train a probabilistic neural system. A sample application concerned the seismic vulnerability of simple spherical tanks
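
    The classifier family named in the title can be sketched directly: a probabilistic neural network is essentially a Parzen-window class-density estimator. The features and classes below are random stand-ins for the F.E.-simulated training set.

    ```python
    # Minimal sketch: probabilistic neural network (PNN) classification via
    # Gaussian Parzen-window class densities (toy two-class, 2-feature data).
    import numpy as np

    rng = np.random.default_rng(5)
    X0 = rng.normal(0.0, 1.0, (30, 2))     # features of "low vulnerability"
    X1 = rng.normal(2.0, 1.0, (30, 2))     # features of "high vulnerability"

    def pnn_predict(x, classes, sigma=0.5):
        # average Gaussian kernel response of x to each class's exemplars
        scores = [np.mean(np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma**2)))
                  for X in classes]
        return int(np.argmax(scores))

    print(pnn_predict(np.array([1.8, 2.1]), [X0, X1]))   # -> class 1
    ```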

  6. A fuzzy-based reliability approach to evaluate basic events of fault tree analysis for nuclear power plant probabilistic safety assessment

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry

    2014-01-01

    Highlights: • We propose a fuzzy-based reliability approach to evaluate basic event reliabilities. • It implements the concepts of failure possibilities and fuzzy sets. • Experts evaluate basic event failure possibilities using qualitative words. • Triangular fuzzy numbers mathematically represent qualitative failure possibilities. • It is a very good alternative to the conventional reliability approach. - Abstract: Fault tree analysis has been widely utilized as a tool for nuclear power plant probabilistic safety assessment. This analysis can be completed only if all basic events of the system fault tree have their quantitative failure rates or failure probabilities. However, it is difficult to obtain those failure data due to insufficient data, changing environments, or new components. This study proposes a fuzzy-based reliability approach to evaluate basic events of system fault trees whose precise failure probability distributions of lifetime to failure are not available. It applies the concept of failure possibilities to qualitatively evaluate basic events and the concept of fuzzy sets to quantitatively represent the corresponding failure possibilities. To demonstrate the feasibility and the effectiveness of the proposed approach, the actual basic event failure probabilities collected from the operational experiences of the Davis–Besse design of the Babcock and Wilcox reactor protection system fault tree are used to benchmark the failure probabilities generated by the proposed approach. The results confirm that the proposed fuzzy-based reliability approach arises as a suitable alternative for the conventional probabilistic reliability approach when basic events do not have the corresponding quantitative historical failure data for determining their reliability characteristics. Hence, it overcomes the limitation of the conventional fault tree analysis for nuclear power plant probabilistic safety assessment
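
    The general pipeline can be sketched as: map expert words to triangular fuzzy numbers, defuzzify to a failure possibility score, then convert to a failure probability. The vocabulary, the centroid defuzzification, and the Onisawa-style conversion below are common choices in this literature, assumed here for illustration; the paper's exact mappings may differ.

    ```python
    # Minimal sketch: qualitative word -> triangular fuzzy number ->
    # possibility score -> failure probability (Onisawa-style conversion).
    # Hypothetical mapping from expert words to triangular numbers (a, b, c)
    # on a normalized failure-possibility scale.
    vocabulary = {"very low": (0.0, 0.10, 0.2),
                  "low":      (0.1, 0.25, 0.4),
                  "medium":   (0.3, 0.50, 0.7),
                  "high":     (0.6, 0.80, 1.0)}

    def centroid(tfn):
        a, b, c = tfn
        return (a + b + c) / 3          # centroid of a triangular fuzzy number

    fps = centroid(vocabulary["low"])   # fuzzy possibility score
    # convert possibility score to a failure probability
    k = ((1 - fps) / fps) ** (1 / 3) * 2.301
    prob = 10 ** (-k) if fps > 0 else 0.0
    print(f"failure probability ~ {prob:.2e}")
    ```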

  7. A probabilistic model-based soft sensor to monitor lactic acid bacteria fermentations

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2018-01-01

    A probabilistic soft sensor based on a mechanistic model was designed to monitor S. thermophilus fermentations, and validated with experimental lab-scale data. It considered uncertainties in the initial conditions, on-line measurements, and model parameters by performing Monte Carlo simulations...... the model parameters that were then used as input to the mechanistic model. The soft sensor predicted both the current state variables, as well as the future course of the fermentation, e.g. with a relative mean error of the biomass concentration of 8 %. This successful implementation of a process...... within the monitoring system. It predicted, therefore, the probability distributions of the unmeasured states, such as biomass, lactose, and lactic acid concentrations. To this end, a mechanistic model was developed first, and a statistical parameter estimation was performed in order to assess parameter...

  8. Consideration of aging in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Titina, B.; Cepin, M.

    2007-01-01

    Probabilistic safety assessment is a standardised tool for assessing the safety of nuclear power plants. It is a complement to the safety analyses. Standard probabilistic models of safety equipment assume a constant component failure rate. Ageing of systems, structures and components can theoretically be included in a new age-dependent probabilistic safety assessment, which generally causes the failure rate to be a function of age. New age-dependent probabilistic safety assessment models, which offer explicit calculation of the ageing effects, are developed. Several groups of components are considered, each requiring its own model: e.g. operating components and stand-by components. The developed component-level models are inserted into the models of the probabilistic safety assessment so that the ageing effects are evaluated for complete systems. The preliminary results show that the lack of data necessary for consideration of ageing leads to highly uncertain models and, consequently, results. (author)
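
    To make the age dependence concrete, here is a hedged sketch with a linear ageing model, one common choice in the literature; the paper's exact component models may differ.

    ```latex
    % \lambda_0: baseline failure rate, \alpha: ageing coefficient; F(t) is
    % the resulting age-dependent failure probability of the component.
    \lambda(t) = \lambda_0 + \alpha t, \qquad
    F(t) = 1 - \exp\!\left(-\int_0^t \lambda(s)\,ds\right)
         = 1 - \exp\!\left(-\lambda_0 t - \tfrac{1}{2}\alpha t^2\right).
    ```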

  9. Implications of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Cullingford, M.C.; Shah, S.M.; Gittus, J.H.

    1987-01-01

    Probabilistic risk assessment (PRA) is an analytical process that quantifies the likelihoods, consequences and associated uncertainties of the potential outcomes of postulated events. Starting with planned or normal operation, probabilistic risk assessment covers a wide range of potential accidents and considers the whole plant and the interactions of systems and human actions. Probabilistic risk assessment can be applied in safety decisions in design, licensing and operation of industrial facilities, particularly nuclear power plants. The proceedings include a review of PRA procedures, methods and technical issues in treating uncertainties, operating and licensing issues and future trends. Risk assessment for specific reactor types or components and specific risks (eg aircraft crashing onto a reactor) are used to illustrate the points raised. All 52 articles are indexed separately. (U.K.)

  10. Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.

    Science.gov (United States)

    Mørk, Søren; Holmes, Ian

    2012-03-01

    Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
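
    For readers unfamiliar with the model family being compared, below is a minimal two-state (coding/non-coding) HMM scored with the forward algorithm. The transition and emission values are toy numbers, not trained parameters, and the paper's actual structures are richer (e.g. codon-level states).

    ```python
    # Minimal sketch: log-likelihood of a DNA sequence under a two-state HMM
    # via the forward algorithm in log space.
    import numpy as np

    trans = np.log(np.array([[0.9, 0.1],      # noncoding -> {noncoding, coding}
                             [0.2, 0.8]]))    # coding    -> {noncoding, coding}
    # per-nucleotide emission probabilities for A, C, G, T
    emit = np.log(np.array([[0.25, 0.25, 0.25, 0.25],    # noncoding: uniform
                            [0.15, 0.35, 0.35, 0.15]]))  # coding: GC-rich
    start = np.log(np.array([0.5, 0.5]))

    def log_likelihood(seq):
        idx = ["ACGT".index(b) for b in seq]
        alpha = start + emit[:, idx[0]]
        for b in idx[1:]:
            # alpha_new[j] = emit[j, b] + logsumexp_i(alpha[i] + trans[i, j])
            alpha = emit[:, b] + np.logaddexp.reduce(alpha[:, None] + trans, axis=0)
        return np.logaddexp.reduce(alpha)

    print(log_likelihood("ATGGCGCTGACGTGA"))
    ```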

  11. Branching bisimulation congruence for probabilistic systems

    NARCIS (Netherlands)

    Trcka, N.; Georgievska, S.; Aldini, A.; Baier, C.

    2008-01-01

    The notion of branching bisimulation for the alternating model of probabilistic systems is not a congruence with respect to parallel composition. In this paper we first define another branching bisimulation in the more general model allowing consecutive probabilistic transitions, and we prove that

  12. Software for Probabilistic Risk Reduction

    Science.gov (United States)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.

  13. Probabilistic Reverse dOsimetry Estimating Exposure Distribution (PROcEED)

    Science.gov (United States)

    PROcEED is a web-based application used to conduct probabilistic reverse dosimetry calculations. The tool is used for estimating a distribution of exposure concentrations likely to have produced biomarker concentrations measured in a population.

  14. Probabilistic liquefaction hazard analysis at liquefied sites of 1956 Dunaharaszti earthquake, in Hungary

    Science.gov (United States)

    Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor

    2017-04-01

    Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA. Although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of the liquefaction hazard, namely by taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs in the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and the liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. Therefore we have included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, in Hungary. Its epicenter was located
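
    The key PBEE combination step can be written compactly. Below is a hedged sketch of the Kramer-Mayfield-style hazard integral in simplified notation:

    ```latex
    % \Delta\lambda_{a_i,m_j}: joint rate increment of (PGA, magnitude) pairs
    % recovered from the PSHA disaggregation; the double sum gives the mean
    % annual rate of liquefaction, whose reciprocal is the return period.
    \Lambda_{\mathrm{liq}} = \sum_{i=1}^{N_a}\sum_{j=1}^{N_m}
      P\big[\,\mathrm{liq} \mid a_i, m_j\,\big]\;\Delta\lambda_{a_i,m_j}.
    ```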

  15. Challenging behavior: Behavioral phenotypes of some genetic syndromes

    Directory of Open Access Journals (Sweden)

    Buha Nataša

    2014-01-01

    Challenging behavior in individuals with mental retardation (MR) is relatively frequent, and represents a significant obstacle to adaptive skills. The frequency of specific forms and manifestations of challenging behavior can depend on a variety of personal and environmental factors. There are several prominent theoretical models regarding the etiology of challenging behavior and psychopathology in persons with MR: behavioral, developmental, socio-cultural and biological. The biological model emphasizes physiological, biochemical and genetic factors as potential sources of challenging behavior. Progress in the fields of genetics and neuroscience has opened the opportunity to study and discover the neurobiological basis of phenotypic characteristics. Genetic syndromes associated with MR can be accompanied by a specific set of problems and disorders which constitutes their behavioral phenotype. The aim of this paper was to present challenging behaviors that manifest in the most frequently studied syndromes: Down syndrome, Fragile X syndrome, Williams syndrome, Prader-Willi syndrome and Angelman syndrome. The concept of behavioral phenotype implies a higher probability of manifesting specific developmental characteristics and specific behaviors in individuals with a certain genetic syndrome. Although the specific set of (possible) problems and disorders is distinctive for the described genetic syndromes, the connection between genetics and behavior should be viewed through a probabilistic dimension. The probabilistic concept takes into consideration the possibility of intra-syndrome variability in the occurrence, intensity and time of onset of behavioral characteristics, whereby the higher the variability, the lower the specificity of the genetic syndrome. Identifying the specific pattern of behavior can be most important for the process of early diagnosis and prognosis. In addition, having knowledge about behavioral phenotype can be a landmark in

  16. Probabilistic approach to mechanisms

    CERN Document Server

    Sandler, BZ

    1984-01-01

    This book discusses the application of probabilistics to the investigation of mechanical systems. The book shows, for example, how random function theory can be applied directly to the investigation of random processes in the deflection of cam profiles, pitch or gear teeth, pressure in pipes, etc. The author also deals with some other technical applications of probabilistic theory, including, amongst others, those relating to pneumatic and hydraulic mechanisms and roller bearings. Many of the aspects are illustrated by examples of applications of the techniques under discussion.

  17. An efficient randomized algorithm for contact-based NMR backbone resonance assignment.

    Science.gov (United States)

    Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal

    2006-01-15

    Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to

  18. Application of probabilistic risk assessment to advanced liquid metal reactor designs

    International Nuclear Information System (INIS)

    Carroll, W.P.; Temme, M.I.

    1987-01-01

    The United States Department of Energy (US DOE) has been active in the development and application of probabilistic risk assessment methods within its liquid metal breeder reactor development program for the past eleven years. These methods have been applied to comparative risk evaluations, the selection of design features for reactor concepts, the selection and emphasis of research and development programs, and regulatory discussions. The application of probabilistic methods to reactors which are in the conceptual design stage presents unique data base, modeling, and timing challenges, and excellent opportunities to improve the final design. We provide here the background and insights on the experience which the US DOE liquid metal breeder reactor program has had in its application of probabilistic methods to the Clinch River Breeder Reactor Plant project and the conceptual design stage of the Large Development Plant, and updates on this design. Plans for future applications of probabilistic risk assessment methods are also discussed. The US DOE is embarking on an innovative design program for liquid metal reactors. (author)

  19. Development of a Risk-Based Probabilistic Performance-Assessment Method for Long-Term Cover Systems - 2nd Edition

    International Nuclear Information System (INIS)

    HO, CLIFFORD K.; ARNOLD, BILL W.; COCHRAN, JOHN R.; TAIRA, RANDAL Y.

    2002-01-01

    A probabilistic, risk-based performance-assessment methodology has been developed to assist designers, regulators, and stakeholders in the selection, design, and monitoring of long-term covers for contaminated subsurface sites. This report describes the method, the software tools that were developed, and an example that illustrates the probabilistic performance-assessment method using a repository site in Monticello, Utah. At the Monticello site, a long-term cover system is being used to isolate long-lived uranium mill tailings from the biosphere. Computer models were developed to simulate relevant features, events, and processes that include water flux through the cover, source-term release, vadose-zone transport, saturated-zone transport, gas transport, and exposure pathways. The component models were then integrated into a total-system performance-assessment model, and uncertainty distributions of important input parameters were constructed and sampled in a stochastic Monte Carlo analysis. Multiple realizations were simulated using the integrated model to produce cumulative distribution functions of the performance metrics, which were used to assess cover performance for both present- and long-term future conditions. Performance metrics for this study included the water percolation reaching the uranium mill tailings, radon gas flux at the surface, groundwater concentrations, and dose. Results from uncertainty analyses, sensitivity analyses, and alternative design comparisons are presented for each of the performance metrics. The benefits from this methodology include a quantification of uncertainty, the identification of parameters most important to performance (to prioritize site characterization and monitoring activities), and the ability to compare alternative designs using probabilistic evaluations of performance (for cost savings)

  20. Probabilistic machine learning and artificial intelligence.

    Science.gov (United States)

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  2. Use and Communication of Probabilistic Forecasts.

    Science.gov (United States)

    Raftery, Adrian E

    2016-12-01

    Probabilistic forecasts are becoming more and more available. How should they be used and communicated? What are the obstacles to their use in practice? I review experience with five problems where probabilistic forecasting played an important role. This leads me to identify five types of potential users: Low Stakes Users, who don't need probabilistic forecasts; General Assessors, who need an overall idea of the uncertainty in the forecast; Change Assessors, who need to know if a change is out of line with expectations; Risk Avoiders, who wish to limit the risk of an adverse outcome; and Decision Theorists, who quantify their loss function and perform the decision-theoretic calculations. This suggests that it is important to interact with users and to consider their goals. The cognitive research tells us that calibration is important for trust in probability forecasts, and that it is important to match the verbal expression with the task. The cognitive load should be minimized, reducing the probabilistic forecast to a single percentile if appropriate. Probabilities of adverse events and percentiles of the predictive distribution of quantities of interest seem often to be the best way to summarize probabilistic forecasts. Formal decision theory has an important role, but in a limited range of applications.

  4. Probabilistic numerics and uncertainty in computations.

    Science.gov (United States)

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numerical algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
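
    The core idea (a numerical routine that returns an uncertainty alongside its result) can be illustrated with plain Monte Carlo integration, where the standard error quantifies the precision lost to finite sampling. This is a generic illustration under that reading, not one of the paper's Bayesian methods, and the integrand is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(7)

def integrate_with_uncertainty(f, n=10_000):
    """Monte Carlo estimate of the integral of f over [0, 1],
    returned together with its own standard error."""
    y = f(rng.random(n))
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

est, err = integrate_with_uncertainty(lambda x: np.exp(-x * x))
print(f"integral ~ {est:.4f} +/- {err:.4f}")  # true value is about 0.7468
```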

  5. Artificial intelligence applied to assigned merchandise location in retail sales systems

    Directory of Open Access Journals (Sweden)

    Cruz-Domínguez, O.

    2016-05-01

    Full Text Available This paper presents an option for improving the process of assigning storage locations for merchandise in a warehouse. A disadvantage of the policies in the literature is that merchandise is assigned a location only according to its sales volume and turnover. However, in some cases it is necessary to deal with other aspects, such as family group membership, the physical characteristics of the products, and their sales pattern, to design an integral policy. This paper presents an alternative to the aforementioned process using Flexsim®, artificial neural networks, and genetic algorithms.

  6. Probabilistic fatigue life prediction methodology for notched components based on simple smooth fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z. R.; Li, Z. X. [Dept. of Engineering Mechanics, Jiangsu Key Laboratory of Engineering Mechanics, Southeast University, Nanjing (China); Hu, X. T.; Xin, P. P.; Song, Y. D. [State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing (China)

    2017-01-15

    A methodology for probabilistic fatigue life prediction of notched components based on smooth specimens is presented. Weakest-link theory incorporating the Walker strain model has been utilized in this approach. The effects of stress ratio and stress gradient have been considered. The Weibull distribution and the median-rank estimator are used to describe fatigue statistics. Fatigue tests under different stress ratios were conducted on smooth and notched specimens of titanium alloy TC-1-1. The proposed procedures were checked against the test data of TC-1-1 notched specimens. Predictions at a 50% survival rate all fall within a factor-of-two scatter band of the test results.
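
    As a small illustration of the fatigue-statistics step (a Weibull distribution fitted with a median-rank estimator), the sketch below uses Bernard's common approximation of the median rank and a linearized two-parameter Weibull fit. The fatigue lives are invented, and this is not the paper's exact procedure.

```python
import numpy as np

# Invented smooth-specimen fatigue lives (cycles), sorted ascending
lives = np.sort(np.array([8.1e4, 1.2e5, 1.6e5, 2.1e5, 2.9e5, 4.0e5]))
n = len(lives)

# Median-rank estimate of failure probability (Bernard's approximation)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# Two-parameter Weibull fit by linearization:
# ln(-ln(1 - F)) = m * ln(N) - m * ln(eta)
m, c = np.polyfit(np.log(lives), np.log(-np.log(1.0 - F)), 1)
eta = np.exp(-c / m)  # characteristic life (scale parameter)

print(f"shape m = {m:.2f}, characteristic life eta = {eta:.3g} cycles")
print(f"median (50% survival) life = {eta * np.log(2.0) ** (1.0 / m):.3g} cycles")
```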

  7. Probabilistic linguistics

    NARCIS (Netherlands)

    Bod, R.; Heine, B.; Narrog, H.

    2010-01-01

    Probabilistic linguistics takes all linguistic evidence as positive evidence and lets statistics decide. It allows for accurate modelling of gradient phenomena in production and perception, and suggests that rule-like behaviour is no more than a side effect of maximizing probability. This chapter

  8. Probabilistic soft sets and dual probabilistic soft sets in decision making with positive and negative parameters

    Science.gov (United States)

    Fatimah, F.; Rosadi, D.; Hakim, R. B. F.

    2018-03-01

    In this paper, we motivate and introduce probabilistic soft sets and dual probabilistic soft sets for handling decision-making problems in the presence of positive and negative parameters. We propose several types of algorithms related to this problem. Our procedures are flexible and adaptable. An example on real data is also given.

  9. A probabilistic maintenance model for diesel engines

    Science.gov (United States)

    Pathirana, Shan; Abeygunawardane, Saranga Kumudu

    2018-02-01

    In this paper, a probabilistic maintenance model is developed for inspection-based preventive maintenance of diesel engines, based on the practical model concepts discussed in the literature. The developed model is solved using real data obtained from inspection and maintenance histories of diesel engines, together with experts' views. Reliability indices and costs were calculated for the present maintenance policy of diesel engines. A sensitivity analysis is conducted to observe the effect of inspection-based preventive maintenance on the life cycle cost of diesel engines.

  10. Why do probabilistic finite element analysis ?

    CERN Document Server

    Thacker, Ben H

    2008-01-01

    The intention of this book is to provide an introduction to performing probabilistic finite element analysis. As a short guideline, the objective is to inform the reader of the use, benefits and issues associated with performing probabilistic finite element analysis without excessive theory or mathematical detail.

  11. Error Discounting in Probabilistic Category Learning

    Science.gov (United States)

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    The assumption in some current theories of probabilistic categorization is that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report 2 probabilistic-categorization experiments in which we investigated error…

  12. Probabilistic programming in Python using PyMC3

    Directory of Open Access Journals (Sweden)

    John Salvatier

    2016-04-01

    Full Text Available Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other probabilistic programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
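
    A minimal sketch of the Python-native model specification the abstract highlights: a normal model with unknown mean and scale, sampled with NUTS (a Hamiltonian Monte Carlo variant). The data are invented, and keyword spellings vary across releases (older PyMC3 versions use `sd` where newer ones also accept `sigma`).

```python
import numpy as np
import pymc3 as pm

# Invented data: noisy observations of an unknown mean
data = np.random.normal(loc=1.0, scale=2.0, size=100)

with pm.Model() as model:
    # Priors written directly in Python, with no separate modeling language
    mu = pm.Normal("mu", mu=0.0, sd=10.0)
    sigma = pm.HalfNormal("sigma", sd=5.0)
    obs = pm.Normal("obs", mu=mu, sd=sigma, observed=data)
    # NUTS uses gradients computed by Theano via automatic differentiation
    trace = pm.sample(1000, tune=1000)

print(pm.summary(trace))
```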

  13. Exact and approximate probabilistic symbolic execution for nondeterministic programs

    DEFF Research Database (Denmark)

    Luckow, Kasper Søe; Păsăreanu, Corina S.; Dwyer, Matthew B.

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also ... Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  14. Probabilistic design of fibre concrete structures

    Science.gov (United States)

    Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.

    2017-09-01

    Advanced computer simulation is now a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing and shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, with or without prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties, such as the shape of the tensile softening branch, high toughness and ductility, are described in the paper. Since the variability of fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and for evaluation of structural performance, reliability and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by failure probability or by reliability index, respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty or randomness of the material properties obtained from material tests is accounted for in the random distributions. Furthermore, degradation of reinforced concrete materials, such as carbonation of concrete and corrosion of reinforcement, can be accounted for in order to analyze life-cycle structural performance and to enable prediction of structural reliability and safety over time. The results can serve as a rational basis for the design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented ...

  15. Probabilistic atlas-based segmentation of combined T1-weighted and DUTE MRI for calculation of head attenuation maps in integrated PET/MRI scanners.

    Science.gov (United States)

    Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian

    2014-01-01

    We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT and MRI-based attenuation corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for the DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
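
    For reference, the Dice similarity coefficient used above is DSC = 2|A∩B| / (|A| + |B|) for two segmentations A and B. A minimal sketch on invented binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Invented toy example: an MRI-derived mask versus a CT-derived mask
seg_mri = np.zeros((4, 4), dtype=int)
seg_mri[1:3, 1:3] = 1
seg_ct = np.zeros((4, 4), dtype=int)
seg_ct[1:3, 1:4] = 1
print(f"DSC = {dice(seg_mri, seg_ct):.3f}")  # 0.800 for this overlap
```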

  16. Metabolic level recognition of progesterone in dairy Holstein cows using probabilistic models

    Directory of Open Access Journals (Sweden)

    Ludmila N. Turino

    2014-05-01

    Full Text Available Administration of exogenous progesterone is widely used in hormonal protocols for estrous (re)synchronization of dairy cattle without regard to pharmacological issues for dose calculation. This happens because it is difficult to estimate the metabolic level of progesterone for each individual cow before administration. In the present contribution, progesterone pharmacokinetics has been determined in lactating Holstein cows with different milk production yields. A Bayesian approach has been implemented to build two probabilistic progesterone pharmacokinetic models for high- and low-yield dairy cows. Such models are based on a one-compartment Hill structure. Posterior probabilistic models have been structurally set up and parametric probability density functions have been empirically estimated. Moreover, a global sensitivity analysis has been done to determine the sensitivity profile of each model. Finally, the posterior probabilistic models adequately recognized each cow's progesterone metabolic level in a validation set when Kullback-Leibler based indices were used. These results suggest that milk yield may be a good index for estimating the pharmacokinetic level of progesterone.

  17. A hybrid path-oriented code assignment CDMA-based MAC protocol for underwater acoustic sensor networks.

    Science.gov (United States)

    Chen, Huifang; Fan, Guangyu; Xie, Lei; Cui, Jun-Hong

    2013-11-04

    Due to the characteristics of the underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than the number of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.

  19. Dynamic Fault Diagnosis for Nuclear Installation Using Probabilistic Approach

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Deswandri; Ahmad Abtokhi; Darlis

    2003-01-01

    Probabilistic-based fault diagnosis, which represents the relationship between the causes and consequences of events for troubleshooting, is developed in this research based on Bayesian networks. The contribution of on-line data from sensors, together with system/component reliability at the cause nodes, is expected to increase the belief level of the Bayesian networks. (author)
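
    A minimal illustration (with invented probabilities) of the cause-consequence updating described above: Bayes' rule raises the belief in a cause node once on-line sensor evidence is observed.

```python
# Invented two-node example: a cause (component fault) and a consequence (alarm)
p_fault = 0.02               # prior belief from component reliability data
p_alarm_given_fault = 0.95   # sensor fires when the fault is present
p_alarm_given_ok = 0.10      # false-alarm rate

# Bayes' rule: posterior belief in the fault after the alarm is observed
p_alarm = p_alarm_given_fault * p_fault + p_alarm_given_ok * (1.0 - p_fault)
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
print(f"belief in fault rises from {p_fault:.1%} to {p_fault_given_alarm:.1%}")
```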

  20. Global/local methods for probabilistic structural analysis

    Science.gov (United States)

    Millwater, H. R.; Wu, Y.-T.

    1993-04-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.

  1. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of a system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve the problem. This method biases the transition rates of the components by adding virtual components to them in series, increasing the occurrence probability of the rare event and hence decreasing the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
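
    The reweighting that keeps such an estimator unbiased can be sketched for a single component with an exponential failure time: sample under an inflated rate, then multiply each realization by the likelihood ratio of the true to the biased density. The rates and mission time below are invented, and the paper's virtual-component construction itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 1e-4    # true failure rate (per hour), invented
lam_bias = 1e-2    # inflated rate used for sampling
T = 100.0          # mission time of interest
n = 100_000

# Sample failure times under the biased (inflated) rate
t = rng.exponential(1.0 / lam_bias, size=n)
fails = t < T

# Likelihood ratio of the true exponential density to the biased one
w = (lam_true / lam_bias) * np.exp(-(lam_true - lam_bias) * t)

estimate = np.mean(fails * w)          # unbiased estimate of P(fail before T)
exact = 1.0 - np.exp(-lam_true * T)
print(f"IS estimate = {estimate:.3e}, exact = {exact:.3e}")
```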

  2. A search for symmetries in the genetic code

    International Nuclear Information System (INIS)

    Hornos, J.E.M.; Hornos, Y.M.M.

    1991-01-01

    A search for symmetries based on the classification theorem of Cartan for the compact simple Lie algebras is performed to verify to what extent the genetic code is a manifestation of some underlying symmetry. An exact continuous symmetry group cannot be found to reproduce the present, universal code. However, a unique approximate symmetry group is compatible with codon assignment for the fundamental amino acids and the termination codon. In order to obtain the actual genetic code, the symmetry must be slightly broken. (author). 27 refs, 3 figs, 6 tabs

  3. A probabilistic model of RNA conformational space

    DEFF Research Database (Denmark)

    Frellsen, Jes; Moltke, Ida; Thiim, Martin

    2009-01-01

    The increasing importance of non-coding RNA in biology and medicine has led to a growing interest in the problem of RNA 3-D structure prediction. As is the case for proteins, RNA 3-D structure prediction methods require two key ingredients: an accurate energy function and a conformational sampling ... the discrete nature of the fragments necessitates the use of carefully tuned, unphysical energy functions, and their non-probabilistic nature impairs unbiased sampling. We offer a solution to the sampling problem that removes these important limitations: a probabilistic model of RNA structure that allows ... conformations for 9 out of 10 test structures, solely using coarse-grained base-pairing information. In conclusion, the method provides a theoretical and practical solution for a major bottleneck on the way to routine prediction and simulation of RNA structure and dynamics in atomic detail.

  4. Probabilistic Modeling of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei

    Wind energy is one of several energy sources in the world and a rapidly growing industry in the energy sector. When placed in offshore or onshore locations, wind turbines are exposed to wave excitations, highly dynamic wind loads and/or the wakes from other wind turbines. Therefore, most components in a wind turbine experience highly dynamic and time-varying loads. These components may fail due to wear or fatigue, and this can lead to unplanned shutdown repairs that are very costly. The design by deterministic methods using safety factors is generally unable to account for the many uncertainties. Thus, a reliability assessment should be based on probabilistic methods where stochastic modeling of failures is performed. This thesis focuses on probabilistic models and the stochastic modeling of the fatigue life of the wind turbine drivetrain. Hence, two approaches are considered for stochastic modeling...

  5. Quantification of brain images using Korean standard templates and structural and cytoarchitectonic probabilistic maps

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Lee, Dong Soo; Kim, Yu Kyeong

    2004-01-01

    Population-based structural and functional maps of the brain provide effective tools for the analysis and interpretation of complex and individually variable brain data. Brain MRI and PET standard templates and statistical probabilistic maps based on image data of Korean normal volunteers have been developed, and probabilistic maps based on cytoarchitectonic data have been introduced. A quantification method using these data was developed for the objective assessment of regional intensity in brain images. Age-, gender- and ethnicity-specific anatomical and functional brain templates based on MR and PET images of Korean normal volunteers were developed. Korean structural probabilistic maps for 89 brain regions and cytoarchitectonic probabilistic maps for 13 Brodmann areas were transformed onto the standard templates. Brain FDG PET and SPGR MR images of normal volunteers were spatially normalized onto the template of each modality and gender. Regional uptake of radiotracers in PET and gray matter concentration in MR images were then quantified by averaging (or summing) regional intensities weighted using the probabilistic maps of brain regions. Regionally specific effects of aging on glucose metabolism in the cingulate cortex were also examined. The quantification program could generate results for a single spatially normalized image in 20 seconds. Glucose metabolism change in the cingulate gyrus was regionally specific: the ratios of glucose metabolism in the rostral anterior cingulate vs. posterior cingulate and the caudal anterior cingulate vs. posterior cingulate decreased significantly with age. 'Rostral anterior' / 'posterior' decreased by 3.1% per decade of age (p < 10^-11, r = 0.81) and 'caudal anterior' / 'posterior' decreased by 1.7% (p < 10^-8, r = 0.72). The ethnicity-specific standard templates, probabilistic maps and quantification program developed in this study will be useful for the analysis of brain images of Korean people since the difference ...
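
    The quantification step reduces, per region, to a probability-weighted average of voxel intensities. A minimal sketch with invented arrays (the probabilistic map holds voxel-wise membership probabilities for one region):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: a spatially normalized PET image and one region's map
pet = rng.uniform(0.0, 8.0, size=(64, 64, 64))
prob_map = np.zeros((64, 64, 64))
prob_map[20:30, 20:30, 20:30] = 0.8   # membership probabilities, 0..1

# Regional uptake: probability-weighted mean intensity over the region
regional_uptake = (pet * prob_map).sum() / prob_map.sum()
print(f"weighted regional uptake = {regional_uptake:.3f}")
```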

  7. Comparative Probabilistic Assessment of Occupational Pesticide Exposures Based on Regulatory Assessments

    Science.gov (United States)

    Pouzou, Jane G.; Cullen, Alison C.; Yost, Michael G.; Kissel, John C.; Fenske, Richard A.

    2018-01-01

    Implementation of probabilistic analyses in exposure assessment can provide valuable insight into the risks of those at the extremes of population distributions, including more vulnerable or sensitive subgroups. Incorporation of these analyses into current regulatory methods for occupational pesticide exposure is enabled by the exposure data sets and associated data currently used in the risk assessment approach of the Environmental Protection Agency (EPA). Monte Carlo simulations were performed on exposure measurements from the Agricultural Handler Exposure Database and the Pesticide Handler Exposure Database along with data from the Exposure Factors Handbook and other sources to calculate exposure rates for three different neurotoxic compounds (azinphos methyl, acetamiprid, emamectin benzoate) across four pesticide-handling scenarios. Probabilistic estimates of doses were compared with the no observable effect levels used in the EPA occupational risk assessments. Some percentage of workers were predicted to exceed the level of concern for all three compounds: 54% for azinphos methyl, 5% for acetamiprid, and 20% for emamectin benzoate. This finding has implications for pesticide risk assessment and offers an alternative procedure that may be more protective of those at the extremes of exposure than the current approach. PMID:29105804

  9. Probabilistic Simulation of Multi-Scale Composite Behavior

    Science.gov (United States)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented into the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatters predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and to simulate the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference indicates that nothing fundamental has been done since that time.

  10. Probabilistic Cue Combination: Less Is More

    Science.gov (United States)

    Yurovsky, Daniel; Boyer, Ty W.; Smith, Linda B.; Yu, Chen

    2013-01-01

    Learning about the structure of the world requires learning probabilistic relationships: rules in which cues do not predict outcomes with certainty. However, in some cases, the ability to track probabilistic relationships is a handicap, leading adults to perform non-normatively in prediction tasks. For example, in the "dilution effect,"…

  11. Detecting Plagiarism in MS Access Assignments

    Science.gov (United States)

    Singh, Anil

    2013-01-01

    Assurance of individual effort from students in computer-based assignments is a challenge. Due to digitization, students can easily use a copy of their friend's work and submit it as their own. Plagiarism in assignments puts students who cheat at par with those who work honestly and this compromises the learning evaluation process. Using a…

  12. A new repair criterion for steam generator tubes with axial cracks based on probabilistic integrity assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun-Su; Oh, Chang-Kyun [KEPCO Engineering and Construction Company, Inc., 269, Hyeoksin-ro, Gimcheon, Gyeongsangbuk-do 39660 (Korea, Republic of); Chang, Yoon-Suk, E-mail: yschang@khu.ac.kr [Department of Nuclear Engineering, College of Engineering, Kyung Hee University, 1732 Deokyoungdaero, Giheung, Yongin, Gyeonggi 446-701 (Korea, Republic of)

    2017-03-15

    Highlights:
    • Probabilistic assessment was performed for axially cracked steam generator tubes.
    • The threshold crack sizes were determined based on burst pressures of the tubes.
    • A new repair criterion was suggested as a function of operation time.

    Abstract: The steam generator is one of the major components in a nuclear power plant, and it consists of thousands of thin-walled tubes. The operating record of steam generators has indicated that axial cracks due to stress corrosion have frequently been detected in the tubes. Since the tubes are closely related to both the safety and the efficiency of a nuclear power plant, the establishment of an appropriate repair criterion for defective tubes, and its application, are necessary. The objective of this paper is to develop an accurate repair criterion for tubes with axial cracks. To do this, a thorough review is performed on the key parameters affecting tube integrity, and then a probabilistic integrity assessment is carried out considering the various uncertainties. In addition, the critical crack sizes are determined by comparing the burst pressure of the cracked tube with the required performance criterion. Based on this result, the new repair criterion for axially cracked tubes is defined from a reasonably conservative value such that the required performance criterion, in terms of burst pressure, can be met during the next operating period.

  13. Probabilistic modeling of children's handwriting

    Science.gov (United States)

    Puri, Mukta; Srihari, Sargur N.; Hanson, Lisa

    2013-12-01

    There is little work done in the analysis of children's handwriting, which can be useful in developing automatic evaluation systems and in quantifying handwriting individuality. We consider the statistical analysis of children's handwriting in early grades. Samples of handwriting of children in Grades 2-4 who were taught the Zaner-Bloser style were considered. The commonly occurring word "and", written in both cursive and hand-print styles, was extracted from extended writing. The samples were assigned feature values by human examiners using a truthing tool. The human examiners looked at how the children constructed letter formations in their writing, looking for similarities and differences from the instructions taught in the handwriting copy book. These similarities and differences were measured using a feature space distance measure. Results indicate that the handwriting develops towards more conformity with the class characteristics of the Zaner-Bloser copybook which, with practice, is the expected result. Bayesian networks were learnt from the data to enable answering various probabilistic queries, such as determining the students who may continue to produce letter formations as taught during lessons in school, determining the students who will develop different forms and/or variations of those letter formations, and the number of different types of letter formations.

  14. WebAssign: Assessing Your Students' Understanding Continuously

    Science.gov (United States)

    Risley, John S.

    1999-11-01

    Motivating students to learn is a constant challenge for faculty. Technology can play a significant role. One such solution is WebAssign — a web-based homework system that offers new teaching and learning opportunities for educators and their students. WebAssign delivers, collects, grades, and records customized homework assignments over the Internet. Students get immediate feedback with credit and instructors can implement "Just-in-Time" teaching. In this talk, I will describe how assignments can be generated with different numerical values for each question, giving each student a unique problem to solve. This feature encourages independent thinking with the benefit of collaborative learning. Example assignments taken from textbook questions and intellectually engaging Java applet simulations will be shown. Studies and first-hand experience on the educational impact of using WebAssign will also be discussed.

  15. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, for the 'development of a damage evaluation method for structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a super parallel computation system that couples a material strength theory, based on microscopic fracture mechanics for latent cracks, with a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, the basic terms of its formulas, and the parallel computation program methods related to the principal elements in the basic design of the computational mechanics program. (author)

  17. Probabilistic escalation modelling

    Energy Technology Data Exchange (ETDEWEB)

    Korneliussen, G.; Eknes, M.L.; Haugen, K.; Selmer-Olsen, S. [Det Norske Veritas, Oslo (Norway)

    1997-12-31

    This paper describes how structural reliability methods may successfully be applied within quantitative risk assessment (QRA) as an alternative to traditional event tree analysis. The emphasis is on fire escalation in hydrocarbon production and processing facilities. This choice was made due to potential improvements over current QRA practice associated with both the probabilistic approach and more detailed modelling of the dynamics of escalating events. The physical phenomena important for the events of interest are explicitly modelled as functions of time. Uncertainties are represented through probability distributions. The uncertainty modelling enables the analysis to be simple when possible and detailed when necessary. The methodology features several advantages compared with traditional risk calculations based on event trees. (Author)

  18. Probabilistic Electricity Price Forecasting Models by Aggregation of Competitive Predictors

    Directory of Open Access Journals (Sweden)

    Claudio Monteiro

    2018-04-01

    Full Text Available This article presents original probabilistic price forecasting meta-models (PPFMCP models), by aggregation of competitive predictors, for day-ahead hourly probabilistic price forecasting. The best twenty predictors of the EEM2016 EPF competition are used to create ensembles of hourly spot price forecasts. For each hour, the parameter values of the probability density function (PDF) of a Beta distribution for the output variable (hourly price) can be directly obtained from the expected and variance values associated to the ensemble for such hour, using three aggregation strategies of predictor forecasts corresponding to three PPFMCP models. A Reliability Indicator (RI) and a Loss function Indicator (LI) are also introduced to give a measure of uncertainty of probabilistic price forecasts. The three PPFMCP models were satisfactorily applied to the real-world case study of the Iberian Electricity Market (MIBEL). Results from PPFMCP models showed that PPFMCP model 2, which uses aggregation by weight values according to daily ranks of predictors, was the best probabilistic meta-model from a point of view of mean absolute errors, as well as of RI and LI. PPFMCP model 1, which uses the averaging of predictor forecasts, was the second best meta-model. PPFMCP models allow evaluations of risk decisions based on the price to be made.
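
    Recovering the Beta PDF parameters from the ensemble's expected value m and variance v (after scaling prices to [0, 1]) follows from the method of moments: alpha = m(m(1-m)/v - 1) and beta = (1-m)(m(1-m)/v - 1). The ensemble values and price bounds below are invented, and the article's three aggregation strategies are not reproduced.

```python
import numpy as np

# Invented ensemble of hourly spot price forecasts (EUR/MWh)
prices = np.array([42.0, 45.5, 39.8, 44.1, 41.3, 46.0])
p_min, p_max = 0.0, 180.0               # assumed market price bounds
x = (prices - p_min) / (p_max - p_min)  # scale to [0, 1]

m, v = x.mean(), x.var(ddof=1)          # ensemble mean and variance

# Method-of-moments Beta parameters (requires v < m * (1 - m))
common = m * (1.0 - m) / v - 1.0
alpha, beta = m * common, (1.0 - m) * common
print(f"alpha = {alpha:.1f}, beta = {beta:.1f}")
```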

  19. WDM Optical Access Network for Full-Duplex and Reconfigurable Capacity Assignment Based on PolMUX Technique

    Directory of Open Access Journals (Sweden)

    Jose Mora

    2014-12-01

    Full Text Available We present a novel bidirectional WDM-based optical access network featuring reconfigurable capacity assignment. The architecture relies on the PolMUX technique allowing a compact, flexible, and bandwidth-efficient router in addition to source-free ONUs and color-less ONUs for cost/complexity minimization. Moreover, the centralized architecture contemplates remote management and control of polarization. High-quality transmission of digital signals is demonstrated through different routing scenarios where all channels are dynamically assigned in both downlink and uplink directions.

  20. Probabilistic estimation of residential air exchange rates for ...

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AERs based on region-specific inputs were compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of

  1. Probabilistic Design

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Burcharth, H. F.

    This chapter describes how partial safety factors can be used in design of vertical wall breakwaters and an example of a code format is presented. The partial safety factors are calibrated on a probabilistic basis. The code calibration process used to calibrate some of the partial safety factors...

  2. Automation of block assignment planning using a diagram-based scenario modeling method

    Directory of Open Access Journals (Sweden)

    In Hyuck Hwang

    2014-03-01

    Full Text Available Most shipbuilding scheduling research so far has focused on the load level on the dock plan. This is because the dock is the least extendable resource in shipyards, and its overloading is difficult to resolve. However, once dock scheduling is completed, making a plan that makes the best use of the rest of the resources in the shipyard to minimize any additional cost is also important. Block assignment planning is one of the midterm planning tasks; it assigns a block to the facility (factory/shop or surface plate) that will actually manufacture the block according to the block characteristics and current situation of the facility. It is one of the most heavily loaded midterm planning tasks and is carried out manually by experienced workers. In this study, a method of representing the block assignment rules using a diagram was suggested through analysis of the existing manual process. A block allocation program was developed which automated the block assignment process according to the rules represented by the diagram. The planning scenario was validated through a case study that compared the manual assignment and two automated block assignment results.

  4. Probabilistic Learning by Rodent Grid Cells.

    Science.gov (United States)

    Cheung, Allen

    2016-10-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition, but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, despite this being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed, such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population

  5. Probabilistic generation assessment system of renewable energy in Korea

    Directory of Open Access Journals (Sweden)

    Yeonchan Lee

    2016-01-01

    Full Text Available This paper proposes a probabilistic generation assessment system for the introduction of renewable energy generators, focusing on wind turbine and solar cell generators. The proposed method uses an assessment model based on a probabilistic model considering the uncertainty of resources (wind speed and solar radiation). Equivalent generation functions of the wind and solar farms are evaluated. The equivalent generation curves of wind and solar farms are estimated by regression analysis, applying an ordinary least-squares fit to recent actual generation data. The proposed model is applied to the Korean renewable generation system, covering 41 wind farms in 8 groups and around 600 solar farms in 9 groups in South Korea.
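
    The regression step (fitting an equivalent generation curve to historical output by ordinary least squares) can be sketched as a simple polynomial fit of farm output against wind speed. The data and the cubic form below are invented placeholders, not the paper's actual curve.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented history: hourly wind speed (m/s) and farm output (MW)
speed = rng.uniform(3.0, 15.0, size=500)
output = np.clip(0.08 * speed**3, 0.0, 20.0) + rng.normal(0.0, 0.5, size=500)

# Ordinary least-squares fit of an equivalent generation curve
coeffs = np.polyfit(speed, output, deg=3)
curve = np.poly1d(coeffs)

print(f"predicted output at 10 m/s: {curve(10.0):.2f} MW")
```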

  6. Development of Probabilistic Structural Analysis Integrated with Manufacturing Processes

    Science.gov (United States)

    Pai, Shantaram S.; Nagpal, Vinod K.

    2007-01-01

    An effort has been initiated to integrate manufacturing process simulations with probabilistic structural analyses in order to capture the important impacts of manufacturing uncertainties on component stress levels and life. Two physics-based manufacturing process models (one for powdered metal forging and the other for annular deformation resistance welding) have been linked to the NESSUS structural analysis code. This paper describes the methodology developed to perform this integration including several examples. Although this effort is still underway, particularly for full integration of a probabilistic analysis, the progress to date has been encouraging and a software interface that implements the methodology has been developed. The purpose of this paper is to report this preliminary development.

  7. A heuristics-based solution to the continuous berth allocation and crane assignment problem

    Directory of Open Access Journals (Sweden)

    Mohammad Hamdy Elwany

    2013-12-01

    Full Text Available Effective utilization plans for various resources at a container terminal are essential to reducing the turnaround time of cargo vessels. Among the scarcest resources are the berth and its associated cranes. Thus, two important optimization problems arise, which are the berth allocation and quay crane assignment problems. The berth allocation problem deals with the generation of a berth plan, which determines where and when a ship has to berth alongside the quay. The quay crane assignment problem addresses the problem of determining how many and which quay crane(s) will serve each vessel. In this paper, an integrated heuristics-based solution methodology is proposed that tackles both problems simultaneously. The preliminary experimental results show that the proposed approach yields high quality solutions to such an NP-hard problem in a reasonable computational time, suggesting its suitability for practical use.

  8. Plasticity in probabilistic reaction norms for maturation in a salmonid fish.

    Science.gov (United States)

    Morita, Kentaro; Tsuboi, Jun-ichi; Nagasawa, Toru

    2009-10-23

    The relationship between body size and the probability of maturing, often referred to as the probabilistic maturation reaction norm (PMRN), has been increasingly used to infer genetic variation in maturation schedule. Despite this trend, few studies have directly evaluated plasticity in the PMRN. A transplant experiment using white-spotted charr demonstrated that the PMRN for precocious males exhibited plasticity. A smaller threshold size at maturity occurred in charr inhabiting narrow streams where more refuges are probably available for small charr, which in turn might enhance the reproductive success of sneaker precocious males. Our findings suggested that plastic effects should clearly be included in investigations of variation in PMRNs.

  9. Mining Staff Assignment Rules from Event-Based Data

    NARCIS (Netherlands)

    Ly, Linh Thao; Rinderle, Stefanie; Dadam, Peter; Reichert, Manfred; Bussler, Christoph J.; Haller, Armin

    2006-01-01

    Process mining offers methods and techniques for capturing process behaviour from log data of past process executions. Although many promising approaches on mining the control flow have been published, no attempt has been made to mine the staff assignment situation of business processes. In this
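
    As a minimal illustration of the idea (not the authors' method, which the truncated abstract does not describe), staff assignment rules of the form "activity X is performed by role Y" can be surfaced by a frequency pass over an event log:

```python
from collections import Counter, defaultdict

# Toy event log: (case id, activity, performer, role). A frequency-based pass
# over such a log can surface candidate staff assignment rules.
log = [
    (1, "approve_loan", "alice", "manager"),
    (1, "check_credit", "bob", "clerk"),
    (2, "approve_loan", "carol", "manager"),
    (2, "check_credit", "bob", "clerk"),
    (3, "approve_loan", "alice", "manager"),
    (3, "check_credit", "dave", "clerk"),
]

role_counts = defaultdict(Counter)
for _case, activity, _who, role in log:
    role_counts[activity][role] += 1

for activity, counts in role_counts.items():
    role, n = counts.most_common(1)[0]
    support = n / sum(counts.values())
    print(f"rule: '{activity}' is performed by role '{role}' (support {support:.0%})")
```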

  10. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Science.gov (United States)

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
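
    A stripped-down version of this calibration loop, with a one-parameter probabilistic cellular automaton and a minimal genetic algorithm using truncation selection and Gaussian mutation (the CA rule, parameters, and fitness are illustrative, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ca(p_spread, steps=20, size=50):
    """1-D probabilistic CA: an active cell activates each neighbour w.p. p_spread."""
    grid = np.zeros(size, dtype=bool)
    grid[size // 2] = True
    for _ in range(steps):
        nbr = np.roll(grid, 1) | np.roll(grid, -1)
        grid = grid | (nbr & (rng.random(size) < p_spread))
    return grid.sum()

target = simulate_ca(0.35)            # pretend this is the observed activity count

def fitness(p):
    return -abs(simulate_ca(p) - target)

# Minimal GA: truncation selection + Gaussian mutation on the rule parameter.
pop = rng.uniform(0, 1, 30)
for gen in range(40):
    fit = np.array([fitness(p) for p in pop])
    winners = pop[np.argsort(fit)[-10:]]                    # keep the best third
    children = rng.choice(winners, 20) + rng.normal(0, 0.05, 20)
    pop = np.concatenate([winners, np.clip(children, 0, 1)])

best = max(pop, key=fitness)
print(f"calibrated p_spread = {best:.2f} (true value 0.35)")
```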

  11. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs), which are well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods for optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained, in which the finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.
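
    The dynamics being controlled can be sketched as follows: in a PBN, each gene draws one of its candidate Boolean update functions at random at every step. The toy two-gene network and control input below are illustrative assumptions (the optimization step itself is not shown):

```python
import random

# A two-gene probabilistic Boolean network: each gene picks one of its candidate
# Boolean update functions at random each step (illustrative network, not from
# the paper). State is (x1, x2); u is a binary control input acting on gene 1.
F1 = [(lambda x1, x2, u: x2 or u, 0.7),      # chosen with probability 0.7
      (lambda x1, x2, u: x1, 0.3)]
F2 = [(lambda x1, x2, u: x1 and x2, 0.6),
      (lambda x1, x2, u: not x1, 0.4)]

def step(state, u):
    x1, x2 = state
    f1 = random.choices([f for f, _ in F1], weights=[w for _, w in F1])[0]
    f2 = random.choices([f for f, _ in F2], weights=[w for _, w in F2])[0]
    return int(f1(x1, x2, u)), int(f2(x1, x2, u))

random.seed(0)
state = (0, 1)
for t in range(5):
    state = step(state, u=1)   # constant control here; an optimal controller
    print(t, state)            # would choose u_t to minimise an expected cost
```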

  12. Probabilistic Reversible Automata and Quantum Automata

    OpenAIRE

    Golovkins, Marats; Kravtsev, Maksim

    2002-01-01

    To study relationship between quantum finite automata and probabilistic finite automata, we introduce a notion of probabilistic reversible automata (PRA, or doubly stochastic automata). We find that there is a strong relationship between different possible models of PRA and corresponding models of quantum finite automata. We also propose a classification of reversible finite 1-way automata.
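
    Concretely, a PRA's transition matrices are doubly stochastic: every row and every column sums to one. A quick check, with an example matrix chosen purely for illustration:

```python
import numpy as np

# A probabilistic reversible automaton's transition matrix (per input symbol)
# is doubly stochastic: all entries non-negative, rows and columns sum to 1.
M = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])

def is_doubly_stochastic(M, tol=1e-9):
    return (M >= 0).all() and np.allclose(M.sum(axis=0), 1, atol=tol) \
           and np.allclose(M.sum(axis=1), 1, atol=tol)

print(is_doubly_stochastic(M))   # True
```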

  13. The Stag Hunt Game: An Example of an Excel-Based Probabilistic Game

    Science.gov (United States)

    Bridge, Dave

    2016-01-01

    With so many role-playing simulations already in the political science education literature, the recent repeated calls for new games are both timely and appropriate. This article answers and extends those calls by advocating the creation of probabilistic games using Microsoft Excel. I introduce the example of the Stag Hunt Game--a short, effective,…
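
    The probabilistic core of such a game is a small expected-payoff calculation. Below is one common Stag Hunt parameterization (the article's Excel payoffs may differ), showing how the best response flips with the probability that the opponent hunts stag:

```python
# Stag Hunt payoffs: both hunt stag -> 4 each; hunt hare -> 2 regardless of the
# opponent; hunting stag alone -> 0. Expected payoff of each action against an
# opponent who hunts stag with probability p.
STAG_STAG, STAG_HARE, HARE = 4.0, 0.0, 2.0

def expected_payoffs(p):
    e_stag = p * STAG_STAG + (1 - p) * STAG_HARE
    e_hare = HARE
    return e_stag, e_hare

for p in (0.3, 0.5, 0.8):
    e_stag, e_hare = expected_payoffs(p)
    best = "stag" if e_stag > e_hare else "hare"
    print(f"P(opponent hunts stag) = {p}: E[stag] = {e_stag}, E[hare] = {e_hare} -> {best}")
```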

  14. Probabilistic approach to manipulator kinematics and dynamics

    International Nuclear Information System (INIS)

    Rao, S.S.; Bhatti, P.K.

    2001-01-01

    A high performance, high speed robotic arm must be able to manipulate objects with a high degree of accuracy and repeatability. As with any other physical system, there are a number of factors causing uncertainties in the behavior of a robotic manipulator. These factors include manufacturing and assembling tolerances, and errors in the joint actuators and controllers. In order to study the effect of these uncertainties on the robotic end-effector and to obtain a better insight into the manipulator behavior, the manipulator kinematics and dynamics are modeled using a probabilistic approach. Based on the probabilistic model, kinematic and dynamic performance criteria are defined to provide measures of the behavior of the robotic end-effector. Techniques are presented to compute the kinematic and dynamic reliabilities of the manipulator. The effects of tolerances associated with the various manipulator parameters on the reliabilities are studied. Numerical examples are presented to illustrate the procedures.
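
    A kinematic reliability of the kind described can be estimated by Monte Carlo sampling of the uncertain parameters; the two-link planar arm, tolerances, and accuracy requirement below are illustrative assumptions, not the paper's manipulator model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Planar 2-link arm with uncertain link lengths (manufacturing tolerance) and
# joint angles (actuator error); illustrative values.
L1 = rng.normal(0.50, 0.001, n)      # m
L2 = rng.normal(0.40, 0.001, n)
th1 = np.deg2rad(30) + rng.normal(0, np.deg2rad(0.2), n)
th2 = np.deg2rad(45) + rng.normal(0, np.deg2rad(0.2), n)

# Forward kinematics of the end-effector, sampled vs. nominal.
x = L1 * np.cos(th1) + L2 * np.cos(th1 + th2)
y = L1 * np.sin(th1) + L2 * np.sin(th1 + th2)
x0 = 0.50 * np.cos(np.deg2rad(30)) + 0.40 * np.cos(np.deg2rad(75))
y0 = 0.50 * np.sin(np.deg2rad(30)) + 0.40 * np.sin(np.deg2rad(75))

err = np.hypot(x - x0, y - y0)
tol = 0.003                          # 3 mm accuracy requirement
print(f"kinematic reliability P(error < {tol*1000:.0f} mm) = {np.mean(err < tol):.4f}")
```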

  15. Incorporating breeding abundance into spatial assignments on continuous surfaces.

    Science.gov (United States)

    Rushing, Clark S; Marra, Peter P; Studds, Colin E

    2017-06-01

    Determining the geographic connections between breeding and nonbreeding populations, termed migratory connectivity, is critical to advancing our understanding of the ecology and conservation of migratory species. Assignment models based on stable isotopes historically have been an important tool for studying migratory connectivity of small-bodied species, but the low resolution of these assignments has generated interest in combining isotopes with other sources of information. Abundance is one of the most appealing data sources to include in isotope-based assignments, but there are currently no statistical methods or guidelines for optimizing the contributions of stable isotopes and abundance when inferring migratory connectivity. Using known-origin stable-hydrogen isotope samples of six Neotropical migratory bird species, we rigorously assessed the performance of assignment models that differentially weight the contributions of the isotope and abundance data. For two species with adequate sample sizes, we used Pareto optimality to determine the set of models that simultaneously minimized both assignment error rate and assignment area. We then assessed the ability of the top models from these two species to improve assignments of the remaining four species compared to assignments based on isotopes alone. We show that the increased precision of models that include abundance is often offset by a large increase in assignment error. However, models that optimally weight the abundance data relative to the isotope data can result in higher precision and, in some cases, lower error than models based on isotopes alone. The top models depended on the distribution of relative breeding abundance, with patchier distributions requiring stronger downweighting of abundance, and we present general guidelines for future studies. These results confirm that breeding abundance can be an important source of information for studies investigating broad-scale movements of
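
    The weighting scheme evaluated here can be sketched as a posterior proportional to the isotope likelihood times abundance raised to a weight w, where w = 0 ignores abundance and w = 1 uses it fully (the 1-D grid and all numbers below are hypothetical):

```python
import numpy as np

# Sketch of an abundance-weighted isotope assignment over a toy 1-D "surface"
# of breeding cells. Posterior ∝ isotope likelihood × abundance^w, where w is
# the knob a study would tune (e.g. via Pareto optimality).
predicted_d2h = np.array([-120.0, -100.0, -80.0, -60.0])   # cell-wise isotope predictions
abundance = np.array([0.05, 0.60, 0.30, 0.05])             # relative breeding abundance
sigma = 12.0                                               # isotope residual SD

def assign(sample_d2h, w):
    like = np.exp(-0.5 * ((sample_d2h - predicted_d2h) / sigma) ** 2)
    post = like * abundance ** w
    return post / post.sum()

for w in (0.0, 0.5, 1.0):
    print(f"w = {w}: posterior = {np.round(assign(-85.0, w), 3)}")
```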

  16. On the Probabilistic Characterization of Robustness and Resilience

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Qin, J.; Miraglia, Simona

    2017-01-01

    Over the last decade significant research efforts have been devoted to the probabilistic modeling and analysis of system characteristics. In particular, performance characteristics of systems subjected to random disturbances, such as robustness and resilience, have been the focus of these efforts... in the modeling of robustness and resilience in the research areas of natural disaster risk management, socio-ecological systems and social systems, and we propose a generic decision analysis framework for the modeling and analysis of systems across application areas. The proposed framework extends the concept... of direct and indirect consequences and associated risks in probabilistic systems modeling formulated by the Joint Committee on Structural Safety (JCSS) to facilitate the modeling and analysis of resilience in addition to robustness and vulnerability. Moreover, based on recent insights in the modeling...

  17. On the Determinations of Class-Based Storage Assignments in AS/RS having two I/O Locations

    NARCIS (Netherlands)

    Ashayeri, J.; Heuts, R.M.J.; Beekhof, M.; Wilhelm, M.R.

    2001-01-01

    This paper presents the use and extension of a geometrical-based algorithmic approach for determining the expected S/R machine cycle times, and therefore warehouse throughput, for class-based storage assignment layouts in an AS/RS. The approach was designed for the purpose of solving a practical

  18. Probabilistic uniformities of uniform spaces

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Lopez, J.; Romaguera, S.; Sanchis, M.

    2017-07-01

    The theory of metric spaces in the fuzzy context has proved to be an interesting area of study, not only from a theoretical point of view but also for its applications. Nevertheless, it is usual to consider these spaces as classical topological or uniform spaces, and there are not many results about constructing fuzzy topological structures starting from a fuzzy metric. Höhle was perhaps the first to show how to construct a probabilistic uniformity and a Lowen uniformity from a probabilistic pseudometric [Hohle78, Hohle82a]. His method can be directly translated to the context of fuzzy metrics and allows one to characterize the categories of probabilistic uniform spaces or Lowen uniform spaces by means of certain families of fuzzy pseudometrics [RL]. On the other hand, other fuzzy uniformities can be constructed on a fuzzy metric space: a Hutton [0,1]-quasi-uniformity [GGPV06]; a fuzzifying uniformity [YueShi10], etc. The paper [GGRLRo] studies several methods of endowing a fuzzy pseudometric space with a probabilistic uniformity and a Hutton [0,1]-quasi-uniformity. In 2010, J. Gutiérrez García, S. Romaguera and M. Sanchis [GGRoSanchis10] proved that the category of uniform spaces is isomorphic to a category formed by sets endowed with a fuzzy uniform structure, i.e., a family of fuzzy pseudometrics satisfying certain conditions. We show here that, by means of this isomorphism, we can obtain several methods to endow a uniform space with a probabilistic uniformity. Furthermore, these constructions allow us to obtain a factorization of some functors introduced in [GGRoSanchis10]. (Author)

  19. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    Science.gov (United States)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated, and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models and can provide probabilistic forecasts in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
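
    A minimal PCE sketch using probabilists' Hermite polynomials and collocation-based least squares, with a cheap analytical stand-in for the hydrological simulator (the model, collocation points, and expansion order are assumptions):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# 3rd-order polynomial chaos expansion for a toy "model" y = f(xi), xi ~ N(0,1),
# fitted by probabilistic collocation (least squares at collocation points).
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2     # stand-in for an expensive simulator

order = 3
xi_colloc = np.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5])   # collocation points
A = hermevander(xi_colloc, order)                          # He_0..He_3 basis matrix
coeffs, *_ = np.linalg.lstsq(A, model(xi_colloc), rcond=None)

# The PCE is now a cheap surrogate: evaluate it on a large Monte Carlo sample.
xi = np.random.default_rng(3).standard_normal(100_000)
y_pce = hermevander(xi, order) @ coeffs
print(f"PCE mean = {y_pce.mean():.4f} (coefficient c0 = {coeffs[0]:.4f})")
print(f"exact mean = {np.exp(0.3**2 / 2) + 0.1:.4f}")
```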

  20. Probabilistic risk assessment of gold nanoparticles after intravenous administration by integrating in vitro and in vivo toxicity with physiologically based pharmacokinetic modeling.

    Science.gov (United States)

    Cheng, Yi-Hsien; Riviere, Jim E; Monteiro-Riviere, Nancy A; Lin, Zhoumeng

    2018-04-14

    This study aimed to conduct an integrated and probabilistic risk assessment of gold nanoparticles (AuNPs) based on recently published in vitro and in vivo toxicity studies coupled to a physiologically based pharmacokinetic (PBPK) model. Dose-response relationships were characterized based on cell viability assays in various human cell types. A previously well-validated human PBPK model for AuNPs was applied to quantify internal concentrations in liver, kidney, skin, and venous plasma. By applying a Bayesian-based probabilistic risk assessment approach incorporating Monte Carlo simulation, probable human cell death fractions were characterized. Additionally, we implemented in vitro to in vivo and animal-to-human extrapolation approaches to independently estimate external exposure levels of AuNPs that cause minimal toxicity. Our results suggest that under the highest dosing level employed in existing animal studies (a worst-case scenario), AuNPs coated with branched polyethylenimine (BPEI) would likely induce ∼90-100% cellular death, implying high cytotoxicity. These results support risk prediction and point-of-departure estimation of AuNP exposure for humans and illustrate an approach that could be applied to other NPs when sufficient data are available.
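
    The Monte Carlo step can be sketched as sampling a dose-response (Hill) model against PBPK-style internal concentrations; every distribution and parameter below is illustrative, not an estimate from the study:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000

# Probabilistic dose-response sketch: cell death fraction follows a Hill
# function of tissue concentration, with uncertain EC50 and Hill slope, and a
# lognormal internal concentration standing in for a PBPK model output.
conc = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)   # ug/g in liver
ec50 = rng.lognormal(mean=np.log(40.0), sigma=0.3, size=n)
hill = rng.normal(1.5, 0.2, size=n)

death_fraction = conc**hill / (conc**hill + ec50**hill)
p5, p50, p95 = np.percentile(death_fraction, [5, 50, 95])
print(f"cell death fraction: median {p50:.2f} (90% interval {p5:.2f}-{p95:.2f})")
```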