WorldWideScience

Sample records for sampling-rate-dependent probabilistic boolean

  1. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs), which are well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods on optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained. The finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.
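
    The polynomial- and integer-programming reductions discussed in the review are not reproduced here; the sketch below (Python, with an invented two-gene PBN and a hypothetical control input fed to the predictor functions) only illustrates what finite-time optimal control of a PBN means operationally: propagate the state distribution exactly for each candidate control sequence and keep the sequence with the lowest expected terminal cost.

      import itertools
      import numpy as np

      # Toy 2-gene PBN: each gene has two candidate predictor functions,
      # selected independently at every step with the given probabilities.
      # The control u in {0, 1} is modelled here (an assumption for illustration)
      # as an extra input to the predictors.
      predictors = {
          0: [(lambda x, u: x[1] ^ u, 0.7), (lambda x, u: x[0] & x[1], 0.3)],
          1: [(lambda x, u: x[0] | u, 0.6), (lambda x, u: 1 - x[1], 0.4)],
      }
      n = 2
      states = list(itertools.product((0, 1), repeat=n))

      def transition_matrix(u):
          """Exact state-transition matrix of the PBN for a fixed control u."""
          P = np.zeros((2 ** n, 2 ** n))
          for i, x in enumerate(states):
              # enumerate every joint choice of predictors and its probability
              for choice in itertools.product(*(range(len(predictors[g])) for g in range(n))):
                  prob = 1.0
                  y = []
                  for g, c in enumerate(choice):
                      f, p = predictors[g][c]
                      prob *= p
                      y.append(f(x, u))
                  P[i, states.index(tuple(y))] += prob
          return P

      def best_control_sequence(x0, horizon, terminal_cost):
          """Brute-force finite-horizon control: minimise expected terminal cost."""
          d0 = np.zeros(2 ** n); d0[states.index(x0)] = 1.0
          best = None
          for seq in itertools.product((0, 1), repeat=horizon):
              d = d0.copy()
              for u in seq:
                  d = d @ transition_matrix(u)
              cost = sum(d[i] * terminal_cost(s) for i, s in enumerate(states))
              if best is None or cost < best[1]:
                  best = (seq, cost)
          return best

      # penalise states in which gene 0 is ON (a hypothetical "undesirable" phenotype)
      seq, cost = best_control_sequence((1, 1), horizon=3, terminal_cost=lambda s: s[0])
      print("best control sequence:", seq, "expected terminal cost:", round(cost, 3))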

  2. Computing preimages of Boolean networks.

    Science.gov (United States)

    Klotz, Johannes; Bossert, Martin; Schober, Steffen

    2013-01-01

    In this paper we present an algorithm based on the sum-product algorithm that finds elements in the preimage of a feed-forward Boolean network given an output of the network. Our probabilistic method runs in linear time with respect to the number of nodes in the network. We evaluate our algorithm for randomly constructed Boolean networks and a regulatory network of Escherichia coli and find that it gives a valid solution in most cases.
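
    The sum-product message passing itself is not sketched here; the following toy snippet (Python, with an invented three-input feed-forward network) merely fixes the terminology: the preimage of an observed output is the set of input vectors the network maps to it, enumerated exhaustively below, which is exactly the exponential cost the paper's linear-time probabilistic method avoids.

      import itertools

      # Toy feed-forward Boolean network: three inputs feed two output nodes.
      # The gate functions below are invented for illustration.
      def network(x):
          a, b, c = x
          return (a & b, b ^ c)          # outputs of the two terminal nodes

      def preimage(y, n_inputs=3):
          """All input vectors mapped by the network to the observed output y.
          Exhaustive search (2^n); the paper's sum-product approach avoids this."""
          return [x for x in itertools.product((0, 1), repeat=n_inputs) if network(x) == y]

      print(preimage((0, 1)))   # -> [(0, 0, 1), (0, 1, 0), (1, 0, 1)]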

  3. Quantum algorithms for testing Boolean functions

    Directory of Open Access Journals (Sweden)

    Erika Andersson

    2010-06-01

    Full Text Available We discuss quantum algorithms, based on the Bernstein-Vazirani algorithm, for finding which variables a Boolean function depends on. There are 2^n possible linear Boolean functions of n variables; given a linear Boolean function, the Bernstein-Vazirani quantum algorithm can deterministically identify which one of these Boolean functions we are given using just one single function query. The same quantum algorithm can also be used to learn which input variables other types of Boolean functions depend on, with a success probability that depends on the form of the Boolean function that is tested, but does not depend on the total number of input variables. We also outline a procedure to further amplify the success probability, based on another quantum algorithm, the Grover search.
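
    As a rough illustration, the state-vector simulation below (Python/NumPy; the function names and the 4-bit example are ours) reproduces the phase-oracle form of the Bernstein-Vazirani algorithm: for a linear Boolean function f(x) = a·x, a single oracle query followed by Hadamards concentrates all amplitude on the hidden string a.

      import numpy as np

      def walsh_hadamard(v):
          """Fast Walsh-Hadamard transform (unnormalised), returned as a copy."""
          v = v.copy()
          h = 1
          while h < len(v):
              for i in range(0, len(v), 2 * h):
                  for j in range(i, i + h):
                      v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
              h *= 2
          return v

      def bernstein_vazirani(f, n):
          """State-vector simulation of BV with a phase oracle: one query to f."""
          dim = 2 ** n
          psi = np.full(dim, 1.0 / np.sqrt(dim))             # H^n applied to |0...0>
          phases = np.array([(-1.0) ** f(x) for x in range(dim)])
          psi = walsh_hadamard(psi * phases) / np.sqrt(dim)   # final layer of Hadamards
          return int(np.argmax(np.abs(psi)))                  # measurement outcome = a

      a = 0b1011                                              # hidden string, n = 4
      f = lambda x: bin(x & a).count("1") % 2                 # linear function a.x mod 2
      print(bin(bernstein_vazirani(f, 4)))                    # -> 0b1011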

  4. Reliable dynamics in Boolean and continuous networks

    International Nuclear Information System (INIS)

    Ackermann, Eva; Drossel, Barbara; Peixoto, Tiago P

    2012-01-01

    We investigate the dynamical behavior of a model of robust gene regulatory networks which possess ‘entirely reliable’ trajectories. In a Boolean representation, these trajectories are characterized by being insensitive to the order in which the nodes are updated, i.e. they always go through the same sequence of states. The Boolean model for gene activity is compared with a continuous description in terms of differential equations for the concentrations of mRNA and proteins. We found that entirely reliable Boolean trajectories can be reproduced perfectly in the continuous model when realistic Hill coefficients are used. We investigate to what extent this high correspondence between Boolean and continuous trajectories depends on the extent of reliability of the Boolean trajectories, and we identify simple criteria that enable the faithful reproduction of the Boolean dynamics in the continuous description. (paper)

  5. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by the introduction of gradation in this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame. Consequently, the validity of the model’s dynamics is not secured. The aim of this paper is to present the Boolean consistent generalization of Boolean networks, interpolative Boolean networks (IBN). The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and it offers greater descriptive power as compared with traditional models. For illustrative purposes, IBN is compared to the models based on existing real-valued approaches. Due to the complexity of most of the systems to be analyzed and the characteristics of interpolative Boolean algebra, software support is developed to provide graphical and numerical tools for complex system modeling and analysis.

  6. Boolean algebra essentials

    CERN Document Server

    Solomon, Alan D

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Boolean Algebra includes set theory, sentential calculus, fundamental ideas of Boolean algebras, lattices, rings and Boolean algebras, the structure of a Boolean algebra, and Boolean

  7. Boolean reasoning the logic of boolean equations

    CERN Document Server

    Brown, Frank Markham

    2012-01-01

    A systematic treatment of Boolean reasoning, this concise, newly revised edition combines the works of early logicians with recent investigations, including previously unpublished research results. Brown begins with an overview of elementary mathematical concepts and outlines the theory of Boolean algebras. Two concluding chapters deal with applications. 1990 edition.

  8. Probabilistic Relational Structures and Their Applications

    Science.gov (United States)

    Domotor, Zoltan

    The principal objects of the investigation reported were, first, to study qualitative probability relations on Boolean algebras, and secondly, to describe applications in the theories of probability logic, information, automata, and probabilistic measurement. The main contribution of this work is stated in 10 definitions and 20 theorems. The basic…

  9. Comparison of Control Approaches in Genetic Regulatory Networks by Using Stochastic Master Equation Models, Probabilistic Boolean Network Models and Differential Equation Models and Estimated Error Analyzes

    Science.gov (United States)

    Caglar, Mehmet Umut; Pal, Ranadip

    2011-03-01

    Central dogma of molecular biology states that ``information cannot be transferred back from protein to either protein or nucleic acid''. However, this assumption is not exactly correct in most of the cases. There are a lot of feedback loops and interactions between different levels of systems. These types of interactions are hard to analyze due to the lack of cell-level data and the probabilistic, nonlinear nature of the interactions. Several models are widely used to analyze and simulate these types of nonlinear interactions. Stochastic Master Equation (SME) models capture the probabilistic nature of the interactions in a detailed manner, at a high calculation cost. On the other hand, Probabilistic Boolean Network (PBN) models give a coarse-scale picture of the stochastic processes, at a lower calculation cost. Differential Equation (DE) models give the time evolution of mean values of processes in a highly cost-effective way. Understanding the relations between the predictions of these models is important for assessing the reliability of simulations of genetic regulatory networks. In this work the success of the mapping between SME, PBN and DE models is analyzed, and the accuracy and effectiveness of the control policies generated by using PBN and DE models are compared.

  10. A short Boolean derivation of mean failure frequency for any (also non-coherent) system

    International Nuclear Information System (INIS)

    Schneeweiss, Winfrid G.

    2009-01-01

    For stationary repairable systems it is shown that the probabilistic weights for the individual components' mean failure frequencies (MFFs) that can be added to yield the system's MFF are found easily from the first step of the Boolean fault tree function's Shannon decomposition. This way one finds a general theory of a system's MFF and the case of coherence covered in standard textbooks is shown to be a subcase. Unfortunately, elegant rules for calculating system MFF from any polynomial form of the fault tree's Boolean function are only known for the coherent case, but repeated here, because they are not yet found in many textbooks. An example known from literature is treated extensively with great care.
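
    As a sketch of the textbook coherent-case rule alluded to above (notation assumed: φ is the fault tree's Boolean function, X the vector of statistically independent component state indicators at stationarity, ν_i the mean failure frequency of component i), the probabilistic weights fall out of the first step of the Shannon decomposition:

      % Shannon decomposition of the fault-tree function about component i
      \varphi(X) \;=\; X_i\,\varphi(1_i, X) \;\lor\; \overline{X}_i\,\varphi(0_i, X)
      % coherent case: the weight of component i is the probability that the
      % system is critical with respect to that component
      \nu_{\mathrm{sys}} \;\approx\; \sum_i \Pr\{\varphi(1_i, X) = 1,\ \varphi(0_i, X) = 0\}\;\nu_i

    For non-coherent fault trees, analogous terms in which a component repair causes system failure enter as well, which is the situation the general derivation in the paper is designed to cover.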

  11. Sound Probabilistic #SAT with Projection

    Directory of Open Access Journals (Sweden)

    Vladimir Klebanov

    2016-10-01

    Full Text Available We present an improved method for a sound probabilistic estimation of the model count of a Boolean formula under projection. The problem solved can be used to encode a variety of quantitative program analyses, such as those concerning security or resource consumption. We implement the technique and discuss its application to quantifying information flow in programs.
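
    The paper's sound estimation procedure is not reproduced here; the sketch below (Python, with a toy CNF formula and a brute-force extension check, all invented for illustration) only unpacks the quantity being estimated: the number of assignments to the projection variables that extend to a model, approximated here by naive Monte Carlo.

      import itertools
      import random

      # Toy CNF over variables 1..4:  (x1 or x2) and (not x1 or x3) and (x2 or not x4)
      clauses = [(1, 2), (-1, 3), (2, -4)]
      proj_vars = [1, 2]            # projection scope: count distinct (x1, x2) patterns
      other_vars = [3, 4]

      def satisfies(assignment, clauses):
          return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

      def extendable(partial):
          """Can the partial assignment over proj_vars be completed to a model?"""
          for rest in itertools.product((False, True), repeat=len(other_vars)):
              a = {**partial, **dict(zip(other_vars, rest))}
              if satisfies(a, clauses):
                  return True
          return False

      def estimate_projected_count(samples=2000, seed=0):
          """Naive Monte Carlo estimate of the projected model count."""
          rng = random.Random(seed)
          hits = 0
          for _ in range(samples):
              partial = {v: rng.random() < 0.5 for v in proj_vars}
              hits += extendable(partial)
          return 2 ** len(proj_vars) * hits / samples

      print(estimate_projected_count())   # the exact projected count here is 3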

  12. Monotone Boolean functions

    International Nuclear Information System (INIS)

    Korshunov, A D

    2003-01-01

    Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years

  13. Boolean integral calculus

    Science.gov (United States)

    Tucker, Jerry H.; Tapia, Moiez A.; Bennett, A. Wayne

    1988-01-01

    The concept of Boolean integration is developed, and different Boolean integral operators are introduced. Given the changes in a desired function in terms of the changes in its arguments, the ways of 'integrating' (i.e. realizing) such a function, if it exists, are presented. The necessary and sufficient conditions for integrating, in different senses, the expression specifying the changes are obtained. Boolean calculus has applications in the design of logic circuits and in fault analysis.
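
    A worked example of the object being 'integrated' may help. The snippet below (Python, with an invented example function) computes the Boolean difference df/dx_i = f(x with x_i=0) XOR f(x with x_i=1); Boolean integration asks the converse question of whether, given such specified changes, a realizing function f exists and how to construct it.

      import itertools

      def boolean_difference(f, i, n):
          """Boolean difference df/dx_i as a truth table over the remaining variables."""
          table = {}
          for x in itertools.product((0, 1), repeat=n):
              if x[i] == 0:
                  x1 = x[:i] + (1,) + x[i + 1:]
                  table[x[:i] + x[i + 1:]] = f(x) ^ f(x1)
          return table

      # invented example: f(a, b, c) = a XOR (b AND c)
      f = lambda x: x[0] ^ (x[1] & x[2])
      print(boolean_difference(f, 0, 3))   # identically 1: f always changes with a
      print(boolean_difference(f, 1, 3))   # equals c: f changes with b only when c = 1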

  14. Patterns of probabilistic evaluation of the maintenance

    International Nuclear Information System (INIS)

    Torres V, A.; Rivero O, J.J.

    2004-01-01

    The Boolean equation represented by the minimal cut sets at the level of a system, or of a whole probabilistic safety analysis (PSA), has been used to evaluate equipment out-of-service configurations during operation. This has been carried out through PSA applications for the optimization of operational regimes. Obtaining such Boolean equations demands a process of high mathematical complexity, so exhaustive studies are required even to apply their results. The important advantages for maintenance that follow from these applications call for methodologies that bring these optimization methods closer to maintenance management systems and facilitate their use by personnel not specialized in probabilistic topics. The article presents a methodology for the preparation, solution and utilization of the results of the fault tree starting from technological diagrams. This algorithm has been automated through the MOSEG Win Ver 1.0 code. (Author)

  15. Free Boolean Topological Groups

    Directory of Open Access Journals (Sweden)

    Ol’ga Sipacheva

    2015-11-01

    Full Text Available Known and new results on free Boolean topological groups are collected. An account of the properties that these groups share with free or free Abelian topological groups and properties specific to free Boolean groups is given. Special emphasis is placed on the application of set-theoretic methods to the study of Boolean topological groups.

  16. Information encryption systems based on Boolean functions

    Directory of Open Access Journals (Sweden)

    Aureliu Zgureanu

    2011-02-01

    Full Text Available An information encryption system based on Boolean functions is proposed. Information processing is done using multidimensional matrices, performing logical operations with these matrices. The high level of security of the system rests on the complexity of solving the problem of building systems of Boolean functions that depend on many variables (tens and hundreds). Such systems represent the private key. It varies both during the encryption and decryption of information, and during the transition from one message to another.

  17. Properties of Boolean orthoposets

    Science.gov (United States)

    Tkadlec, Josef

    1993-10-01

    A Boolean orthoposet is the orthoposet P fulfilling the following condition: If a, b ∈ P and a ∧ b = 0, then a ⊥ b. This condition seems to be a sound generalization of distributivity in orthoposets. Also, the class of (orthomodular) Boolean orthoposets may play an interesting role in quantum logic theory. This class is wide enough and, on the other hand, enjoys some properties of Boolean algebras. In this paper we summarize results on Boolean orthoposets involving distributivity, set representation, properties of the state space, existence of Jauch-Piron states, and results concerning orthocompleteness and completion.

  18. Computational complexity of Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Korshunov, Aleksei D [Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2012-02-28

    Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.

  19. SETS, Boolean Manipulation for Network Analysis and Fault Tree Analysis

    International Nuclear Information System (INIS)

    Worrell, R.B.

    1985-01-01

    Description of problem or function - SETS is used for symbolic manipulation of set (or Boolean) equations, particularly the reduction of set equations by the application of set identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze non-coherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system. 4. Method of solution - The SETS program is used to read, interpret, and execute the statements of a SETS user program which is an algorithm that specifies the particular manipulations to be performed and the order in which they are to occur. 5. Restrictions on the complexity of the problem - Any properly formed set equation involving the set operations of union, intersection, and complement is acceptable for processing by the SETS program. Restrictions on the size of a set equation that can be processed are not absolute but rather are related to the number of terms in the disjunctive normal form of the equation, the number of literals in the equation, etc. Nevertheless, set equations involving thousands and even hundreds of thousands of terms can be processed successfully

  20. Geometric Operators on Boolean Functions

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Falster, Peter

    In truth-functional propositional logic, any propositional formula represents a Boolean function (according to some valuation of the formula). We describe operators based on Descartes' concept of constructing coordinate systems, for translation of a propositional formula to the image of a Boolean...... function. With this image of a Boolean function corresponding to a propositional formula, we prove that the orthogonal projection operator leads to a theorem describing all rules of inference in propositional reasoning. In other words, we can capture all kinds of inference in propositional logic by means...... of a few geometric operators working on the images of Boolean functions. The operators we describe arise from the niche area of array-based logic and have previously been tightly bound to an array-based representation of Boolean functions. We redefine the operators in an abstract form to make them...

  1. Cryptographic Boolean functions and applications

    CERN Document Server

    Cusick, Thomas W

    2009-01-01

    Boolean functions are the building blocks of symmetric cryptographic systems. Symmetrical cryptographic algorithms are fundamental tools in the design of all types of digital security systems (i.e. communications, financial and e-commerce).Cryptographic Boolean Functions and Applications is a concise reference that shows how Boolean functions are used in cryptography. Currently, practitioners who need to apply Boolean functions in the design of cryptographic algorithms and protocols need to patch together needed information from a variety of resources (books, journal articles and other sources). This book compiles the key essential information in one easy to use, step-by-step reference. Beginning with the basics of the necessary theory the book goes on to examine more technical topics, some of which are at the frontier of current research.-Serves as a complete resource for the successful design or implementation of cryptographic algorithms or protocols using Boolean functions -Provides engineers and scient...

  2. A study on the weather sampling method for probabilistic consequence analysis

    International Nuclear Information System (INIS)

    Oh, Hae Cheol

    1996-02-01

    The main task of a probabilistic accident consequence analysis model is to predict the radiological situation and to provide a reliable quantitative data base for making decisions on countermeasures. The magnitude of an accident consequence depends on the characteristics of the accident and the coincident weather. In probabilistic accident consequence analysis, it is necessary to repeat the atmospheric dispersion calculation with several hundreds of weather sequences to predict the full distribution of consequences which may occur following a postulated accidental release. It is desirable to select a representative sample of weather sequences from a meteorological record which is typical of the area over which the released radionuclides will disperse and which spans a sufficiently long period. The selection process is done by means of sampling techniques from a full year of hourly weather data characteristic of the plant site. In this study, the proposed weighted importance sampling method selects weather sequences in proportion to each bin size to closely approximate the true frequency distribution of weather conditions at the site. The weighted importance sampling method results in substantially less sampling uncertainty than the previous technique. The proposed technique can result in improved confidence in risk estimates
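
    A minimal sketch of bin-proportional selection is given below (Python; the two-feature hourly records and the binning rule are invented placeholders, not the study's meteorological bins): start hours for the dispersion runs are drawn so that each weather bin is represented in proportion to its occupancy over the year.

      import random
      from collections import defaultdict

      def sample_weather_sequences(hourly_records, n_samples, bin_of, seed=0):
          """Pick start hours for dispersion runs with probability proportional to
          bin occupancy, so the sample mirrors the site's weather-frequency distribution.
          `bin_of` maps an hourly record to a bin label (the binning scheme here is
          an assumption, e.g. a wind-speed / rain combination)."""
          rng = random.Random(seed)
          bins = defaultdict(list)
          for hour, rec in enumerate(hourly_records):
              bins[bin_of(rec)].append(hour)

          chosen = []
          for _ in range(n_samples):
              # drawing a bin with probability proportional to its size, then an
              # hour within the bin, matches the site's frequency distribution
              label = rng.choices(list(bins), weights=[len(v) for v in bins.values()])[0]
              chosen.append(rng.choice(bins[label]))
          return sorted(chosen)

      # illustrative records: (wind speed [m/s], rain [mm/h]) for each hour of a year
      year = [(random.Random(h).uniform(0, 12), 0.0 if h % 7 else 1.2) for h in range(8760)]
      starts = sample_weather_sequences(year, 100, bin_of=lambda r: (int(r[0] // 3), r[1] > 0))
      print(len(starts), starts[:5])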

  3. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.

  4. Probabilistic inference: Task dependency and individual differences of probability weighting revealed by hierarchical Bayesian modelling

    Directory of Open Access Journals (Sweden)

    Moritz eBoos

    2016-05-01

    Full Text Available Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model’s success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.

  5. Mining TCGA data using Boolean implications.

    Directory of Open Access Journals (Sweden)

    Subarna Sinha

    Full Text Available Boolean implications (if-then rules) provide a conceptually simple, uniform and highly scalable way to find associations between pairs of random variables. In this paper, we propose to use Boolean implications to find relationships between variables of different data types (mutation, copy number alteration, DNA methylation and gene expression) from the glioblastoma (GBM) and ovarian serous cystadenoma (OV) data sets from The Cancer Genome Atlas (TCGA). We find hundreds of thousands of Boolean implications from these data sets. A direct comparison of the relationships found by Boolean implications and those found by commonly used methods for mining associations shows that existing methods would miss relationships found by Boolean implications. Furthermore, many relationships exposed by Boolean implications reflect important aspects of cancer biology. Examples of our findings include cis relationships between copy number alteration, DNA methylation and expression of genes, a new hierarchy of mutations and recurrent copy number alterations, loss-of-heterozygosity of well-known tumor suppressors, and the hypermethylation phenotype associated with IDH1 mutations in GBM. The Boolean implication results used in the paper can be accessed at http://crookneck.stanford.edu/microarray/TCGANetworks/.
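
    As a rough illustration of the idea (not the statistical test used by the authors), the snippet below (Python) declares an implication between two binary variables when the corresponding quadrant of their 2x2 contingency table is nearly empty; the data and the sparsity threshold are invented.

      import numpy as np

      def boolean_implication(x, y, max_frac=0.1):
          """Test the four possible implications between two binary vectors by asking
          whether the quadrant that would violate each rule is (nearly) empty.
          The sparsity threshold `max_frac` is an arbitrary illustrative choice."""
          x, y = np.asarray(x, bool), np.asarray(y, bool)
          n = len(x)
          violations = {
              "x=0 => y=0": np.sum(~x & y),
              "x=0 => y=1": np.sum(~x & ~y),
              "x=1 => y=0": np.sum(x & y),
              "x=1 => y=1": np.sum(x & ~y),
          }
          return [rule for rule, bad in violations.items() if bad <= max_frac * n]

      # illustrative data: methylation (x) almost always silences expression (y)
      rng = np.random.default_rng(1)
      meth = rng.random(500) < 0.4
      expr = np.where(meth, rng.random(500) < 0.05, rng.random(500) < 0.6)
      print(boolean_implication(meth, expr))   # expect ['x=1 => y=0']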

  6. Conversion of dependability deterministic requirements into probabilistic requirements

    International Nuclear Information System (INIS)

    Bourgade, E.; Le, P.

    1993-02-01

    This report concerns the on-going survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert into probabilistic terms dependability deterministic requirements expressed in the form ''a maximum permissible number of failures, of maximum duration d in a period t''. The customer shall select a confidence level for each previously defined undesirable event, by assigning a maximum probability of occurrence. Using the formulae we propose for two repair policies - constant rate or constant time - these probabilized requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, providing certain realistic assumptions are confirmed, and that for a constant time repair policy, the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to proceed to their previsional justification. (author), 8 refs., 3 annexes

  7. Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.

    Science.gov (United States)

    Mori, Fumito; Mochizuki, Atsushi

    2017-07-14

    Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics depend generally on network topology. Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, and is independent of network topology if only a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle.
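
    The theorem can be checked numerically on small networks. The sketch below (Python; in-degree, bias range and network size are arbitrary choices) draws random Boolean networks with non-uniform, non-identical truth-table distributions and counts fixed points by exhaustive enumeration; the sample mean hovers around one.

      import itertools
      import random

      def random_boolean_network(n, k, rng):
          """Each node reads k random distinct inputs and uses a random truth table
          drawn with its own biased coin -- the distributions here are illustrative."""
          nodes = []
          for _ in range(n):
              inputs = rng.sample(range(n), k)
              bias = rng.uniform(0.2, 0.8)
              table = [int(rng.random() < bias) for _ in range(2 ** k)]
              nodes.append((inputs, table))
          return nodes

      def count_fixed_points(net):
          n = len(net)
          count = 0
          for state in itertools.product((0, 1), repeat=n):
              nxt = tuple(table[int("".join(str(state[i]) for i in inputs), 2)]
                          for inputs, table in net)
              count += (nxt == state)
          return count

      rng = random.Random(42)
      trials = 2000
      avg = sum(count_fixed_points(random_boolean_network(6, 2, rng)) for _ in range(trials)) / trials
      print(avg)   # fluctuates around 1 for networks of this kind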

  8. To Boolean or Not To Boolean.

    Science.gov (United States)

    Hildreth, Charles R.

    1983-01-01

    This editorial addresses the issue of whether or not to provide free-text, keyword/boolean search capabilities in the information retrieval mechanisms of online public access catalogs and discusses online catalogs developed prior to 1980--keyword searching, phrase searching, and precoordination and postcoordination. (EJS)

  9. Beyond-CMOS Device Benchmarking for Boolean and Non-Boolean Logic Applications

    OpenAIRE

    Pan, Chenyun; Naeemi, Azad

    2017-01-01

    The latest results of benchmarking research are presented for a variety of beyond-CMOS charge- and spin-based devices. In addition to improving the device-level models, several new device proposals and a few majorly modified devices are investigated. Deep pipelining circuits are employed to boost the throughput of low-power devices. Furthermore, the benchmarking methodology is extended to interconnect-centric analyses and non-Boolean logic applications. In contrast to Boolean circuits, non-Bo...

  10. bayesPop: Probabilistic Population Projections

    Directory of Open Access Journals (Sweden)

    Hana Ševčíková

    2016-12-01

    Full Text Available We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects.

  11. bayesPop: Probabilistic Population Projections

    Science.gov (United States)

    Ševčíková, Hana; Raftery, Adrian E.

    2016-01-01

    We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects. PMID:28077933

  12. Probabilistic quantum cloning of a subset of linearly dependent states

    Science.gov (United States)

    Rui, Pinshu; Zhang, Wen; Liao, Yanlin; Zhang, Ziyun

    2018-02-01

    It is well known that a quantum state, secretly chosen from a certain set, can be probabilistically cloned with positive cloning efficiencies if and only if all the states in the set are linearly independent. In this paper, we focus on probabilistic quantum cloning of a subset of linearly dependent states. We show that a linearly-independent subset of linearly-dependent quantum states {|Ψ_1⟩, |Ψ_2⟩, …, |Ψ_n⟩} can be probabilistically cloned if and only if any state in the subset cannot be expressed as a linear superposition of the other states in the set {|Ψ_1⟩, |Ψ_2⟩, …, |Ψ_n⟩}. The optimal cloning efficiencies are also investigated.

  13. Probabilistic calculation for angular dependence collision

    International Nuclear Information System (INIS)

    Villarino, E.A.

    1990-01-01

    This collision probability method is broadly used in cylindrical geometry (in one or two dimensions). It constitutes a powerful tool for the heterogeneous Response Method where the coupling current is of the cosine type, that is, without dependence on the azimuthal angle and proportional to μ (the cosine of the polar angle θ). (Author) [es

  14. Algebraic partial Boolean algebras

    International Nuclear Information System (INIS)

    Smith, Derek

    2003-01-01

    Partial Boolean algebras, first studied by Kochen and Specker in the 1960s, provide the structure for Bell-Kochen-Specker theorems which deny the existence of non-contextual hidden variable theories. In this paper, we study partial Boolean algebras which are 'algebraic' in the sense that their elements have coordinates in an algebraic number field. Several of these algebras have been discussed recently in a debate on the validity of Bell-Kochen-Specker theorems in the context of finite precision measurements. The main result of this paper is that every algebraic finitely-generated partial Boolean algebra B(T) is finite when the underlying space H is three-dimensional, answering a question of Kochen and showing that Conway and Kochen's infinite algebraic partial Boolean algebra has minimum dimension. This result contrasts the existence of an infinite (non-algebraic) B(T) generated by eight elements in an abstract orthomodular lattice of height 3. We then initiate a study of higher-dimensional algebraic partial Boolean algebras. First, we describe a restriction on the determinants of the elements of B(T) that are generated by a given set T. We then show that when the generating set T consists of the rays spanning the minimal vectors in a real irreducible root lattice, B(T) is infinite just if that root lattice has an A_5 sublattice. Finally, we characterize the rays of B(T) when T consists of the rays spanning the minimal vectors of the root lattice E_8.

  15. Probabilistic pipe fracture evaluations for applications to leak-rate detection

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, S; Wilkowski, G; Ghadiali, N [Battelle Columbus Labs., OH (United States)]

    1993-12-31

    Stochastic pipe fracture evaluations are conducted for applications to leak-rate detection. A state-of-the-art review was first conducted to evaluate the adequacy of current deterministic models for thermo-hydraulic and elastic-plastic fracture analyses. Then a new probabilistic model was developed with the above deterministic models for structural reliability analysis of cracked piping systems and statistical characterization of crack morphology parameters, material properties of pipe, and crack location. The proposed models are then applied for computing conditional probability of failure for various nuclear piping systems in BWR and PWR plants. The PRAISE code was not used, and the probabilistic model is based on modern methods of stochastic mechanics, computationally far superior to Monte Carlo and Stratified Sampling methods used in PRAISE. 10 refs., 9 figs., 1 tab.

  16. Probabilistic pipe fracture evaluations for applications to leak-rate detection

    International Nuclear Information System (INIS)

    Rahman, S.; Wilkowski, G.; Ghadiali, N.

    1992-01-01

    Stochastic pipe fracture evaluations are conducted for applications to leak-rate detection. A state-of-the-art review was first conducted to evaluate the adequacy of current deterministic models for thermo-hydraulic and elastic-plastic fracture analyses. Then a new probabilistic model was developed with the above deterministic models for structural reliability analysis of cracked piping systems and statistical characterization of crack morphology parameters, material properties of pipe, and crack location. The proposed models are then applied for computing conditional probability of failure for various nuclear piping systems in BWR and PWR plants. The PRAISE code was not used, and the probabilistic model is based on modern methods of stochastic mechanics, computationally far superior to Monte Carlo and Stratified Sampling methods used in PRAISE. 10 refs., 9 figs., 1 tab

  17. Exact sampling from conditional Boolean models with applications to maximum likelihood inference

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Zwet, van E.W.

    2001-01-01

    We are interested in estimating the intensity parameter of a Boolean model of discs (the bombing model) from a single realization. To do so, we derive the conditional distribution of the points (germs) of the underlying Poisson process. We demonstrate how to apply coupling from the past to generate

  18. GLOBAL CONVERGENCE FOR THE XOR BOOLEAN NETWORKS

    OpenAIRE

    Ho, Juei-Ling

    2009-01-01

    Shih and Ho have proved a global convergence theorem for Boolean networks: if a map from $\{0,1\}^n$ to itself defining a Boolean network satisfies the conditions: (1) each column of the discrete Jacobian matrix of each element of $\{0,1\}^n$ is either a unit vector or a zero vector; (2) all the Boolean eigenvalues of the discrete Jacobian matrix of this map evaluated at each element of $\{0,1\}^n$ are zero, then it has a unique fixed point and this Boolean network is globally convergent to the fi...

  19. Patterns of probabilistic evaluation of the maintenance; Patrones de Evaluacion Probabilista del Mantenimiento

    Energy Technology Data Exchange (ETDEWEB)

    Torres V, A.; Rivero O, J.J. [Dpto. Ingenieria Nuclear, Instituto Superior de Tecnologias y Ciencias Aplicadas, Ave. Salvador Allende y Luaces, Quinta de los Molinos, Plaza, Ciudad Habana (Cuba)]. e-mail: atorres@fctn.isctn.edu.cu

    2004-07-01

    The Boolean equation represented by the minimal cut sets at the level of a system, or of a whole probabilistic safety analysis (PSA), has been used to evaluate equipment out-of-service configurations during operation. This has been carried out through PSA applications for the optimization of operational regimes. Obtaining such Boolean equations demands a process of high mathematical complexity, so exhaustive studies are required even to apply their results. The important advantages for maintenance that follow from these applications call for methodologies that bring these optimization methods closer to maintenance management systems and facilitate their use by personnel not specialized in probabilistic topics. The article presents a methodology for the preparation, solution and utilization of the results of the fault tree starting from technological diagrams. This algorithm has been automated through the MOSEG Win Ver 1.0 code. (Author)

  20. Boolean algebra

    CERN Document Server

    Goodstein, R L

    2007-01-01

    This elementary treatment by a distinguished mathematician employs Boolean algebra as a simple medium for introducing important concepts of modern algebra. Numerous examples appear throughout the text, plus full solutions.

  1. Age-Specific Mortality and Fertility Rates for Probabilistic Population Projections

    OpenAIRE

    Ševčíková, Hana; Li, Nan; Kantorová, Vladimíra; Gerland, Patrick; Raftery, Adrian E.

    2015-01-01

    The United Nations released official probabilistic population projections (PPP) for all countries for the first time in July 2014. These were obtained by projecting the period total fertility rate (TFR) and life expectancy at birth ($e_0$) using Bayesian hierarchical models, yielding a large set of future trajectories of TFR and $e_0$ for all countries and future time periods to 2100, sampled from their joint predictive distribution. Each trajectory was then converted to age-specific mortalit...

  2. Types of non-probabilistic sampling used in marketing research. „Snowball” sampling

    OpenAIRE

    Manuela Rozalia Gabor

    2007-01-01

    A significant way of investigating a firm’s market is statistical sampling. The sampling typology comprises probabilistic and non-probabilistic models of gathering information, and this paper provides thorough information on network sampling, known as “snowball” sampling. This type of sampling enables the survey of occurrence forms concerning the decision power within an organisation and of the interpersonal relation network governing a certain collectivity, a certain consumer panel. The snowball s...

  3. Rational Verification in Iterated Electric Boolean Games

    Directory of Open Access Journals (Sweden)

    Youssouf Oualhadj

    2016-07-01

    Full Text Available Electric boolean games are compact representations of games where the players have qualitative objectives described by LTL formulae and have limited resources. We study the complexity of several decision problems related to the analysis of rationality in electric boolean games with LTL objectives. In particular, we report that the problem of deciding whether a profile is a Nash equilibrium in an iterated electric boolean game is no harder than in iterated boolean games without resource bounds. We show that it is a PSPACE-complete problem. As a corollary, we obtain that both rational elimination and rational construction of Nash equilibria by a supervising authority are PSPACE-complete problems.

  4. Optimal stabilization of Boolean networks through collective influence

    Science.gov (United States)

    Wang, Jiannan; Pei, Sen; Wei, Wei; Feng, Xiangnan; Zheng, Zhiming

    2018-03-01

    Boolean networks have attracted much attention due to their wide applications in describing dynamics of biological systems. During past decades, much effort has been invested in unveiling how network structure and update rules affect the stability of Boolean networks. In this paper, we aim to identify and control a minimal set of influential nodes that is capable of stabilizing an unstable Boolean network. For locally treelike Boolean networks with biased truth tables, we propose a greedy algorithm to identify influential nodes in Boolean networks by minimizing the largest eigenvalue of a modified nonbacktracking matrix. We test the performance of the proposed collective influence algorithm on four different networks. Results show that the collective influence algorithm can stabilize each network with a smaller set of nodes compared with other heuristic algorithms. Our work provides a new insight into the mechanism that determines the stability of Boolean networks, which may find applications in identifying virulence genes that lead to serious diseases.

  5. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    Science.gov (United States)

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    Full Text Available Abstract Background Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity to compute the state transition matrix is O(nN2^(2n)) or O(nN2^n) for a sparse matrix. Results This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for obtaining an evaluation accuracy approximately increases in a polynomial order with the number of genes, n, and the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, but not directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a
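
    The hardware stochastic-logic construction is not reproduced here; the following software stand-in (Python, with an invented two-gene PBN) conveys the idea behind an SBN: instead of enumerating all joint predictor choices, each row of the transition matrix is estimated from L stochastic trials, so accuracy is governed by the sequence length L rather than by the total number of constituent Boolean networks.

      import random

      # Toy 2-gene PBN: per-gene candidate predictors with selection probabilities.
      predictors = {
          0: [(lambda x: x[1], 0.8), (lambda x: x[0] and x[1], 0.2)],
          1: [(lambda x: not x[0], 0.5), (lambda x: x[0] or x[1], 0.5)],
      }

      def step(state, rng, perturbation=0.0):
          """One PBN update; with probability `perturbation` a gene is flipped instead."""
          nxt = []
          for g, options in predictors.items():
              if rng.random() < perturbation:
                  nxt.append(not state[g])
              else:
                  r, acc = rng.random(), 0.0
                  for f, p in options:
                      acc += p
                      if r < acc:
                          nxt.append(bool(f(state)))
                          break
          return tuple(nxt)

      def estimate_transition_row(state, L, seed=0):
          """Estimate one row of the transition matrix from L stochastic trials --
          a software stand-in for the stochastic-sequence evaluation of an SBN."""
          rng = random.Random(seed)
          counts = {}
          for _ in range(L):
              nxt = step(state, rng)
              counts[nxt] = counts.get(nxt, 0) + 1
          return {s: c / L for s, c in counts.items()}

      print(estimate_transition_row((True, False), L=10000))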

  7. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  8. Boolean orthoposets and two-valued states on them

    Science.gov (United States)

    Tkadlec, Josef

    1992-06-01

    A Boolean orthoposet (see e.g. [2]) is the orthoposet P fulfilling the following condition: If a, b ∈ P and a ∧ b = 0 then a ⊥ b. This condition seems to be a sound generalization of distributivity in orthoposets (see e.g. [8]). Also, the class of (orthomodular) Boolean orthoposets may play an interesting role in quantum logic theory. This class is wide enough (see [4,3]) and on the other hand, enjoys some properties of Boolean algebras [4,8,5]. In quantum logic theory an important role is played by so-called Jauch-Piron states [1,6,7]. In this paper we clarify the connection between Boolean orthoposets and orthoposets with "enough" two-valued Jauch-Piron states. Further, we obtain a characterization of Boolean orthoposets in terms of two-valued states.

  9. Boolean comparative analysis of qualitative data : a methodological note

    NARCIS (Netherlands)

    Romme, A.G.L.

    1995-01-01

    This paper explores the use of Boolean logic in the analysis of qualitative data, especially on the basis of so-called process theories. Process theories treat independent variables as necessary conditions which are binary rather than variable in nature, while the dependent variable is a final

  10. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.

    Science.gov (United States)

    Pecevski, Dejan; Maass, Wolfgang

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
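
    For orientation, the snippet below (Python) implements the generic pair-based STDP rule with exponential windows that such models build on; the parameters are conventional textbook values, and the combination with intrinsic plasticity that the paper relies on is not modelled.

      import numpy as np

      def stdp_weight_change(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                             tau_plus=20.0, tau_minus=20.0):
          """Generic pair-based STDP: potentiate when the presynaptic spike precedes
          the postsynaptic one, depress otherwise (times in ms). This is a standard
          textbook rule, not the specific learning rule of the paper."""
          dw = 0.0
          for t_pre in pre_spikes:
              for t_post in post_spikes:
                  dt = t_post - t_pre
                  if dt > 0:
                      dw += a_plus * np.exp(-dt / tau_plus)     # pre before post: LTP
                  elif dt < 0:
                      dw -= a_minus * np.exp(dt / tau_minus)    # post before pre: LTD
          return dw

      # pre leads post by ~5 ms on average -> net potentiation expected
      pre = np.array([10.0, 50.0, 90.0])
      post = pre + 5.0
      print(stdp_weight_change(pre, post))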

  11. Boolean integral calculus for digital systems

    Science.gov (United States)

    Tucker, J. H.; Tapia, M. A.; Bennett, A. W.

    1985-01-01

    The concept of Boolean integration is introduced and developed. When the changes in a desired function are specified in terms of changes in its arguments, then ways of 'integrating' (i.e., realizing) the function, if it exists, are presented. Boolean integral calculus has applications in design of logic circuits.

  12. Solving probabilistic inverse problems rapidly with prior samples

    NARCIS (Netherlands)

    Käufl, Paul; Valentine, Andrew P.; de Wit, Ralph W.; Trampert, Jeannot

    2016-01-01

    Owing to the increasing availability of computational resources, in recent years the probabilistic solution of non-linear, geophysical inverse problems by means of sampling methods has become increasingly feasible. Nevertheless, we still face situations in which a Monte Carlo approach is not

  13. Synchronization in an array of coupled Boolean networks

    International Nuclear Information System (INIS)

    Li, Rui; Chu, Tianguang

    2012-01-01

    This Letter presents an analytical study of synchronization in an array of coupled deterministic Boolean networks. A necessary and sufficient criterion for synchronization is established based on algebraic representations of logical dynamics in terms of the semi-tensor product of matrices. Some basic properties of a synchronized array of Boolean networks are then derived for the existence of transient states and the upper bound of the number of fixed points. Particularly, an interesting consequence indicates that a “large” mismatch between two coupled Boolean networks in the array may result in loss of synchrony in the entire system. Examples, including the Boolean model of coupled oscillations in the cell cycle, are given to illustrate the present results. -- Highlights: ► We analytically study synchronization in an array of coupled Boolean networks. ► The study is based on the algebraic representations of logical dynamics. ► A necessary and sufficient algebraic criterion for synchronization is established. ► It reveals some basic properties of a synchronized array of Boolean networks. ► A large mismatch between two coupled networks may result in the loss of synchrony.

  14. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model developed and its use in probabilistic risk assessments (PRAs), depending on the available data, is discussed. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows remarkable accommodation of the dependent failure data. The model, which provides a unique method for quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect in PRAs

  15. Mechanical system reliability analysis using a combination of graph theory and Boolean function

    International Nuclear Information System (INIS)

    Tang, J.

    2001-01-01

    A new method based on graph theory and Boolean function for assessing reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. By using the graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. By using the Boolean function to examine the failure interactions of two particular elements of the system, followed by demonstrations of how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and Boolean function provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis

  16. Consideration of aging in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Titina, B.; Cepin, M.

    2007-01-01

    Probabilistic safety assessment is a standardised tool for assessment of safety of nuclear power plants. It is a complement to the safety analyses. Standard probabilistic models of safety equipment assume a constant component failure rate. Ageing of systems, structures and components can theoretically be included in new age-dependent probabilistic safety assessment, which generally causes the failure rate to be a function of age. New age-dependent probabilistic safety assessment models, which offer explicit calculation of the ageing effects, are developed. Several groups of components are considered, each requiring its own model: e.g. operating components and stand-by components. The developed component-level models are inserted into the models of the probabilistic safety assessment so that the ageing effects are evaluated for complete systems. The preliminary results show that the lack of necessary data for consideration of ageing leads to highly uncertain models and, consequently, highly uncertain results. (author)

  17. On Boolean functions with generalized cryptographic properties

    NARCIS (Netherlands)

    Braeken, A.; Nikov, V.S.; Nikova, S.I.; Preneel, B.; Canteaut, A.; Viswanathan, K.

    2004-01-01

    By considering a new metric, we generalize cryptographic properties of Boolean functions such as resiliency and propagation characteristics. These new definitions result in a better understanding of the properties of Boolean functions and provide a better insight in the space defined by this metric.

  18. Evolutionary Algorithms for Boolean Functions in Diverse Domains of Cryptography.

    Science.gov (United States)

    Picek, Stjepan; Carlet, Claude; Guilley, Sylvain; Miller, Julian F; Jakobovic, Domagoj

    2016-01-01

    The role of Boolean functions is prominent in several areas including cryptography, sequences, and coding theory. Therefore, various methods for the construction of Boolean functions with desired properties are of direct interest. New motivations on the role of Boolean functions in cryptography with attendant new properties have emerged over the years. There are still many combinations of design criteria left unexplored and in this matter evolutionary computation can play a distinct role. This article concentrates on two scenarios for the use of Boolean functions in cryptography. The first uses Boolean functions as the source of the nonlinearity in filter and combiner generators. Although relatively well explored using evolutionary algorithms, it still presents an interesting goal in terms of the practical sizes of Boolean functions. The second scenario appeared rather recently where the objective is to find Boolean functions that have various orders of the correlation immunity and minimal Hamming weight. In both these scenarios we see that evolutionary algorithms are able to find high-quality solutions where genetic programming performs the best.
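
    The two properties named in the second scenario can be read off the Walsh-Hadamard spectrum of a Boolean function. The sketch below (Python, with an invented 4-variable example) computes nonlinearity as 2^(n-1) - max|W_f|/2 and the order of correlation immunity as the largest m such that W_f(w) = 0 for every w of Hamming weight 1..m.

      import itertools

      def walsh_spectrum(truth_table, n):
          """Walsh coefficients W_f(w) = sum_x (-1)^(f(x) XOR w.x) from the truth table."""
          spectrum = []
          for w in range(2 ** n):
              s = 0
              for x in range(2 ** n):
                  s += (-1) ** (truth_table[x] ^ bin(x & w).count("1") % 2)
              spectrum.append(s)
          return spectrum

      def nonlinearity(truth_table, n):
          return 2 ** (n - 1) - max(abs(v) for v in walsh_spectrum(truth_table, n)) // 2

      def correlation_immunity(truth_table, n):
          spec = walsh_spectrum(truth_table, n)
          order = 0
          for m in range(1, n + 1):
              if all(spec[w] == 0 for w in range(2 ** n) if bin(w).count("1") == m):
                  order = m
              else:
                  break
          return order

      # invented example: f(x1..x4) = x1 XOR x2 XOR (x3 AND x4)
      n = 4
      f = lambda x: x[0] ^ x[1] ^ (x[2] & x[3])
      tt = [f(x) for x in itertools.product((0, 1), repeat=n)]
      print("nonlinearity:", nonlinearity(tt, n))            # -> 4 for this f
      print("correlation immunity:", correlation_immunity(tt, n))   # -> 1 for this f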

  19. On the Computation of Comprehensive Boolean Gröbner Bases

    Science.gov (United States)

    Inoue, Shutaro

    We show that a comprehensive Boolean Gröbner basis of an ideal I in a Boolean polynomial ring B(bar A, bar X) with main variables bar X and parameters bar A can be obtained by simply computing a usual Boolean Gröbner basis of I regarding both bar X and bar A as variables with a certain block term order such that bar X ≫ bar A. This result, together with the fact that a finite Boolean ring is isomorphic to a direct product of the Galois field GF(2), enables us to compute a comprehensive Boolean Gröbner basis by only computing corresponding Gröbner bases in a polynomial ring over GF(2). Our implementation in the computer algebra system Risa/Asir shows that our method is extremely efficient compared with existing computation algorithms for comprehensive Boolean Gröbner bases.

  20. An adaptable Boolean net trainable to control a computing robot

    International Nuclear Information System (INIS)

    Lauria, F. E.; Prevete, R.; Milo, M.; Visco, S.

    1999-01-01

    We discuss a method to implement a Hebbian rule in a Boolean neural network so as to obtain an adaptable universal control system. We start by presenting both the Boolean neural net and the Hebbian rule we have considered. Then we discuss, first, the problems arising when the latter is naively implemented in a Boolean neural net and, second, the method allowing us to overcome them and the ensuing adaptable Boolean neural net paradigm. Next, we present the adaptable Boolean neural net as an intelligent control system, actually controlling a writing robot, and discuss how to train it in the execution of the elementary arithmetic operations on operands represented by numerals with an arbitrary number of digits.

  1. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity

    Science.gov (United States)

    Pecevski, Dejan

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates, in ensembles of pyramidal cells with lateral inhibition, a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference. PMID:27419214

  2. Numeracy of multiple sclerosis patients: A comparison of patients from the PERCEPT study to a German probabilistic sample.

    Science.gov (United States)

    Gaissmaier, Wolfgang; Giese, Helge; Galesic, Mirta; Garcia-Retamero, Rocio; Kasper, Juergen; Kleiter, Ingo; Meuth, Sven G; Köpke, Sascha; Heesen, Christoph

    2018-01-01

    A shared decision-making approach is suggested for multiple sclerosis (MS) patients. To properly evaluate benefits and risks of different treatment options accordingly, MS patients require sufficient numeracy - the ability to understand quantitative information. It is unknown whether MS affects numeracy. Therefore, we investigated whether patients' numeracy was impaired compared to a probabilistic national sample. As part of the larger prospective, observational, multicenter study PERCEPT, we assessed numeracy for a clinical study sample of German MS patients (N=725) with a standard test and compared them to a German probabilistic sample (N=1001), controlling for age, sex, and education. Within patients, we assessed whether disease variables (disease duration, disability, annual relapse rate, cognitive impairment) predicted numeracy beyond these demographics. MS patients showed a comparable level of numeracy as the probabilistic national sample (68.9% vs. 68.5% correct answers, P=0.831). In both samples, numeracy was higher for men and the highly educated. Disease variables did not predict numeracy beyond demographics within patients, and predictability was generally low. This sample of MS patients understood quantitative information on the same level as the general population. There is no reason to withhold quantitative information from MS patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.

  4. Boolean modeling in systems biology: an overview of methodology and applications

    International Nuclear Information System (INIS)

    Wang, Rui-Sheng; Albert, Réka; Saadatpour, Assieh

    2012-01-01

    Mathematical modeling of biological processes provides deep insights into complex cellular systems. While quantitative and continuous models such as differential equations have been widely used, their use is obstructed in systems wherein the knowledge of mechanistic details and kinetic parameters is scarce. On the other hand, a wealth of molecular level qualitative data on individual components and interactions can be obtained from the experimental literature and high-throughput technologies, making qualitative approaches such as Boolean network modeling extremely useful. In this paper, we build on our research to provide a methodology overview of Boolean modeling in systems biology, including Boolean dynamic modeling of cellular networks, attractor analysis of Boolean dynamic models, as well as inferring biological regulatory mechanisms from high-throughput data using Boolean models. We finally demonstrate how Boolean models can be applied to perform the structural analysis of cellular networks. This overview aims to acquaint life science researchers with the basic steps of Boolean modeling and its applications in several areas of systems biology. (paper)
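
    As an illustration of the basic steps described above, the following sketch simulates a small synchronous Boolean network and enumerates its attractors by exhaustive search; the three-node wiring is invented for the example and is not taken from the paper.

from itertools import product

# Hypothetical 3-node Boolean network: A is activated by C, B by A,
# and C is active when A is off or B is on.
rules = {
    "A": lambda s: s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: (not s["A"]) or s["B"],
}

def step(state):
    # One synchronous update: every node reads the previous state.
    return {n: bool(f(state)) for n, f in rules.items()}

def attractor_from(state):
    # Iterate until a state repeats; return the cycle (attractor).
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return seen[seen.index(state):]

attractors = set()
for bits in product([False, True], repeat=3):
    att = attractor_from(dict(zip("ABC", bits)))
    attractors.add(tuple(sorted(tuple(sorted(s.items())) for s in att)))
print(f"{len(attractors)} attractor(s) found")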

  5. Algebraic model checking for Boolean gene regulatory networks.

    Science.gov (United States)

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner bases (GB) computation in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.

  6. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  7. Efficient Instantiation of Parameterised Boolean Equation Systems to Parity Games

    Directory of Open Access Journals (Sweden)

    Gijs Kant

    2012-10-01

    Full Text Available Parameterised Boolean Equation Systems (PBESs) are sequences of Boolean fixed point equations with data variables, used for, e.g., verification of modal mu-calculus formulae for process algebraic specifications with data. Solving a PBES is usually done by instantiation to a Parity Game and then solving the game. Practical game solvers exist, but the instantiation step is the bottleneck. We enhance the instantiation in two steps. First, we transform the PBES to a Parameterised Parity Game (PPG), a PBES with each equation either conjunctive or disjunctive. Then we use LTSmin, which offers transition caching, efficient storage of states and both distributed and symbolic state space generation, for generating the game graph. To that end we define a language module for LTSmin, consisting of an encoding of variables with parameters into state vectors, a grouped transition relation and a dependency matrix to indicate the dependencies between parts of the state vector and transition groups. Benchmarks on some large case studies show that the method speeds up the instantiation significantly and decreases memory usage drastically.

  8. Boolean gates on actin filaments

    International Nuclear Information System (INIS)

    Siccardi, Stefano; Tuszynski, Jack A.; Adamatzky, Andrew

    2016-01-01

    Actin is a globular protein which forms long polar filaments in the eukaryotic cytoskeleton. Actin networks play a key role in cell mechanics and cell motility. They have also been implicated in information transmission and processing, memory and learning in neuronal cells. The actin filaments have been shown to support propagation of voltage pulses. Here we apply a coupled nonlinear transmission line model of actin filaments to study interactions between voltage pulses. To represent digital information we assign a logical TRUTH value to the presence of a voltage pulse in a given location of the actin filament, and FALSE to the pulse's absence, so that information flows along the filament with pulse transmission. When two pulses, representing Boolean values of input variables, interact, then they can facilitate or inhibit further propagation of each other. We explore this phenomenon to construct Boolean logical gates and a one-bit half-adder with interacting voltage pulses. We discuss implications of these findings for cellular processes and technological applications. - Highlights: • We simulate the interaction between voltage pulses on actin filaments. • We use a coupled nonlinear transmission line model. • We design Boolean logical gates via interactions between the voltage pulses. • We construct a one-bit half-adder with interacting voltage pulses.

  9. Boolean gates on actin filaments

    Energy Technology Data Exchange (ETDEWEB)

    Siccardi, Stefano, E-mail: ssiccardi@2ssas.it [The Unconventional Computing Centre, University of the West of England, Bristol (United Kingdom); Tuszynski, Jack A., E-mail: jackt@ualberta.ca [Department of Oncology, University of Alberta, Edmonton, Alberta (Canada); Adamatzky, Andrew, E-mail: andrew.adamatzky@uwe.ac.uk [The Unconventional Computing Centre, University of the West of England, Bristol (United Kingdom)

    2016-01-08

    Actin is a globular protein which forms long polar filaments in the eukaryotic cytoskeleton. Actin networks play a key role in cell mechanics and cell motility. They have also been implicated in information transmission and processing, memory and learning in neuronal cells. The actin filaments have been shown to support propagation of voltage pulses. Here we apply a coupled nonlinear transmission line model of actin filaments to study interactions between voltage pulses. To represent digital information we assign a logical TRUTH value to the presence of a voltage pulse in a given location of the actin filament, and FALSE to the pulse's absence, so that information flows along the filament with pulse transmission. When two pulses, representing Boolean values of input variables, interact, then they can facilitate or inhibit further propagation of each other. We explore this phenomenon to construct Boolean logical gates and a one-bit half-adder with interacting voltage pulses. We discuss implications of these findings for cellular processes and technological applications. - Highlights: • We simulate the interaction between voltage pulses on actin filaments. • We use a coupled nonlinear transmission line model. • We design Boolean logical gates via interactions between the voltage pulses. • We construct a one-bit half-adder with interacting voltage pulses.

  10. Probabilistic Modeling of the Fatigue Crack Growth Rate for Ni-base Alloy X-750

    International Nuclear Information System (INIS)

    Yoon, J.Y.; Nam, H.O.; Hwang, I.S.; Lee, T.H.

    2012-01-01

    Extending the operating life of existing nuclear power plants (NPPs) beyond 60 years raises many aging problems of passive components, such as PWSCC, IASCC, FAC and corrosion fatigue. Safety analysis combines deterministic and probabilistic analysis, and a general probabilistic analysis such as a probabilistic safety assessment (PSA) involves many uncertainties in parameters and relationships. Bayesian inference decreases these uncertainties by updating unknown parameters. The aims are to ensure the reliability of passive components (e.g. pipes) as well as active components (e.g. valves, pumps) in NPPs, to develop a probabilistic model for failures, and to update the fatigue crack growth rate (FCGR).

  11. A New Characterization of ACC0 and Probabilistic CC0

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Koucký, Michal

    2010-01-01

    Barrington, Straubing & Thérien (1990) conjectured that the Boolean And function cannot be computed by polynomial size constant depth circuits built from modular counting gates, i.e., by CC0 circuits. In this work we show that the And function can be computed by uniform probabilistic CC0 circuits that use only O(log n) random bits. This may be viewed as evidence contrary to the conjecture. As a consequence of our construction we get that all of ACC0 can be computed by probabilistic CC0 circuits that use only O(log n) random bits. Thus, if one were able to derandomize such circuits, one would obtain a collapse of circuit classes giving ACC0 = CC0. We present a derandomization of probabilistic CC0 circuits using And and Or gates to obtain ACC0 = And ο Or ο CC0 = Or ο And ο CC0. (And and Or gates of sublinear fan-in suffice in the non-uniform setting.) Both these results hold for uniform as well as non-uniform settings.

  12. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    Science.gov (United States)

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-08-29

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological systems; it efficiently describes stable-state identification but remains inconvenient for describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes as it follows the evolution of concentrations or activities of chemical species as a function of time, but it requires a large amount of information on parameters that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential equations.
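
    A minimal sketch of the idea of Gillespie-style simulation on a Boolean state space: each node carries activation and deactivation rates that may depend on the current Boolean state, waiting times are exponential in the total propensity, and one node flips per event. The two-node model and all rates below are invented for illustration and do not reproduce the authors' implementation.

import random

# Two-node toy model: A activates B, B inhibits A (all rates are assumed).
rate_up   = {"A": lambda s: 0.0 if s["B"] else 1.0,
             "B": lambda s: 2.0 if s["A"] else 0.0}
rate_down = {"A": lambda s: 3.0 if s["B"] else 0.0,
             "B": lambda s: 0.5}

def gillespie(state, t_end=10.0, seed=1):
    random.seed(seed)
    t, trace = 0.0, [(0.0, dict(state))]
    while t < t_end:
        # Propensity of flipping each node, given its current value.
        props = {n: (rate_down[n](state) if state[n] else rate_up[n](state))
                 for n in state}
        total = sum(props.values())
        if total == 0.0:          # absorbing (fixed) state reached
            break
        t += random.expovariate(total)
        r, acc = random.random() * total, 0.0
        for n, p in props.items():
            acc += p
            if r <= acc:
                state[n] = not state[n]
                break
        trace.append((t, dict(state)))
    return trace

print(len(gillespie({"A": True, "B": False})), "jumps recorded")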

  13. Large Sets in Boolean and Non-Boolean Groups and Topology

    Directory of Open Access Journals (Sweden)

    Ol’ga V. Sipacheva

    2017-10-01

    Full Text Available Various notions of large sets in groups, including the classical notions of thick, syndetic, and piecewise syndetic sets and the new notion of vast sets in groups, are studied with emphasis on the interplay between such sets in Boolean groups. Natural topologies closely related to vast sets are considered; as a byproduct, interesting relations between vast sets and ultrafilters are revealed.

  14. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, Scott [Applied Biomathematics, Setauket, NY (United States); Nelsen, Roger B. [Lewis & Clark College, Portland OR (United States); Hajagos, Janos [Applied Biomathematics, Setauket, NY (United States); Berleant, Daniel J. [Iowa State Univ., Ames, IA (United States); Zhang, Jianzhong [Iowa State Univ., Ames, IA (United States); Tucker, W. Troy [Applied Biomathematics, Setauket, NY (United States); Ginzburg, Lev R. [Applied Biomathematics, Setauket, NY (United States); Oberkampf, William L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  15. Boolean-Valued Belief Functions

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    2002-01-01

    Roč. 31, č. 2 (2002), s. 153-181 ISSN 0308-1079 R&D Projects: GA AV ČR IAA1030803 Institutional research plan: AV0Z1030915 Keywords: Dempster-Shafer theory * Boolean algebra Subject RIV: BA - General Mathematics Impact factor: 0.241, year: 2002

  16. Causal structure of oscillations in gene regulatory networks: Boolean analysis of ordinary differential equation attractors.

    Science.gov (United States)

    Sun, Mengyang; Cheng, Xianrui; Socolar, Joshua E S

    2013-06-01

    A common approach to the modeling of gene regulatory networks is to represent activating or repressing interactions using ordinary differential equations for target gene concentrations that include Hill function dependences on regulator gene concentrations. An alternative formulation represents the same interactions using Boolean logic with time delays associated with each network link. We consider the attractors that emerge from the two types of models in the case of a simple but nontrivial network: a figure-8 network with one positive and one negative feedback loop. We show that the different modeling approaches give rise to the same qualitative set of attractors with the exception of a possible fixed point in the ordinary differential equation model in which concentrations sit at intermediate values. The properties of the attractors are most easily understood from the Boolean perspective, suggesting that time-delay Boolean modeling is a useful tool for understanding the logic of regulatory networks.
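
    A rough sketch of the ordinary-differential-equation side of such a comparison for a figure-8 motif (one node shared by a positive and a negative feedback loop); the Hill parameters, degradation rates and the way the two loop inputs combine at the shared node are assumptions made for the example, not the paper's model.

import numpy as np

def hill_act(x, k=1.0, n=4):   # activating Hill function
    return x**n / (k**n + x**n)

def hill_rep(x, k=1.0, n=4):   # repressing Hill function
    return k**n / (k**n + x**n)

def rhs(state, gamma=1.0):
    # Figure-8 motif (illustrative): A<->B positive loop, A<->C negative loop,
    # with the two inputs to the shared node A combined multiplicatively.
    a, b, c = state
    da = hill_act(b) * hill_rep(c) - gamma * a
    db = hill_act(a) - gamma * b
    dc = hill_act(a) - gamma * c
    return np.array([da, db, dc])

x = np.array([0.1, 0.9, 0.1])
for _ in range(5000):                 # crude forward-Euler integration
    x = x + 0.01 * rhs(x)
print("late-time concentrations:", np.round(x, 3))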

  17. Nonlinear threshold Boolean automata networks and phase transitions

    OpenAIRE

    Demongeot, Jacques; Sené, Sylvain

    2010-01-01

    In this report, we present a formal approach that addresses the problem of emergence of phase transitions in stochastic and attractive nonlinear threshold Boolean automata networks. Nonlinear networks considered are informally defined on the basis of classical stochastic threshold Boolean automata networks in which specific interaction potentials of neighbourhood coalition are taken into account. More precisely, specific nonlinear terms compose local transition functions that define locally t...

  18. Complex network analysis of state spaces for random Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Shreim, Amer [Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Calgary, AB, T2N 1N4 (Canada); Berdahl, Andrew [Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Calgary, AB, T2N 1N4 (Canada); Sood, Vishal [Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Calgary, AB, T2N 1N4 (Canada); Grassberger, Peter [Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Calgary, AB, T2N 1N4 (Canada); Paczuski, Maya [Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Calgary, AB, T2N 1N4 (Canada)

    2008-01-15

    We apply complex network analysis to the state spaces of random Boolean networks (RBNs). An RBN contains N Boolean elements each with K inputs. A directed state space network (SSN) is constructed by linking each dynamical state, represented as a node, to its temporal successor. We study the heterogeneity of these SSNs at both local and global scales, as well as sample-to-sample fluctuations within an ensemble of SSNs. We use in-degrees of nodes as a local topological measure, and the path diversity (Shreim A et al 2007 Phys. Rev. Lett. 98 198701) of an SSN as a global topological measure. RBNs with 2 ≤ K ≤ 5 exhibit non-trivial fluctuations at both local and global scales, while K = 2 exhibits the largest sample-to-sample (possibly non-self-averaging) fluctuations. We interpret the observed 'multi-scale' fluctuations in the SSNs as indicative of the criticality and complexity of K = 2 RBNs. 'Garden of Eden' (GoE) states are nodes on an SSN that have in-degree zero. While in-degrees of non-GoE nodes for K > 1 SSNs can assume any integer value between 0 and 2^N, for K = 1 all the non-GoE nodes in a given SSN have the same in-degree which is always a power of two.

  19. Complex network analysis of state spaces for random Boolean networks

    International Nuclear Information System (INIS)

    Shreim, Amer; Berdahl, Andrew; Sood, Vishal; Grassberger, Peter; Paczuski, Maya

    2008-01-01

    We apply complex network analysis to the state spaces of random Boolean networks (RBNs). An RBN contains N Boolean elements each with K inputs. A directed state space network (SSN) is constructed by linking each dynamical state, represented as a node, to its temporal successor. We study the heterogeneity of these SSNs at both local and global scales, as well as sample-to-sample fluctuations within an ensemble of SSNs. We use in-degrees of nodes as a local topological measure, and the path diversity (Shreim A et al 2007 Phys. Rev. Lett. 98 198701) of an SSN as a global topological measure. RBNs with 2 ≤ K ≤ 5 exhibit non-trivial fluctuations at both local and global scales, while K = 2 exhibits the largest sample-to-sample (possibly non-self-averaging) fluctuations. We interpret the observed 'multi-scale' fluctuations in the SSNs as indicative of the criticality and complexity of K = 2 RBNs. 'Garden of Eden' (GoE) states are nodes on an SSN that have in-degree zero. While in-degrees of non-GoE nodes for K > 1 SSNs can assume any integer value between 0 and 2^N, for K = 1 all the non-GoE nodes in a given SSN have the same in-degree which is always a power of two.
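
    The construction of the state space network and the count of Garden-of-Eden states can be illustrated with a few lines of Python; the network size, K and the random seed below are arbitrary choices for the example.

import random
from itertools import product

def random_rbn(N=10, K=2, seed=0):
    # Random wiring and random truth tables for an N-node, K-input RBN.
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    tables = [[rng.randint(0, 1) for _ in range(2**K)] for _ in range(N)]
    return inputs, tables

def successor(state, inputs, tables):
    # Synchronous update: each node looks up its truth table entry.
    return tuple(tables[i][int("".join(str(state[j]) for j in inputs[i]), 2)]
                 for i in range(len(state)))

inputs, tables = random_rbn()
N = len(inputs)
indeg = {}
for s in product((0, 1), repeat=N):
    t = successor(s, inputs, tables)
    indeg[t] = indeg.get(t, 0) + 1
goe = 2**N - len(indeg)      # states never reached: in-degree zero
print(f"Garden-of-Eden states: {goe} of {2**N}")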

  20. PROBABILISTIC INTEREST RATE SETTING WITH A SHADOW BOARD: A DESCRIPTION OF THE PILOT PROJECT

    OpenAIRE

    TIMO HENCKEL; SHAUN VAHEY; LIZ WAKERLY

    2011-01-01

    This study aims to assess the scope for monetary policymakers to aggregate probabilistic interest rate advice. The members of a Shadow Board give probabilistic assessments of the appropriate (target) interest rate for Australia in real time. The pilot project will be running each month from August to December (inclusive) 2011, with the Shadow Board giving advice shortly before each decision by the Reserve Bank of Australia (RBA) Board.

  1. Optical programmable Boolean logic unit.

    Science.gov (United States)

    Chattopadhyay, Tanay

    2011-11-10

    Logic units are the building blocks of many important computational operations like arithmetic, multiplexer-demultiplexer, radix conversion, parity checker cum generator, etc. Multifunctional logic operation is essential in this respect. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input without changing the circuit design. This circuit has two outputs. One output is complementary to the other. Hence no loss of data can occur. The circuit is basically designed by a 2×2 polarization independent optical cross bar switch. The performance of the proposed circuit has been evaluated through numerical simulations. The binary logical states (0,1) are represented by the absence of light (null) and presence of light, respectively.
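
    The programmability idea can be illustrated in software by treating a 4-bit program as the truth table of a two-input Boolean function; this sketch is a logical analogue only and says nothing about the optical implementation.

def programmable_logic(a: int, b: int, program: int) -> int:
    # The 4-bit 'program' is the truth table of f: bit index 2*a + b
    # of 'program' holds the output for inputs (a, b).
    return (program >> (2 * a + b)) & 1

# Example programs (truth-table bits for inputs 00,01,10,11 -> bits 0..3):
AND, OR, XOR, NAND = 0b1000, 0b1110, 0b0110, 0b0111
for name, prog in [("AND", AND), ("OR", OR), ("XOR", XOR), ("NAND", NAND)]:
    outputs = [programmable_logic(a, b, prog) for a in (0, 1) for b in (0, 1)]
    print(name, outputs)

    The complementary output described in the abstract corresponds, in this analogue, to evaluating the bitwise complement of the program (program ^ 0b1111).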

  2. Griffin: A Tool for Symbolic Inference of Synchronous Boolean Molecular Networks

    Directory of Open Access Journals (Sweden)

    Stalin Muñoz

    2018-03-01

    Full Text Available Boolean networks are important models of biochemical systems, located at the high end of the abstraction spectrum. A number of Boolean gene networks have been inferred following essentially the same method. Such a method first considers experimental data for a typically underdetermined “regulation” graph. Next, Boolean networks are inferred by using biological constraints to narrow the search space, such as a desired set of (fixed-point or cyclic attractors. We describe Griffin, a computer tool enhancing this method. Griffin incorporates a number of well-established algorithms, such as Dubrova and Teslenko's algorithm for finding attractors in synchronous Boolean networks. In addition, a formal definition of regulation allows Griffin to employ “symbolic” techniques, able to represent both large sets of network states and Boolean constraints. We observe that when the set of attractors is required to be an exact set, prohibiting additional attractors, a naive Boolean coding of this constraint may be unfeasible. Such cases may be intractable even with symbolic methods, as the number of Boolean constraints may be astronomically large. To overcome this problem, we employ an Artificial Intelligence technique known as “clause learning” considerably increasing Griffin's scalability. Without clause learning only toy examples prohibiting additional attractors are solvable: only one out of seven queries reported here is answered. With clause learning, by contrast, all seven queries are answered. We illustrate Griffin with three case studies drawn from the Arabidopsis thaliana literature. Griffin is available at: http://turing.iimas.unam.mx/griffin.

  3. The mathematics of a quantum Hamiltonian computing half adder Boolean logic gate

    International Nuclear Information System (INIS)

    Dridi, G; Julien, R; Hliwa, M; Joachim, C

    2015-01-01

    The mathematics behind the quantum Hamiltonian computing (QHC) approach of designing Boolean logic gates with a quantum system are given. Using the quantum eigenvalue repulsion effect, the QHC AND, NAND, OR, NOR, XOR, and NXOR Hamiltonian Boolean matrices are constructed. This is applied to the construction of a QHC half adder Hamiltonian matrix requiring only six quantum states to fulfil a half-adder Boolean logical truth table. The QHC design rules open a nano-architectronic way of constructing Boolean logic gates inside a single molecule or atom by atom at the surface of a passivated semi-conductor. (paper)

  4. The mathematics of a quantum Hamiltonian computing half adder Boolean logic gate.

    Science.gov (United States)

    Dridi, G; Julien, R; Hliwa, M; Joachim, C

    2015-08-28

    The mathematics behind the quantum Hamiltonian computing (QHC) approach of designing Boolean logic gates with a quantum system are given. Using the quantum eigenvalue repulsion effect, the QHC AND, NAND, OR, NOR, XOR, and NXOR Hamiltonian Boolean matrices are constructed. This is applied to the construction of a QHC half adder Hamiltonian matrix requiring only six quantum states to fulfil a half-adder Boolean logical truth table. The QHC design rules open a nano-architectronic way of constructing Boolean logic gates inside a single molecule or atom by atom at the surface of a passivated semi-conductor.

  5. Probabilistic estimation of the constitutive parameters of polymers

    Directory of Open Access Journals (Sweden)

    Siviour C.R.

    2012-08-01

    Full Text Available The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.
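
    A toy sketch of a grid-based Bayesian update of a single material parameter from noisy measurements, in the same spirit; the synthetic data, the parameter name sigma0 and the noise level are invented and are unrelated to the Mulliken-Boyce parameters estimated in the paper.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic "measurements" of a yield stress at one strain rate
# (true value and noise level are made up for illustration).
true_sigma0, noise_sd = 120.0, 8.0
data = true_sigma0 + noise_sd * rng.standard_normal(20)

# Grid-based Bayesian update: flat prior over a plausible range of sigma0.
grid = np.linspace(80.0, 160.0, 801)
log_like = -0.5 * ((data[:, None] - grid[None, :]) / noise_sd) ** 2
posterior = np.exp(log_like.sum(axis=0))
posterior /= posterior.sum()

mean = (grid * posterior).sum()
sd = np.sqrt(((grid - mean) ** 2 * posterior).sum())
print(f"posterior mean {mean:.1f}, posterior sd {sd:.1f}")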

  6. Efficient Instantiation of Parameterised Boolean Equation Systems to Parity Games

    NARCIS (Netherlands)

    Kant, Gijs; van de Pol, Jan Cornelis; Wijs, A.J.; Bošnački, D.; Edelkamp, S.

    Parameterised Boolean Equation Systems (PBESs) are sequences of Boolean fixed point equations with data variables, used for, e.g., verification of modal μ-calculus formulae for process algebraic specifications with data. Solving a PBES is usually done by instantiation to a Parity Game and then

  7. Summing Boolean Algebras

    Institute of Scientific and Technical Information of China (English)

    Antonio AIZPURU; Antonio GUTIÉRREZ-DÁVILA

    2004-01-01

    In this paper we will study some families and subalgebras ( ) of ( )(N) that let us characterize the unconditional convergence of series through the weak convergence of subseries ∑i∈A xi, A ∈ ( ).As a consequence, we obtain a new version of the Orlicz-Pettis theorem, for Banach spaces. We also study some relationships between algebraic properties of Boolean algebras and topological properties of the corresponding Stone spaces.

  8. Risk analysis of radioactive waste repository based on the time dependent hazard rate

    International Nuclear Information System (INIS)

    Chang, S.H.; Cho, W.J.

    1984-01-01

    For the probabilistic risk analysis of a radioactive high-level waste repository, a simplified method based on the time-dependent hazard rate is proposed. The obtained results are compared with those from the time-independent hazard rate. Estimating the failure probability of the waste repository with this method gives more conservative results, especially when the half-life of the nuclide is longer and its retardation factor is smaller. (Auth.)
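
    For a time-dependent hazard rate h(t), the failure probability follows from F(t) = 1 - exp(-integral of h(u) du from 0 to t); a small numerical sketch follows, with a hypothetical hazard function chosen only for illustration.

import math

def failure_probability(hazard, t, steps=10_000):
    # F(t) = 1 - exp(-integral_0^t h(u) du), via the trapezoidal rule.
    dt = t / steps
    integral = sum(0.5 * (hazard(i * dt) + hazard((i + 1) * dt)) * dt
                   for i in range(steps))
    return 1.0 - math.exp(-integral)

# Hypothetical hazard growing with container age (per year, assumed values):
h = lambda t: 1e-5 * (1.0 + (t / 300.0) ** 2)
print(f"P(failure within 1000 y) = {failure_probability(h, 1000.0):.3e}")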

  9. Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach.

    Science.gov (United States)

    Zhang, Jianming; Sclaroff, Stan

    2016-05-01

    We demonstrate the usefulness of surroundedness for eye fixation prediction by proposing a Boolean Map based Saliency model (BMS). In our formulation, an image is characterized by a set of binary images, which are generated by randomly thresholding the image's feature maps in a whitened feature space. Based on a Gestalt principle of figure-ground segregation, BMS computes a saliency map by discovering surrounded regions via topological analysis of Boolean maps. Furthermore, we draw a connection between BMS and the Minimum Barrier Distance to provide insight into why and how BMS can properly capture the surroundedness cue via Boolean maps. The strength of BMS is verified by its simplicity, efficiency and superior performance compared with 10 state-of-the-art methods on seven eye tracking benchmark datasets.

  10. Minimum energy control and optimal-satisfactory control of Boolean control network

    International Nuclear Information System (INIS)

    Li, Fangfei; Lu, Xiwen

    2013-01-01

    In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates the minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, minimum energy, constrained minimum energy and optimal-satisfactory control designs for Boolean control networks are given, respectively. A numerical example is presented to illustrate the efficiency of the obtained results.
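
    A brute-force analogue of the idea (not the semi-tensor product formulation): build the state transition graph of a tiny Boolean control network, weight each transition by its control effort, and run Floyd's algorithm to obtain minimum-energy transfers between states. The update rules and the per-step energy (taken equal to u) are invented for the example.

from itertools import product

# Toy Boolean control network: state (x1, x2), scalar Boolean control u.
def step(x, u):
    x1, x2 = x
    return (int(x2 or u), int(x1))   # made-up update rules

states = list(product((0, 1), repeat=2))
INF = float("inf")
cost = {(s, t): INF for s in states for t in states}
for s in states:
    cost[(s, s)] = 0.0
for s in states:
    for u in (0, 1):
        t = step(s, u)
        cost[(s, t)] = min(cost[(s, t)], float(u))   # energy of one step = u

# Floyd's algorithm on the state transition graph gives the minimum total
# control energy between every pair of Boolean states.
for k in states:
    for i in states:
        for j in states:
            if cost[(i, k)] + cost[(k, j)] < cost[(i, j)]:
                cost[(i, j)] = cost[(i, k)] + cost[(k, j)]

print("minimum energy (0,0) -> (1,1):", cost[((0, 0), (1, 1))])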

  11. Autonomous Modeling, Statistical Complexity and Semi-annealed Treatment of Boolean Networks

    Science.gov (United States)

    Gong, Xinwei

    This dissertation presents three studies on Boolean networks. Boolean networks are a class of mathematical systems consisting of interacting elements with binary state variables. Each element is a node with a Boolean logic gate, and the presence of interactions between any two nodes is represented by directed links. Boolean networks that implement the logic structures of real systems are studied as coarse-grained models of the real systems. Large random Boolean networks are studied with mean field approximations and used to provide a baseline of possible behaviors of large real systems. This dissertation presents one study of the former type, concerning the stable oscillation of a yeast cell-cycle oscillator, and two studies of the latter type, respectively concerning the statistical complexity of large random Boolean networks and an extension of traditional mean field techniques that accounts for the presence of short loops. In the cell-cycle oscillator study, a novel autonomous update scheme is introduced to study the stability of oscillations in small networks. A motif that corrects pulse-growing perturbations and a motif that grows pulses are identified. A combination of the two motifs is capable of sustaining stable oscillations. Examining a Boolean model of the yeast cell-cycle oscillator using an autonomous update scheme yields evidence that it is endowed with such a combination. Random Boolean networks are classified as ordered, critical or disordered based on their response to small perturbations. In the second study, random Boolean networks are taken as prototypical cases for the evaluation of two measures of complexity based on a criterion for optimal statistical prediction. One measure, defined for homogeneous systems, does not distinguish between the static spatial inhomogeneity in the ordered phase and the dynamical inhomogeneity in the disordered phase. A modification in which complexities of individual nodes are calculated yields vanishing

  12. Refinement monoids, equidecomposability types, and boolean inverse semigroups

    CERN Document Server

    Wehrung, Friedrich

    2017-01-01

    Adopting a new universal algebraic approach, this book explores and consolidates the link between Tarski's classical theory of equidecomposability types monoids, abstract measure theory (in the spirit of Hans Dobbertin's work on monoid-valued measures on Boolean algebras) and the nonstable K-theory of rings. This is done via the study of a monoid invariant, defined on Boolean inverse semigroups, called the type monoid. The new techniques contrast with the currently available topological approaches. Many positive results, but also many counterexamples, are provided.

  13. Bell-Boole Inequality: Nonlocality or Probabilistic Incompatibility of Random Variables?

    Directory of Open Access Journals (Sweden)

    Andrei Khrennikov

    2008-03-01

    Full Text Available The main aim of this report is to inform the quantum information community about investigations on the problem of probabilistic compatibility of a family of random variables: the possibility of realizing such a family on the basis of a single probability measure (constructing a single Kolmogorov probability space). These investigations were started more than a hundred years ago by J. Boole (who invented Boolean algebras). The complete solution of the problem was obtained by the Soviet mathematician Vorobjev in the 1960s. Surprisingly, probabilists and statisticians obtained inequalities for probabilities and correlations among which one can find the famous Bell's inequality and its generalizations. Such inequalities appeared simply as constraints for probabilistic compatibility. In this framework one cannot see a priori any link to such problems as nonlocality and “death of reality” which are typically linked to Bell-type inequalities in the physics literature. We analyze the difference between the positions of mathematicians and quantum physicists. In particular, we found that one of the most reasonable explanations of probabilistic incompatibility is the mixing, in Bell-type inequalities, of statistical data from a number of experiments performed under different experimental contexts.
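
    The compatibility viewpoint can be checked numerically: for any single joint distribution of four ±1 random variables, the CHSH-type combination of pairwise correlations never exceeds 2. The distribution below is randomly generated and the variable names are generic.

import itertools, random

random.seed(0)
# Random joint distribution over four ±1 random variables A1, A2, B1, B2.
outcomes = list(itertools.product((-1, 1), repeat=4))
weights = [random.random() for _ in outcomes]
total = sum(weights)
probs = [w / total for w in weights]

def corr(i, j):
    # Correlation E[X_i * X_j] under the joint distribution.
    return sum(p * o[i] * o[j] for p, o in zip(probs, outcomes))

# CHSH combination: bounded by 2 whenever a single joint distribution exists.
S = corr(0, 2) + corr(0, 3) + corr(1, 2) - corr(1, 3)
print(f"|S| = {abs(S):.3f} <= 2")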

  14. Equivalence Checking of Combinational Circuits using Boolean Expression Diagrams

    DEFF Research Database (Denmark)

    Hulgaard, Henrik; Williams, Poul Frederick; Andersen, Henrik Reif

    1999-01-01

    The combinational logic-level equivalence problem is to determine whether two given combinational circuits implement the same Boolean function. This problem arises in a number of CAD applications, for example when checking the correctness of incremental design changes (performed either manually or by a design automation tool). This paper introduces a data structure called Boolean Expression Diagrams (BEDs) and two algorithms for transforming a BED into a Reduced Ordered Binary Decision Diagram (OBDD). BEDs are capable of representing any Boolean circuit in linear space and can exploit structural similarities between the two circuits that are compared. These properties make BEDs suitable for verifying the equivalence of combinational circuits. BEDs can be seen as an intermediate representation between circuits (which are compact) and OBDDs (which are canonical). Based on a large number of combinational

  15. Pipe fracture evaluations for leak-rate detection: Probabilistic models

    International Nuclear Information System (INIS)

    Rahman, S.; Wilkowski, G.; Ghadiali, N.

    1993-01-01

    This is the second in a series of three papers generated from studies on nuclear pipe fracture evaluations for leak-rate detection. This paper focuses on the development of novel probabilistic models for stochastic performance evaluation of degraded nuclear piping systems. It was accomplished here in three distinct stages. First, a statistical analysis was conducted to characterize various input variables for thermo-hydraulic analysis and elastic-plastic fracture mechanics, such as material properties of pipe, crack morphology variables, and location of cracks found in nuclear piping. Second, a new stochastic model was developed to evaluate the performance of degraded piping systems. It is based on accurate deterministic models for thermo-hydraulic and fracture mechanics analyses described in the first paper, statistical characterization of various input variables, and state-of-the-art methods of modern structural reliability theory. From this model, the conditional probability of failure as a function of leak-rate detection capability of the piping systems can be predicted. Third, a numerical example was presented to illustrate the proposed model for piping reliability analyses. Results clearly showed that the model provides satisfactory estimates of conditional failure probability with much less computational effort when compared with those obtained from Monte Carlo simulation. The probabilistic model developed in this paper will be applied to various piping in boiling water reactor and pressurized water reactor plants for leak-rate detection applications.

  16. Probabilistic simulation of mesoscopic “Schrödinger cat” states

    Energy Technology Data Exchange (ETDEWEB)

    Opanchuk, B.; Rosales-Zárate, L.; Reid, M.D.; Drummond, P.D., E-mail: pdrummond@swin.edu.au

    2014-02-01

    We carry out probabilistic phase-space sampling of mesoscopic Schrödinger cat quantum states, demonstrating multipartite Bell violations for up to 60 qubits. We use states similar to those generated in photonic and ion-trap experiments. These results show that mesoscopic quantum superpositions are directly accessible to probabilistic sampling, and we analyze the properties of sampling errors. We also demonstrate dynamical simulation of super-decoherence in ion traps. Our computer simulations can be either exponentially faster or slower than experiment, depending on the correlations measured.

  17. Probabilistic risk assessment course documentation. Volume 5. System reliability and analysis techniques Session D - quantification

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the probabilistic quantification of accident sequences and the link between accident sequences and consequences. Other sessions in this series focus on the quantification of system reliability and the development of event trees and fault trees. This course takes the viewpoint that event tree sequences or combinations of system failures and successes are available and that Boolean equations for system fault trees have been developed and are available. 93 figs., 11 tabs
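
    As a reminder of how such Boolean fault-tree equations are typically quantified once minimal cut sets are available, a small sketch follows; the cut sets and basic-event probabilities are invented for the example.

# Boolean fault-tree top event, already reduced to minimal cut sets.
# Basic-event probabilities are illustrative only.
p = {"P1": 3e-3, "P2": 3e-3, "V1": 1e-4, "DG": 5e-2}
minimal_cut_sets = [{"P1", "P2"}, {"V1"}, {"P1", "DG"}]

def cut_set_prob(cs):
    prob = 1.0
    for e in cs:
        prob *= p[e]
    return prob

# Rare-event approximation: sum of cut-set probabilities.
rare = sum(cut_set_prob(cs) for cs in minimal_cut_sets)

# Min-cut upper bound: 1 - prod(1 - P(cs)).
upper = 1.0
for cs in minimal_cut_sets:
    upper *= 1.0 - cut_set_prob(cs)
upper = 1.0 - upper

print(f"rare-event approx = {rare:.3e}, min-cut upper bound = {upper:.3e}")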

  18. Complexity classifications for different equivalence and audit problems for Boolean circuits

    OpenAIRE

    Böhler, Elmar; Creignou, Nadia; Galota, Matthias; Reith, Steffen; Schnoor, Henning; Vollmer, Heribert

    2010-01-01

    We study Boolean circuits as a representation of Boolean functions and consider different equivalence, audit, and enumeration problems. For a number of restricted sets of gate types (bases) we obtain efficient algorithms, while for all other gate types we show these problems are at least NP-hard.

  19. Dependencies, human interactions and uncertainties in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.

    1990-01-01

    In the context of Probabilistic Safety Assessment (PSA), three areas were investigated in a 4-year Nordic programme: dependencies with special emphasis on common cause failures, human interactions and uncertainty aspects. The approach was centered around comparative analyses in the form of Benchmark/Reference Studies and retrospective reviews. Weak points in available PSAs were identified and recommendations were made aimed at improving the consistency of the PSAs. The sensitivity of PSA results to basic assumptions was demonstrated and the sensitivity to data assignment and to choices of methods for analysis of selected topics was investigated. (author)

  20. BoolFilter: an R package for estimation and identification of partially-observed Boolean dynamical systems.

    Science.gov (United States)

    Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M

    2017-11-25

    Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression

  1. 3D Boolean operations in virtual surgical planning.

    Science.gov (United States)

    Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun

    2017-10-01

    Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not deal with singular edges and coplanar collisions, and created several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.

  2. Random networks of Boolean cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Miranda, Enrique [Comision Nacional de Energia Atomica, San Carlos de Bariloche (Argentina). Centro Atomico Bariloche

    1990-01-01

    Some recent results about random networks of Boolean automata -the Kauffman model- are reviewed. The structure of configuration space is explored. Ultrametricity between cycles is analyzed and the effects of noise in the dynamics are studied. (Author).

  3. Random networks of Boolean cellular automata

    International Nuclear Information System (INIS)

    Miranda, Enrique

    1990-01-01

    Some recent results about random networks of Boolean automata -the Kauffman model- are reviewed. The structure of configuration space is explored. Ultrametricity between cycles is analyzed and the effects of noise in the dynamics are studied. (Author)

  4. Vectorial Resilient PC(l) of Order k Boolean Functions from AG-Codes

    Institute of Scientific and Technical Information of China (English)

    Hao CHEN; Liang MA; Jianhua LI

    2011-01-01

    Propagation criteria and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1-4, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave a construction of Boolean functions satisfying PC(l) of order k from binary linear or nonlinear codes. In this paper, algebraic-geometric codes over GF(2^m) are used to modify the Carlet and Kurosawa-Satoh constructions to give vectorial resilient Boolean functions satisfying the PC(l) of order k criterion. This new construction is compared with previously known results.

  5. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
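
    A simplified sketch of the contrast drawn above: direct Monte Carlo evaluation of a top event with high-probability basic events versus the rare-event approximation over minimal cut sets, which overestimates in this regime. The structure function and probabilities are assumptions, not plant data.

import random

random.seed(42)
# Illustrative high-probability basic events (e.g. seismic-induced failures).
p = {"A": 0.4, "B": 0.5, "C": 0.3}
cut_sets = [{"A", "B"}, {"B", "C"}, {"A", "C"}]

def top_event(sample):
    # Top event occurs if any minimal cut set has all its events failed.
    return any(all(sample[e] for e in cs) for cs in cut_sets)

# Direct Monte Carlo estimate of the top-event probability.
n = 200_000
hits = sum(top_event({e: random.random() < q for e, q in p.items()})
           for _ in range(n))
mc = hits / n

def cs_prob(cs):
    prob = 1.0
    for e in cs:
        prob *= p[e]
    return prob

rare = sum(cs_prob(cs) for cs in cut_sets)  # rare-event approximation
print(f"Monte Carlo: {mc:.3f}   rare-event approximation: {rare:.3f}")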

  6. Representing Boolean Functions by Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree named binary decision diagram or branching program is widely known in logic design [2, 40]. This representation is equivalent to other forms

  7. Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.

    Science.gov (United States)

    Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C

    2016-11-01

    Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Controllability and observability of Boolean networks arising from biology

    Science.gov (United States)

    Li, Rui; Yang, Meng; Chu, Tianguang

    2015-02-01

    Boolean networks are currently receiving considerable attention as a computational scheme for system level analysis and modeling of biological systems. Studying control-related problems in Boolean networks may reveal new insights into the intrinsic control in complex biological systems and enable us to develop strategies for manipulating biological systems using exogenous inputs. This paper considers controllability and observability of Boolean biological networks. We propose a new approach, which draws from the rich theory of symbolic computation, to solve the problems. Consequently, simple necessary and sufficient conditions for reachability, controllability, and observability are obtained, and algorithmic tests for controllability and observability which are based on the Gröbner basis method are presented. As practical applications, we apply the proposed approach to several different biological systems, namely, the mammalian cell-cycle network, the T-cell activation network, the large granular lymphocyte survival signaling network, and the Drosophila segment polarity network, gaining novel insights into the control and/or monitoring of the specific biological systems.

  9. The spruce budworm and forest: a qualitative comparison of ODE and Boolean models

    Directory of Open Access Journals (Sweden)

    Raina Robeva

    2016-01-01

    Full Text Available Boolean and polynomial models of biological systems have emerged recently as viable companions to differential equations models. It is not immediately clear however whether such models are capable of capturing the multi-stable behaviour of certain biological systems: this behaviour is often sensitive to changes in the values of the model parameters, while Boolean and polynomial models are qualitative in nature. In the past few years, Boolean models of gene regulatory systems have been shown to capture multi-stability at the molecular level, confirming that such models can be used to obtain information about the system’s qualitative dynamics when precise information regarding its parameters may not be available. In this paper, we examine Boolean approximations of a classical ODE model of budworm outbreaks in a forest and show that these models exhibit a qualitative behaviour consistent with that derived from the ODE models. In particular, we demonstrate that these models can capture the bistable nature of insect population outbreaks, thus showing that Boolean models can be successfully utilized beyond the molecular level.

  10. Steady state analysis of Boolean molecular network models via model reduction and computational algebra.

    Science.gov (United States)

    Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard

    2014-06-26

A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite field with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models, even those of moderate connectivity. The problem for
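
    The algebraic formulation at the core of the second part can be sketched directly: a state is steady exactly when f_i(x) + x_i = 0 over the field with two elements for every node i, using the standard translation AND -> product, XOR -> sum, NOT x -> 1 + x, x OR y -> x + y + x*y. The toy network below and the brute-force solve are illustrative; the paper's wiring-diagram reduction and computer-algebra solver are not reproduced.

    ```python
    # Steady states as solutions of f_i(x) + x_i = 0 over GF(2).
    # The three-node network is an illustrative assumption; the paper's
    # reduction step and algebra software are omitted here.
    from itertools import product

    def f0(x):  # x0' = x0 OR x1  ->  x0 + x1 + x0*x1  (mod 2)
        return (x[0] + x[1] + x[0] * x[1]) % 2

    def f1(x):  # x1' = x0 AND x2 ->  x0*x2            (mod 2)
        return (x[0] * x[2]) % 2

    def f2(x):  # x2' = NOT x1    ->  1 + x1           (mod 2)
        return (1 + x[1]) % 2

    functions = (f0, f1, f2)

    steady = [x for x in product((0, 1), repeat=3)
              if all((f(x) + x[i]) % 2 == 0 for i, f in enumerate(functions))]
    print(steady)   # -> [(0, 0, 1)]
    ```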

  11. Effectiveness of Securities with Fuzzy Probabilistic Return

    Directory of Open Access Journals (Sweden)

    Krzysztof Piasecki

    2011-01-01

Full Text Available The generalized fuzzy present value of a security is defined here as fuzzy valued utility of cash flow. The generalized fuzzy present value cannot depend on the value of future cash flow. There exists such a generalized fuzzy present value which is not a fuzzy present value in the sense given by some authors. If the present value is a fuzzy number and the future value is a random one, then the return rate is given as a probabilistic fuzzy subset on a real line. This kind of return rate is called a fuzzy probabilistic return. The main goal of this paper is to derive the family of effective securities with fuzzy probabilistic return. Achieving this goal requires the study of the basic parameters characterizing fuzzy probabilistic return. Therefore, fuzzy expected value and variance are determined for this case of return. These results are a starting point for constructing a three-dimensional image. The set of effective securities is introduced as the Pareto optimal set determined by the maximization of the expected return rate and minimization of the variance. Finally, the set of effective securities is distinguished as a fuzzy set. These results are obtained without the assumption that the distribution of future values is Gaussian. (original abstract)

  12. Classical Boolean logic gates with quantum systems

    International Nuclear Information System (INIS)

    Renaud, N; Joachim, C

    2011-01-01

An analytical method is proposed to implement any classical Boolean function in a small quantum system by taking advantage of its electronic transport properties. The logical input, α = {α 1 , ..., α N }, is used to control well-identified parameters of the Hamiltonian of the system noted H 0 (α). The logical output is encoded in the tunneling current intensity passing through the quantum system when connected to conducting electrodes. It is demonstrated how to implement the six symmetric two-input/one-output Boolean functions in a quantum system. This system can be switched from one logic function to another by changing its structural parameters. The stability of the logic gates is discussed, perturbing the Hamiltonian with noise sources and studying the effect of decoherence.

  13. A utility theoretic view on probabilistic safety criteria

    International Nuclear Information System (INIS)

    Holmberg, J.E.

    1997-03-01

    A probabilistic safety criterion specifies the maximum acceptable hazard rates of various accidental consequences. Assuming that the criterion depends also on the benefit of the process to society and on the licensing time applied, we can regard such statements as preference relations. In this paper, a probabilistic safety criterion is interpreted to mean that if the accident hazard rate is higher than the accident hazard rate criterion, then the optimal stopping time of a hazardous process is shorter than the licensing time. This interpretation yields a condition for a feasible utility function. In particular, we derive such a condition for the parameters of a linear plus exponential utility function. (orig.) (12 refs.)

  14. When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.

    Science.gov (United States)

    Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko

    2015-08-01

When unique identifiers are unavailable, successful record linkage depends greatly on data quality and the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of linkage methodology and validation method. The simulation study aimed to understand data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rate of missing and error, and file size to increase linkage patterns and difficulties. We assessed the performance difference of linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with a low rate of missing and error in data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rate of missing and error of linkage variables was key to choosing between linkage methods. In general, probabilistic linkage was a better choice, but for exceptionally good quality data (<5% error), deterministic linkage was a more resource-efficient choice. Copyright © 2015 Elsevier Inc. All rights reserved.
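
    The contrast studied here can be sketched with two toy matching rules: exact agreement on every field (deterministic) versus a log-likelihood-ratio score of the kind commonly used for probabilistic linkage. The field agreement/disagreement weights, threshold, and records below are illustrative assumptions, not values from the simulation study.

    ```python
    # Hedged sketch contrasting deterministic exact matching with a simple
    # probabilistic (log-likelihood-ratio) score. Weights, threshold, and
    # records are made-up illustrations, not the study's parameters.
    from math import log2

    def deterministic_match(a, b, fields):
        return all(a[f] == b[f] and a[f] != "" for f in fields)

    def probabilistic_score(a, b, params):
        """Sum log2(m/u) for agreeing fields and log2((1-m)/(1-u)) otherwise."""
        score = 0.0
        for field, (m, u) in params.items():
            agrees = a[field] == b[field] and a[field] != ""
            score += log2(m / u) if agrees else log2((1 - m) / (1 - u))
        return score

    rec_a = {"last": "smith", "birth_year": "1980", "zip": "12345"}
    rec_b = {"last": "smith", "birth_year": "",     "zip": "12345"}  # missing year

    params = {"last": (0.95, 0.01), "birth_year": (0.98, 0.02), "zip": (0.9, 0.05)}

    print(deterministic_match(rec_a, rec_b, params))        # False (missing field)
    print(probabilistic_score(rec_a, rec_b, params) > 4.0)  # True with this illustrative threshold
    ```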

  15. E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates

    Energy Technology Data Exchange (ETDEWEB)

    Dyer, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-12-12

    A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (AreaUAi/AreaSAi) for closure cap subsidence scenarios that differ in assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
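
    The record does not reproduce the E-Area model itself, but the Monte Carlo structure it describes can be sketched generically: sample which compartments subside in each realization, compute an upslope-intact to subsided area ratio, and collect the distribution across realizations. The compartment geometry, the percent subsidence, and the ratio definition below are all illustrative assumptions.

    ```python
    # Heavily hedged sketch of the Monte Carlo structure only: random subsidence
    # assignments and a distribution of intact-to-subsided area ratios. The
    # geometry and ratio definition are assumptions, not the E-Area PA model.
    import random
    import statistics

    def sample_ratio(n_compartments, percent_subsided, rng):
        n_subsided = max(1, round(n_compartments * percent_subsided / 100))
        subsided = set(rng.sample(range(n_compartments), n_subsided))
        ratios = []
        for i in sorted(subsided):
            upslope_intact = sum(1 for j in range(i) if j not in subsided)
            ratios.append(upslope_intact / 1.0)   # unit-area compartments assumed
        return statistics.mean(ratios)

    rng = random.Random(42)
    samples = [sample_ratio(n_compartments=20, percent_subsided=10, rng=rng)
               for _ in range(10_000)]
    print(f"mean ratio: {statistics.mean(samples):.2f}, "
          f"95th percentile: {sorted(samples)[int(0.95 * len(samples))]:.2f}")
    ```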

  16. A Boolean Approach to Airline Business Model Innovation

    DEFF Research Database (Denmark)

    Hvass, Kristian Anders

    Research in business model innovation has identified its significance in creating a sustainable competitive advantage for a firm, yet there are few empirical studies identifying which combination of business model activities lead to success and therefore deserve innovative attention. This study...... analyzes the business models of North America low-cost carriers from 2001 to 2010 using a Boolean minimization algorithm to identify which combinations of business model activities lead to operational profitability. The research aim is threefold: complement airline literature in the realm of business model...... innovation, introduce Boolean minimization methods to the field, and propose alternative business model activities to North American carriers striving for positive operating results....

  17. On Kolmogorov's superpositions and Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions they will show that for obtaining minimum size neural networks for implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations require exponential size (in the worst case), it will follow that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.

  18. Evolutionary Algorithms for Boolean Queries Optimization

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Snášel, Václav; Neruda, Roman; Owais, S.S.J.; Krömer, P.

    2006-01-01

    Roč. 3, č. 1 (2006), s. 15-20 ISSN 1790-0832 R&D Projects: GA AV ČR 1ET100300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : evolutionary algorithms * genetic algorithms * information retrieval * Boolean query Subject RIV: BA - General Mathematics

  19. Practical algorithms for linear boolean-width

    NARCIS (Netherlands)

    ten Brinke, C.B.; van Houten, F.J.P.; Bodlaender, H.L.

    2015-01-01

    In this paper, we give a number of new exact algorithms and heuristics to compute linear boolean decompositions, and experimentally evaluate these algorithms. The experimental evaluation shows that significant improvements can be made with respect to running time without increasing the width of the

  20. Practical algorithms for linear Boolean-width

    NARCIS (Netherlands)

    ten Brinke, C.B.; van Houten, F.J.P.; Bodlaender, H.L.

    2015-01-01

    In this paper, we give a number of new exact algorithms and heuristics to compute linear boolean decompositions, and experimentally evaluate these algorithms. The experimental evaluation shows that significant improvements can be made with respect to running time without increasing the width of the

  1. Probabilistic estimation of residential air exchange rates for ...

    Science.gov (United States)

Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of
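
    The probabilistic structure of such an algorithm can be sketched generically: sample housing and weather inputs from distributions and push each draw through a simplified infiltration relation to obtain an AER distribution. The sketch below assumes the simplified LBL/ASHRAE-style form Q = A_L * sqrt(Cs*|dT| + Cw*U^2) with purely illustrative coefficients and input distributions; the published algorithm's housing-survey inputs and window-opening adjustment are not reproduced.

    ```python
    # Hedged sketch of probabilistic AER estimation. The infiltration form,
    # coefficients, and input distributions are illustrative assumptions chosen
    # only to give plausible magnitudes, not the EPA algorithm's actual inputs.
    import math
    import random
    import statistics

    def sample_aer(rng):
        leakage_area = rng.lognormvariate(math.log(0.05), 0.5)   # m^2, assumed
        volume = rng.uniform(250.0, 600.0)                        # m^3, assumed
        delta_t = abs(rng.gauss(10.0, 8.0))                       # K,   assumed
        wind = abs(rng.gauss(3.0, 1.5))                           # m/s, assumed
        cs, cw = 0.06, 0.07          # stack/wind coefficients, illustrative values
        q = leakage_area * math.sqrt(cs * delta_t + cw * wind ** 2)  # m^3/s
        return q * 3600.0 / volume   # air changes per hour

    rng = random.Random(0)
    aers = [sample_aer(rng) for _ in range(20_000)]
    q1, q2, q3 = statistics.quantiles(aers, n=4)
    print(f"median AER: {q2:.2f} 1/h, IQR: {q1:.2f}-{q3:.2f}")
    ```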

  2. Boolean Queries Optimization by Genetic Algorithms

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Owais, S.S.J.; Krömer, P.; Snášel, Václav

    2005-01-01

    Roč. 15, - (2005), s. 395-409 ISSN 1210-0552 R&D Projects: GA AV ČR 1ET100300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : evolutionary algorithms * genetic algorithms * genetic programming * information retrieval * Boolean query Subject RIV: BB - Applied Statistics, Operational Research

  3. Constrained mathematics evaluation in probabilistic logic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Arlin Cooper, J

    1998-06-01

    A challenging problem in mathematically processing uncertain operands is that constraints inherent in the problem definition can require computations that are difficult to implement. Examples of possible constraints are that the sum of the probabilities of partitioned possible outcomes must be one, and repeated appearances of the same variable must all have the identical value. The latter, called the 'repeated variable problem', will be addressed in this paper in order to show how interval-based probabilistic evaluation of Boolean logic expressions, such as those describing the outcomes of fault trees and event trees, can be facilitated in a way that can be readily implemented in software. We will illustrate techniques that can be used to transform complex constrained problems into trivial problems in most tree logic expressions, and into tractable problems in most other cases.
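
    The repeated variable problem can be made concrete with a small example: in the expression A OR (A AND B), evaluating each appearance of A as if it were a distinct interval-valued quantity gives bounds that differ from the exact interval, which is recovered once the expression is rewritten so that A appears only once (by absorption). The expression and intervals below are illustrative; the paper's transformation techniques are not reproduced.

    ```python
    # Sketch of the 'repeated variable problem' in interval-based probabilistic
    # evaluation of a Boolean expression. Expression and intervals are
    # illustrative only.
    from itertools import product

    def or_indep(x, y):             # P(X or Y) assuming X and Y independent
        return x + y - x * y

    def naive_interval(a_iv, b_iv):
        """Both appearances of A treated as separate interval-valued operands."""
        vals = [or_indep(a1, a2 * b) for a1, a2, b in product(a_iv, a_iv, b_iv)]
        return min(vals), max(vals)   # corners suffice: the expression is multilinear

    def rewritten_interval(a_iv, b_iv):
        """A OR (A AND B) = A by absorption, so the result is just the A interval."""
        return a_iv

    a_iv, b_iv = (0.1, 0.3), (0.2, 0.4)
    print(naive_interval(a_iv, b_iv))      # (0.118, 0.384): wrong, the two A's were treated as independent
    print(rewritten_interval(a_iv, b_iv))  # (0.1, 0.3): exact
    ```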

  4. A Probabilistic Framework for Security Scenarios with Dependent Actions

    NARCIS (Netherlands)

    Kordy, Barbara; Pouly, Marc; Schweizer, Patrick; Albert, Elvira; Sekereinsk, Emil

    2014-01-01

    This work addresses the growing need of performing meaningful probabilistic analysis of security. We propose a framework that integrates the graphical security modeling technique of attack–defense trees with probabilistic information expressed in terms of Bayesian networks. This allows us to perform

  5. Comparing Categorical and Probabilistic Fingerprint Evidence.

    Science.gov (United States)

    Garrett, Brandon; Mitchell, Gregory; Scurich, Nicholas

    2018-04-23

    Fingerprint examiners traditionally express conclusions in categorical terms, opining that impressions do or do not originate from the same source. Recently, probabilistic conclusions have been proposed, with examiners estimating the probability of a match between recovered and known prints. This study presented a nationally representative sample of jury-eligible adults with a hypothetical robbery case in which an examiner opined on the likelihood that a defendant's fingerprints matched latent fingerprints in categorical or probabilistic terms. We studied model language developed by the U.S. Defense Forensic Science Center to summarize results of statistical analysis of the similarity between prints. Participant ratings of the likelihood the defendant left prints at the crime scene and committed the crime were similar when exposed to categorical and strong probabilistic match evidence. Participants reduced these likelihoods when exposed to the weaker probabilistic evidence, but did not otherwise discriminate among the prints assigned different match probabilities. © 2018 American Academy of Forensic Sciences.

  6. Implementation of condition-dependent probabilistic risk assessment using surveillance data on passive components

    International Nuclear Information System (INIS)

    Lewandowski, Radoslaw; Denning, Richard; Aldemir, Tunc; Zhang, Jinsuo

    2016-01-01

    Highlights: • Condition-dependent probabilistic risk assessment (PRA). • Time-dependent characterization of plant-specific risk. • Containment bypass involving in secondary system piping and SCC in SG tubes. - Abstract: A great deal of surveillance data are collected for a nuclear power plant that reflect the changing condition of the plant as it ages. Although surveillance data are used to determine failure probabilities of active components for the plant’s probabilistic risk assessment (PRA) and to indicate the need for maintenance activities, they are not used in a structured manner to characterize the evolving risk of the plant. The present study explores the feasibility of using a condition-dependent PRA framework that takes a first principles approach to modeling the progression of degradation mechanisms to characterize evolving risk, periodically adapting the model to account for surveillance results. A case study is described involving a potential containment bypass accident sequence due to the progression of flow-accelerated corrosion in secondary system piping and stress corrosion cracking of steam generator tubes. In this sequence, a steam line break accompanied by failure to close of a main steam isolation valve results in depressurization of the steam generator and induces the rupture of one or more faulted steam generator tubes. The case study indicates that a condition-dependent PRA framework might be capable of providing early identification of degradation mechanisms important to plant risk.

  7. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters

  8. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    Science.gov (United States)

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…

  9. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  10. Quantum algorithms on Walsh transform and Hamming distance for Boolean functions

    Science.gov (United States)

    Xie, Zhengwei; Qiu, Daowen; Cai, Guangya

    2018-06-01

Walsh spectrum or Walsh transform is an alternative description of Boolean functions. In this paper, we explore quantum algorithms to approximate the absolute value of the Walsh transform W_f at a single point z_0 (i.e., |W_f(z_0)|) for n-variable Boolean functions with probability at least 8/π^2, using O(1/(|W_f(z_0)| ε)) queries for accuracy ε, while the best known classical algorithm requires O(2^n) queries. The Hamming distance between Boolean functions is used to study linearity testing and other important problems. We take advantage of the Walsh transform to calculate the Hamming distance between two n-variable Boolean functions f and g using O(1) queries in some cases. Then, we exploit another quantum algorithm which converts computing the Hamming distance between two Boolean functions to quantum amplitude estimation (i.e., approximate counting). If Ham(f,g) = t ≠ 0, we can approximately compute Ham(f,g) with probability at least 2/3 by combining our algorithm with the Approx-Count(f,ε) algorithm, using an expected number of Θ(√(N/(⌊εt⌋+1)) + √(t(N−t))/(⌊εt⌋+1)) queries for accuracy ε. Moreover, our algorithm is optimal, while the exact classical query complexity for the above problem is Θ(N) and the classical query complexity with accuracy ε is O((1/ε^2)·N/(t+1)), where N = 2^n. Finally, we present three exact quantum query algorithms for two promise problems on Hamming distance using O(1) queries, while any classical deterministic algorithm solving the problem uses Ω(2^n) queries.
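
    The quantities these quantum algorithms estimate can be computed classically (at cost exponential in n) as a reference, using the definition W_f(z) = Σ_x (−1)^(f(x) ⊕ z·x) and the standard identity Ham(f,g) = (2^n − W_{f⊕g}(0))/2. The example functions below are arbitrary.

    ```python
    # Classical reference computation (exponential in n) of a Walsh coefficient
    # and of the Hamming distance between two Boolean functions, checking the
    # identity Ham(f, g) = (2^n - W_{f XOR g}(0)) / 2. Example functions are arbitrary.
    from itertools import product

    def walsh(f, z, n):
        """W_f(z) = sum over x of (-1)^(f(x) XOR z.x), z.x the inner product mod 2."""
        total = 0
        for x in product((0, 1), repeat=n):
            dot = sum(zi * xi for zi, xi in zip(z, x)) % 2
            total += (-1) ** (f(x) ^ dot)
        return total

    def hamming(f, g, n):
        return sum(f(x) != g(x) for x in product((0, 1), repeat=n))

    n = 3
    f = lambda x: x[0] ^ (x[1] & x[2])
    g = lambda x: x[0] ^ x[1]
    h = lambda x: f(x) ^ g(x)

    print(hamming(f, g, n), (2 ** n - walsh(h, (0,) * n, n)) // 2)   # both equal 2
    ```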

  11. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  12. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  13. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  14. Valid Probabilistic Predictions for Ginseng with Venn Machines Using Electronic Nose

    Directory of Open Access Journals (Sweden)

    You Wang

    2016-07-01

Full Text Available In the application of electronic noses (E-noses), probabilistic prediction is a good way to estimate how confident we are about our prediction. In this work, a homemade E-nose system embedded with 16 metal-oxide semi-conductive gas sensors was used to discriminate nine kinds of ginsengs of different species or production places. A flexible machine learning framework, the Venn machine (VM), was introduced to make probabilistic predictions for each prediction. Three Venn predictors were developed based on three classical probabilistic prediction methods (Platt’s method, Softmax regression and Naive Bayes). The three Venn predictors and the three classical probabilistic prediction methods were compared in terms of classification rate and especially the validity of the estimated probability. A best classification rate of 88.57% was achieved with Platt’s method in offline mode, and the classification rate of VM-SVM (Venn machine based on Support Vector Machine) was 86.35%, just 2.22% lower. The validity of the Venn predictors was better than that of the corresponding classical probabilistic prediction methods. The validity of VM-SVM was superior to the other methods. The results demonstrated that the Venn machine is a flexible tool to make precise and valid probabilistic prediction in the application of E-nose, and VM-SVM achieved the best performance for the probabilistic prediction of ginseng samples.

  15. Document Ranking in E-Extended Boolean Logic

    Czech Academy of Sciences Publication Activity Database

    Holub, M.; Húsek, Dušan; Pokorný, J.

    1996-01-01

    Roč. 4, č. 7 (1996), s. 3-17 ISSN 1310-0513. [Annual Colloquium on IR Research /19./. Aberdeen, 08.04.1997-09.04.1997] R&D Projects: GA ČR GA102/94/0728 Keywords : information retrieval * document ranking * extended Boolean logic

  16. Parallel object-oriented term rewriting : the booleans

    NARCIS (Netherlands)

    Rodenburg, P.H.; Vrancken, J.L.M.

    As a first case study in parallel object-oriented term rewriting, we give two implementations of term rewriting algorithms for boolean terms, using the parallel object-oriented features of the language Pool-T. The term rewriting systems are specified in the specification formalism

  17. Boolean representations of simplicial complexes and matroids

    CERN Document Server

    Rhodes, John

    2015-01-01

    This self-contained monograph explores a new theory centered around boolean representations of simplicial complexes leading to a new class of complexes featuring matroids as central to the theory. The book illustrates these new tools to study the classical theory of matroids as well as their important geometric connections. Moreover, many geometric and topological features of the theory of matroids find their counterparts in this extended context.   Graduate students and researchers working in the areas of combinatorics, geometry, topology, algebra and lattice theory will find this monograph appealing due to the wide range of new problems raised by the theory. Combinatorialists will find this extension of the theory of matroids useful as it opens new lines of research within and beyond matroids. The geometric features and geometric/topological applications will appeal to geometers. Topologists who desire to perform algebraic topology computations will appreciate the algorithmic potential of boolean represent...

  18. Circulant Matrices and Affine Equivalence of Monomial Rotation Symmetric Boolean Functions

    Science.gov (United States)

    2015-01-01

degree of the MRS is, we have a similar result as [40, Theorem 1.1] for n = 4p (p prime), or squarefree integers n, which along with our Theorem 5.2…

  19. Use of probabilistic relational model (PRM) for dependability analysis of complex systems

    OpenAIRE

    Medina-Oliva , Gabriela; Weber , Philippe; Levrat , Eric; Iung , Benoît

    2010-01-01

International audience; This paper proposes a methodology to develop an aided decision-making tool for assessing the dependability and performances (i.e. reliability) of an industrial system. This tool is built on a model based on a new formalism, called the probabilistic relational model (PRM), which is adapted to deal with large and complex systems. The model is formalized from functional, dysfunctional and informational studies of the technical industrial systems. An application of this meth...

  20. PATHLOGIC-S: a scalable Boolean framework for modelling cellular signalling.

    Directory of Open Access Journals (Sweden)

    Liam G Fearnley

    Full Text Available Curated databases of signal transduction have grown to describe several thousand reactions, and efficient use of these data requires the development of modelling tools to elucidate and explore system properties. We present PATHLOGIC-S, a Boolean specification for a signalling model, with its associated GPL-licensed implementation using integer programming techniques. The PATHLOGIC-S specification has been designed to function on current desktop workstations, and is capable of providing analyses on some of the largest currently available datasets through use of Boolean modelling techniques to generate predictions of stable and semi-stable network states from data in community file formats. PATHLOGIC-S also addresses major problems associated with the presence and modelling of inhibition in Boolean systems, and reduces logical incoherence due to common inhibitory mechanisms in signalling systems. We apply this approach to signal transduction networks including Reactome and two pathways from the Panther Pathways database, and present the results of computations on each along with a discussion of execution time. A software implementation of the framework and model is freely available under a GPL license.

  1. Systems analysis approach to probabilistic modeling of fault trees

    International Nuclear Information System (INIS)

    Bartholomew, R.J.; Qualls, C.R.

    1985-01-01

    A method of probabilistic modeling of fault tree logic combined with stochastic process theory (Markov modeling) has been developed. Systems are then quantitatively analyzed probabilistically in terms of their failure mechanisms including common cause/common mode effects and time dependent failure and/or repair rate effects that include synergistic and propagational mechanisms. The modeling procedure results in a state vector set of first order, linear, inhomogeneous, differential equations describing the time dependent probabilities of failure described by the fault tree. The solutions of this Failure Mode State Variable (FMSV) model are cumulative probability distribution functions of the system. A method of appropriate synthesis of subsystems to form larger systems is developed and applied to practical nuclear power safety systems

  2. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters characterizing both time (in the worst- and average-case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to make multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.

  3. Model checking optimal finite-horizon control for probabilistic gene regulatory networks.

    Science.gov (United States)

    Wei, Ou; Guo, Zonghao; Niu, Yun; Liao, Wenyuan

    2017-12-14

Probabilistic Boolean networks (PBNs) have been proposed for analyzing external control in gene regulatory networks with incorporation of uncertainty. A context-sensitive PBN with perturbation (CS-PBNp), extending a PBN with context-sensitivity to reflect the inherent biological stability and random perturbations to express the impact of external stimuli, is considered to be more suitable for modeling small biological systems intervened by conditions from the outside. In this paper, we apply probabilistic model checking, a formal verification technique, to optimal control for a CS-PBNp that minimizes the expected cost over a finite control horizon. We first describe a procedure of modeling a CS-PBNp using the language provided by a widely used probabilistic model checker PRISM. We then analyze the reward-based temporal properties and the computation in probabilistic model checking; based on the analysis, we provide a method to formulate the optimal control problem as minimum reachability reward properties. Furthermore, we incorporate control and state cost information into the PRISM code of a CS-PBNp such that automated model checking of a minimum reachability reward property on the code gives the solution to the optimal control problem. We conduct experiments on two examples, an apoptosis network and a WNT5A network. Preliminary experiment results show the feasibility and effectiveness of our approach. The approach based on probabilistic model checking for optimal control avoids explicit computation of large-size state transition relations associated with PBNs. It enables a natural depiction of the dynamics of gene regulatory networks, and provides a canonical form to formulate optimal control problems using temporal properties that can be automatically solved by leveraging the analysis power of underlying model checking engines. This work will be helpful for further utilization of the advances in formal verification techniques in systems biology.
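
    The optimization underneath such a formulation can also be seen directly as backward dynamic programming over the controlled transition probabilities of a tiny PBN, rather than as a PRISM reachability-reward property. The two-gene network, the control action, the costs, and the horizon below are all made-up illustrations, not the apoptosis or WNT5A models of the paper.

    ```python
    # Hedged sketch: finite-horizon optimal control of a toy controlled PBN by
    # backward dynamic programming. The network, costs, and horizon are assumptions.
    from itertools import product

    STATES = list(product((0, 1), repeat=2))   # (gene1, gene2)

    def transition(state, control):
        """P(next_state | state, control): two candidate rule sets chosen with
        probability 0.7 / 0.3; the control flips gene2's input."""
        g1, g2 = state
        u = g2 ^ control
        candidates = [((g2, g1 & u), 0.7),
                      ((g1 | u, g1), 0.3)]
        probs = {s: 0.0 for s in STATES}
        for nxt, p in candidates:
            probs[nxt] += p
        return probs

    def state_cost(state):
        return 5.0 if state == (1, 1) else 0.0   # (1,1) taken as the undesirable state

    def control_cost(control):
        return 1.0 if control else 0.0

    def optimal_policy(horizon):
        value = {s: state_cost(s) for s in STATES}      # terminal cost
        policy = []
        for _ in range(horizon):
            new_value, step_policy = {}, {}
            for s in STATES:
                best = None
                for u in (0, 1):
                    exp = sum(p * value[t] for t, p in transition(s, u).items())
                    cost = state_cost(s) + control_cost(u) + exp
                    if best is None or cost < best[0]:
                        best = (cost, u)
                new_value[s], step_policy[s] = best
            value = new_value
            policy.append(step_policy)
        return value, list(reversed(policy))    # policy[k][state] = action at step k

    values, policy = optimal_policy(horizon=3)
    print(values)
    print(policy[0])
    ```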

  4. A parallel attractor-finding algorithm based on Boolean satisfiability for genetic regulatory networks.

    Directory of Open Access Journals (Sweden)

    Wensheng Guo

Full Text Available In biological systems, the dynamic analysis method has gained increasing attention in the past decade. The Boolean network is the most common model of a genetic regulatory network. The interactions of activation and inhibition in the genetic regulatory network are modeled as a set of functions of the Boolean network, while the state transitions in the Boolean network reflect the dynamic property of a genetic regulatory network. A difficult problem for state transition analysis is the finding of attractors. In this paper, we modeled the genetic regulatory network as a Boolean network and proposed a solving algorithm to tackle the attractor finding problem. In the proposed algorithm, we partitioned the Boolean network into several blocks consisting of the strongly connected components according to their gradients, and defined the connections between blocks as decision nodes. Based on the solutions calculated on the decision nodes and using a satisfiability solving algorithm, we identified the attractors in the state transition graph of each block. The proposed algorithm is benchmarked on a variety of genetic regulatory networks. Compared with existing algorithms, it achieved similar performance on small test cases, and outperformed them on larger and more complex ones, which matches the trend of modern genetic regulatory networks. Furthermore, while the existing satisfiability-based algorithms cannot be parallelized due to their inherent algorithm design, the proposed algorithm exhibits good scalability on parallel computing architectures.

  5. Boolean Models of Biological Processes Explain Cascade-Like Behavior.

    Science.gov (United States)

    Chen, Hao; Wang, Guanyu; Simha, Rahul; Du, Chenghang; Zeng, Chen

    2016-01-29

Biological networks play a key role in determining biological function and therefore, an understanding of their structure and dynamics is of central interest in systems biology. In Boolean models of such networks, the status of each molecule is either "on" or "off"; as the molecules interact with each other, their individual status changes from "on" to "off" or vice versa, and the system of molecules in the network collectively goes through a sequence of changes in state. This sequence of changes is termed a biological process. In this paper, we examine the common perception that events in biomolecular networks occur sequentially, in a cascade-like manner, and ask whether this is likely to be an inherent property. In further investigations of the budding and fission yeast cell-cycle, we identify two generic dynamical rules. A Boolean system that complies with these rules will automatically have a certain robustness. By considering the biological requirements in robustness and designability, we show that those Boolean dynamical systems, compared to an arbitrary dynamical system, statistically present the characteristics of cascadeness and sequentiality, as observed in the budding and fission yeast cell-cycle. These results suggest that cascade-like behavior might be an intrinsic property of biological processes.

  6. A complexity theory based on Boolean algebra

    DEFF Research Database (Denmark)

    Skyum, Sven; Valiant, Leslie

    1985-01-01

    A projection of a Boolean function is a function obtained by substituting for each of its variables a variable, the negation of a variable, or a constant. Reducibilities among computational problems under this relation of projection are considered. It is shown that much of what is of everyday rel...

  7. Improving the quantum cost of reversible Boolean functions using reorder algorithm

    Science.gov (United States)

    Ahmed, Taghreed; Younes, Ahmed; Elsayed, Ashraf

    2018-05-01

This paper introduces a novel algorithm to synthesize low-cost reversible circuits for any Boolean function with n inputs represented as a Positive Polarity Reed-Muller expansion. The proposed algorithm applies predefined rules to reorder the terms in the function, minimizing repeated calculation of common parts of the Boolean function and thereby decreasing the quantum cost of the reversible circuit. The paper achieves a decrease in the quantum cost and/or the circuit length, on average, when compared with relevant work in the literature.
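
    The starting representation, the Positive Polarity Reed-Muller (PPRM) expansion, is the algebraic normal form of the function and can be computed from a truth table with the standard binary Moebius (butterfly) transform, as sketched below. The reordering rules the paper applies on top of this expansion are not reproduced; the example function is arbitrary.

    ```python
    # Sketch: PPRM (algebraic normal form) coefficients from a truth table via
    # the binary Moebius/butterfly transform. Example function is arbitrary.
    def pprm_coefficients(truth_table):
        """truth_table[i] = f(x) where i encodes x bitwise; returns ANF coefficients."""
        coeffs = list(truth_table)
        n = len(coeffs).bit_length() - 1
        step = 1
        for _ in range(n):
            for i in range(len(coeffs)):
                if i & step:
                    coeffs[i] ^= coeffs[i ^ step]
            step <<= 1
        return coeffs

    def pprm_terms(coeffs, names=("x0", "x1", "x2")):
        terms = []
        for i, c in enumerate(coeffs):
            if c:
                terms.append("1" if i == 0 else
                             "*".join(n for b, n in enumerate(names) if i >> b & 1))
        return " XOR ".join(terms) if terms else "0"

    # f(x0, x1, x2) = x0 OR (x1 AND x2), truth table indexed by bits x2 x1 x0
    f = lambda i: (i & 1) | ((i >> 1 & 1) & (i >> 2 & 1))
    table = [f(i) for i in range(8)]
    print(pprm_terms(pprm_coefficients(table)))   # -> x0 XOR x1*x2 XOR x0*x1*x2
    ```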

  8. On the Boolean extension of the Birnbaum importance to non-coherent systems

    International Nuclear Information System (INIS)

    Aliee, Hananeh; Borgonovo, Emanuele; Glaß, Michael; Teich, Jürgen

    2017-01-01

The Birnbaum importance measure plays a central role in reliability analysis. It was initially introduced for coherent systems, where several of its properties hold and where its computation is straightforward. This work introduces a Boolean expression for the notion of criticality that allows the seamless extension of the Birnbaum importance to non-coherent systems. As a key feature, the novel definition makes the computation and encoding straightforward with well-established techniques such as Binary Decision Diagrams (BDDs) or Fault Trees (FTs). Several examples and a case study illustrate the findings. - Highlights: • We propose a Boolean expression for the notion of criticality in coherent and non-coherent systems. • The notion is connected with the Birnbaum importance measure. • The connection with the Andrews and Beeson extension is discussed. • The Boolean expression allows straightforward encoding in Binary Decision Diagrams.
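
    For the coherent case that the measure was originally defined for, the computation is indeed straightforward: I_B(i) = h(1_i, p) − h(0_i, p), where h is the system reliability function. The sketch below evaluates this by exact enumeration of a small series-parallel structure function; the paper's Boolean criticality expression for non-coherent systems is not reproduced here.

    ```python
    # Sketch: Birnbaum importance I_B(i) = h(1_i, p) - h(0_i, p) for a small
    # coherent system (component 0 in series with the parallel pair {1, 2}).
    from itertools import product

    def structure(x):
        return x[0] & (x[1] | x[2])

    def reliability(p, fixed=None):
        """h = E[structure(X)] for independent components; 'fixed' pins one component."""
        fixed = fixed or {}
        h = 0.0
        for x in product((0, 1), repeat=len(p)):
            if any(x[i] != v for i, v in fixed.items()):
                continue
            prob = 1.0
            for i, xi in enumerate(x):
                if i in fixed:
                    continue
                prob *= p[i] if xi else 1 - p[i]
            h += prob * structure(x)
        return h

    def birnbaum(p, i):
        return reliability(p, {i: 1}) - reliability(p, {i: 0})

    p = [0.9, 0.8, 0.7]
    for i in range(3):
        print(f"I_B({i}) = {birnbaum(p, i):.3f}")   # 0.940, 0.270, 0.180
    ```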

  9. Adaptive Rates of High-Spectral-Efficiency WDM/SDM Channels Using PDM-1024-QAM Probabilistic Shaping

    DEFF Research Database (Denmark)

    Hu, Hao; Yankov, Metodi Plamenov; Da Ros, Francesco

    2017-01-01

    We demonstrate adaptive rates and spectral efficiencies in WDM/SDM transmission using probabilistically shaped PDM-1024-QAM signals, achieving up to 7-Tbit/s data rates per spatial-superchannel and up to 297.8-bit/s/Hz aggregate spectral efficiency using a 30-core fiber on 12.5 and 25GHz WDM grids...

  10. Acoustic logic gates and Boolean operation based on self-collimating acoustic beams

    International Nuclear Information System (INIS)

    Zhang, Ting; Xu, Jian-yi; Cheng, Ying; Liu, Xiao-jun; Guo, Jian-zhong

    2015-01-01

The discovery of the self-collimation effect in two-dimensional (2D) photonic or acoustic crystals has opened up possibilities for signal manipulation. In this paper, we have proposed acoustic logic gates based on the linear interference of self-collimated beams in 2D sonic crystals (SCs) with line-defects. The line defects on the diagonal of the 2D square SCs function as a 3 dB splitter. By adjusting the phase difference between two input signals, the basic Boolean logic functions such as XOR, OR, AND, and NOT are achieved both theoretically and experimentally. Due to the non-diffracting property of self-collimation beams, more complex Boolean logic and algorithms such as NAND, NOR, and XNOR can be realized by cascading the basic logic gates. The achievement of acoustic logic gates and Boolean operation provides a promising approach for acoustic signal computing and manipulations.

  11. Probabilistic Graph Layout for Uncertain Network Visualization.

    Science.gov (United States)

    Schulz, Christoph; Nocaj, Arlind; Goertler, Jochen; Deussen, Oliver; Brandes, Ulrik; Weiskopf, Daniel

    2017-01-01

We present a novel uncertain network visualization technique based on node-link diagrams. Nodes expand spatially in our probabilistic graph layout, depending on the underlying probability distributions of edges. The visualization is created by computing a two-dimensional graph embedding that combines samples from the probabilistic graph. A Monte Carlo process is used to decompose a probabilistic graph into its possible instances and to continue with our graph layout technique. Splatting and edge bundling are used to visualize point clouds and network topology. The results provide insights into probability distributions for the entire network, not only for individual nodes and edges. We validate our approach using three data sets that represent a wide range of network types: synthetic data, protein-protein interactions from the STRING database, and travel times extracted from Google Maps. Our approach reveals general limitations of the force-directed layout and allows the user to recognize that some nodes of the graph are at a specific position just by chance.

  12. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    Science.gov (United States)

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task specific operations.

  13. Boolean network model for cancer pathways: predicting carcinogenesis and targeted therapy outcomes.

    Directory of Open Access Journals (Sweden)

    Herman F Fumiã

Full Text Available A Boolean dynamical system integrating the main signaling pathways involved in cancer is constructed based on the currently known protein-protein interaction network. This system exhibits stationary protein activation patterns (attractors) dependent on the cell's microenvironment. These dynamical attractors were determined through simulations and their stabilities against mutations were tested. At a higher hierarchical level, it was possible to group the network attractors into distinct cell phenotypes and determine driver mutations that promote phenotypic transitions. We find that driver nodes are not necessarily central in the network topology, but at least they are direct regulators of central components towards which converge or through which crosstalk distinct cancer signaling pathways. The predicted drivers are in agreement with those pointed out by diverse censuses of cancer genes recently performed for several human cancers. Furthermore, our results demonstrate that cell phenotypes can evolve towards full malignancy through distinct sequences of accumulated mutations. In particular, the network model supports routes of carcinogenesis known for some tumor types. Finally, the Boolean network model is employed to evaluate the outcome of molecularly targeted cancer therapies. The major finding is that monotherapies were additive in their effects and that the association of targeted drugs is necessary for cancer eradication.

  14. An Efficient Algorithm for Computing Attractors of Synchronous And Asynchronous Boolean Networks

    Science.gov (United States)

    Zheng, Desheng; Yang, Guowu; Li, Xiaoyu; Wang, Zhicai; Liu, Feng; He, Lei

    2013-01-01

Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists to understand the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods for computing attractors start with a randomly selected initial state and finish with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we build two algorithms to achieve the computation of attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods and reduced order binary decision diagrams (ROBDD), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are utilized in asynchronous Boolean translation functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster in computing attractors for empirical experimental systems. Availability: The software package is available at https://sites.google.com/site/desheng619/download. PMID:23585840
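
    What an attractor is in the synchronous case can be illustrated by brute force on a tiny network: follow the deterministic successor map from every state until a state repeats and collect the resulting cycles. This is the exhaustive baseline the ROBDD-based machinery of geneFAtt is designed to avoid; the three-node example network is arbitrary.

    ```python
    # Brute-force synchronous attractor detection: follow each trajectory until a
    # state repeats and record the cycle. Baseline only; the example is arbitrary.
    from itertools import product

    def successor(state):
        x0, x1, x2 = state
        return (x1 & x2, x0 | x2, 1 - x1)     # synchronous update of all nodes

    def attractors():
        found = set()
        for start in product((0, 1), repeat=3):
            seen, state = {}, start
            while state not in seen:
                seen[state] = len(seen)
                state = successor(state)
            cycle_start = seen[state]
            cycle = [s for s, idx in sorted(seen.items(), key=lambda kv: kv[1])
                     if idx >= cycle_start]
            k = cycle.index(min(cycle))           # canonical rotation for dedup
            found.add(tuple(cycle[k:] + cycle[:k]))
        return found

    for cycle in attractors():
        print(len(cycle), cycle)
    ```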

  15. On the Road to Genetic Boolean Matrix Factorization

    Czech Academy of Sciences Publication Activity Database

    Snášel, V.; Platoš, J.; Krömer, P.; Húsek, Dušan; Frolov, A.

    2007-01-01

    Roč. 17, č. 6 (2007), s. 675-688 ISSN 1210-0552 Institutional research plan: CEZ:AV0Z10300504 Keywords : data mining * genetic algorithms * Boolean factorization * binary data * machine learning * feature extraction Subject RIV: IN - Informatics, Computer Science Impact factor: 0.280, year: 2007

  16. Probabilistic site dependent design spectra for a NPP

    International Nuclear Information System (INIS)

    Chavez, M.; Arroyo, M.; Romo, M.P.

    1985-01-01

A methodology is proposed to compute the design spectra for a NPP site. Near field earthquakes are included by using an appropriately scaled sample of response spectra. Site effects are considered through a probabilistic site response analysis in the frequency domain which considers nonlinear behaviour of soils. The uncertainties of the soil shear modulus, G, are introduced by using Rosenblueth's point estimates. Strong motion duration is treated by using sensitivity analysis. The procedure is applied to a NPP site and the results are: a) the USNRC R.G. 1.60 underestimates the spectral amplitudes for frequencies of interest; b) the omission of the uncertainties in G leads to under- or over-estimation of the spectral amplitudes in certain frequency bands; c) the effect of considering the actual strong motion duration instead of an average value is to reduce the peak spectral amplitudes by ten per cent. (orig.)

  17. Comparison of Boolean analysis and standard phylogenetic methods using artificially evolved and natural mt-tRNA sequences from great apes.

    Science.gov (United States)

    Ari, Eszter; Ittzés, Péter; Podani, János; Thi, Quynh Chi Le; Jakó, Eena

    2012-04-01

    Boolean analysis (or BOOL-AN; Jakó et al., 2009. BOOL-AN: A method for comparative sequence analysis and phylogenetic reconstruction. Mol. Phylogenet. Evol. 52, 887-97.), a recently developed method for sequence comparison uses the Iterative Canonical Form of Boolean functions. It considers sequence information in a way entirely different from standard phylogenetic methods (i.e. Maximum Parsimony, Maximum-Likelihood, Neighbor-Joining, and Bayesian analysis). The performance and reliability of Boolean analysis were tested and compared with the standard phylogenetic methods, using artificially evolved - simulated - nucleotide sequences and the 22 mitochondrial tRNA genes of the great apes. At the outset, we assumed that the phylogeny of Hominidae is generally well established, and the guide tree of artificial sequence evolution can also be used as a benchmark. These offer a possibility to compare and test the performance of different phylogenetic methods. Trees were reconstructed by each method from 2500 simulated sequences and 22 mitochondrial tRNA sequences. We also introduced a special re-sampling method for Boolean analysis on permuted sequence sites, the P-BOOL-AN procedure. Considering the reliability values (branch support values of consensus trees and Robinson-Foulds distances) we used for simulated sequence trees produced by different phylogenetic methods, BOOL-AN appeared as the most reliable method. Although the mitochondrial tRNA sequences of great apes are relatively short (59-75 bases long) and the ratio of their constant characters is about 75%, BOOL-AN, P-BOOL-AN and the Bayesian approach produced the same tree-topology as the established phylogeny, while the outcomes of Maximum Parsimony, Maximum-Likelihood and Neighbor-Joining methods were equivocal. We conclude that Boolean analysis is a promising alternative to existing methods of sequence comparison for phylogenetic reconstruction and congruence analysis. Copyright © 2012 Elsevier Inc. All

  18. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    Directory of Open Access Journals (Sweden)

    Anxing Shan

    2017-05-01

Full Text Available Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage discusses the sensor placement to guarantee the needs of both coverage and connectivity. Existing works largely leverage the Boolean disk model, which is only a coarse approximation to the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.
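
    Under independence, the collaborative detection probability analyzed here takes the form P_det = 1 − ∏_i (1 − p_i), with each p_i given by some distance-dependent sensing model. The sketch below uses an exponential-decay model with illustrative parameters; the max-flow sensor-selection step of MVMFA is not reproduced.

    ```python
    # Sketch of collaborative detection probability under a probabilistic sensing
    # model: 1 - prod(1 - p_i). Sensing model and parameters are assumptions.
    import math

    def detection_prob(sensor, target, p_max=0.9, decay=0.5, r_max=8.0):
        """Single-sensor detection probability decaying with distance, 0 beyond r_max."""
        d = math.dist(sensor, target)
        return p_max * math.exp(-decay * d) if d <= r_max else 0.0

    def collaborative_prob(sensors, target):
        miss = 1.0
        for s in sensors:
            miss *= 1.0 - detection_prob(s, target)
        return 1.0 - miss

    sensors = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
    target = (1.0, 1.0)
    print(f"collaborative detection probability: {collaborative_prob(sensors, target):.3f}")
    ```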

  19. A transition calculus for Boolean functions. [logic circuit analysis

    Science.gov (United States)

    Tucker, J. H.; Bennett, A. W.

    1974-01-01

    A transition calculus is presented for analyzing the effect of input changes on the output of logic circuits. The method is closely related to the Boolean difference, but it is more powerful. Both differentiation and integration are considered.
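
    As background for what the transition calculus builds on, the Boolean difference of F with respect to an input x_i is F restricted to x_i=0 XOR F restricted to x_i=1; it equals 1 exactly when a change in x_i changes the output. The sketch below (a brute-force illustration, not the transition calculus itself) computes it from a truth table.

    ```python
    from itertools import product

    def boolean_difference(f, i, n):
        """Return the Boolean difference dF/dx_i as a dict mapping each
        assignment of the remaining variables to 0/1.  The difference is
        F(..., x_i=0, ...) XOR F(..., x_i=1, ...)."""
        diff = {}
        for bits in product((0, 1), repeat=n - 1):
            a0 = list(bits[:i]) + [0] + list(bits[i:])
            a1 = list(bits[:i]) + [1] + list(bits[i:])
            diff[bits] = f(*a0) ^ f(*a1)
        return diff

    if __name__ == "__main__":
        # F(a, b, c) = a AND (b OR c); the output is sensitive to a whenever b OR c = 1.
        F = lambda a, b, c: a & (b | c)
        print(boolean_difference(F, 0, 3))
    ```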

  20. Elements of Boolean-Valued Dempster-Shafer Theory

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    2000-01-01

    Roč. 10, č. 5 (2000), s. 825-835 ISSN 1210-0552. [SOFSEM 2000 Workshop on Soft Computing. Milovy, 27.11.2000-28.11.2000] R&D Projects: GA ČR GA201/00/1489 Institutional research plan: AV0Z1030915 Keywords : Boolean algebra * belief function * Dempster-Shafer theory * Dempster combination rule * nonspecifity degree Subject RIV: BA - General Mathematics

  1. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.

  2. Probabilistic reasoning with graphical security models

    NARCIS (Netherlands)

    Kordy, Barbara; Pouly, Marc; Schweitzer, Patrick

    This work provides a computational framework for meaningful probabilistic evaluation of attack–defense scenarios involving dependent actions. We combine the graphical security modeling technique of attack–defense trees with probabilistic information expressed in terms of Bayesian networks. In order

  3. Detecting small attractors of large Boolean networks by function-reduction-based strategy.

    Science.gov (United States)

    Zheng, Qiben; Shen, Liangzhong; Shang, Xuequn; Liu, Wenbin

    2016-04-01

    Boolean networks (BNs) are widely used to model gene regulatory networks and to design therapeutic intervention strategies to affect the long-term behaviour of systems. A central aim of Boolean-network analysis is to find attractors that correspond to various cellular states, such as cell types or the stage of cell differentiation. This problem is NP-hard, and various algorithms have been used to tackle it with considerable success. The idea behind the proposed function-reduction-based strategy is that a singleton attractor corresponds to n consistent subsequences in the truth table. To find these subsequences, the authors gradually reduce the entire truth table of Boolean functions by extending a partial gene activity profile (GAP). Not only does this process delete inconsistent subsequences in truth tables, it also directly determines values for some nodes not extended, which means it can abandon the partial GAPs that cannot lead to an attractor as early as possible. Simulation results show that the proposed algorithm can detect small attractors with length p = 4 in BNs of up to 200 nodes with average indegree K = 2.
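
    For contrast with the reduction-based strategy, a singleton attractor is simply a state x with f(x) = x, and for very small networks it can be found by exhaustive enumeration. The sketch below shows this brute-force baseline (the approach that reduction strategies are designed to avoid); the example network is hypothetical.

    ```python
    from itertools import product

    def singleton_attractors(update_funcs):
        """Return all fixed points x with f(x) = x of a synchronous Boolean
        network given as a list of per-node update functions, by brute force."""
        n = len(update_funcs)
        fixed = []
        for state in product((0, 1), repeat=n):
            if all(f(state) == state[i] for i, f in enumerate(update_funcs)):
                fixed.append(state)
        return fixed

    if __name__ == "__main__":
        # Toy 3-node network: x0' = x1 AND x2, x1' = x0, x2' = NOT x1.
        funcs = [
            lambda s: s[1] & s[2],
            lambda s: s[0],
            lambda s: 1 - s[1],
        ]
        print(singleton_attractors(funcs))   # -> [(0, 0, 1)]
    ```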

  4. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
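
    A generic one-sided variables plan of the form "accept the lot if xbar + k*s <= USL" can be checked empirically by simulation, in the spirit of the empirical testing described here. The plan parameters below are illustrative assumptions and are not the calculators or plans assessed in the report.

    ```python
    import random
    import statistics

    def acceptance_probability(n, k, usl, mu, sigma, trials=20000, seed=1):
        """Monte Carlo estimate of the probability that a one-sided variables
        sampling plan (sample size n, acceptability constant k) accepts a lot
        whose characteristic is Normal(mu, sigma)."""
        rng = random.Random(seed)
        accepted = 0
        for _ in range(trials):
            sample = [rng.gauss(mu, sigma) for _ in range(n)]
            xbar = statistics.mean(sample)
            s = statistics.stdev(sample)
            if xbar + k * s <= usl:
                accepted += 1
        return accepted / trials

    if __name__ == "__main__":
        # Acceptance probability for two different true lot means.
        print(acceptance_probability(n=30, k=1.7, usl=10.0, mu=7.0, sigma=1.0))
        print(acceptance_probability(n=30, k=1.7, usl=10.0, mu=8.8, sigma=1.0))
    ```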

  5. Free Boolean algebras over unions of two well orderings

    Czech Academy of Sciences Publication Activity Database

    Bonnet, R.; Faouzi, L.; Kubiś, Wieslaw

    2009-01-01

    Roč. 156, č. 7 (2009), s. 1177-1185 ISSN 0166-8641 Institutional research plan: CEZ:AV0Z10190503 Keywords : Well quasi orderings * Poset algebras * Superatomic Boolean algebras * Compact distributive lattices Subject RIV: BA - General Mathematics Impact factor: 0.441, year: 2009

  6. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    EI-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table modeling Method (BTTM) in the analysis of qualitative data. It is widely used in certain fields, especially electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system's failure rates for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied.
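
    The truth-table idea can be sketched for a generic series-parallel circuit: enumerate every operating/failed combination of the components and sum the probabilities of the combinations in which the system still works. The structure function and reliabilities below are illustrative, not the paper's power supply circuit.

    ```python
    from itertools import product

    def system_reliability(reliabilities, structure):
        """Enumerate all component up/down states (the Boolean truth table)
        and sum the probabilities of the states in which the system works."""
        total = 0.0
        for states in product((0, 1), repeat=len(reliabilities)):
            p = 1.0
            for r, up in zip(reliabilities, states):
                p *= r if up else (1.0 - r)
            if structure(states):
                total += p
        return total

    if __name__ == "__main__":
        # Illustrative structure: component 0 in series with the parallel pair (1, 2).
        works = lambda s: s[0] and (s[1] or s[2])
        print(system_reliability([0.95, 0.90, 0.85], works))
    ```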

  7. Modelling probabilistic fatigue crack propagation rates for a mild structural steel

    Directory of Open Access Journals (Sweden)

    J.A.F.O. Correia

    2015-01-01

    Full Text Available A class of fatigue crack growth models based on elastic-plastic stress-strain histories at the crack tip region and local strain-life damage models has been proposed in the literature. The fatigue crack growth is regarded as a process of continuous crack initializations over successive elementary material blocks, which may be governed by smooth strain-life damage data. Some approaches account for the residual stresses developing at the crack tip in the actual crack driving force assessment, allowing mean stress and loading sequence effects to be modelled. An extension of the fatigue crack propagation model originally proposed by Noroozi et al. (2005) to derive probabilistic fatigue crack propagation data is proposed, in particular concerning the derivation of probabilistic da/dN-ΔK-R fields. The elastic-plastic stresses at the vicinity of the crack tip, computed using simplified formulae, are compared with the stresses computed using elastic-plastic finite element analyses for the specimens considered in the experimental program proposed to derive the fatigue crack propagation data. Using probabilistic strain-life data available for the S355 structural mild steel, probabilistic crack propagation fields are generated for several stress ratios and compared with experimental fatigue crack propagation data. A satisfactory agreement between the predicted probabilistic fields and experimental data is observed.
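
    As a much-simplified stand-in for the Noroozi-based model described here, the sketch below generates a probabilistic scatter of fatigue lives from the classical Paris law with a random coefficient C; all parameter values are illustrative assumptions.

    ```python
    import math
    import random

    def paris_life_cycles(a0, a_crit, delta_sigma, C, m, Y=1.0, da=1e-5):
        """Integrate the Paris law da/dN = C * (deltaK)**m from an initial crack
        size a0 (m) to a critical size a_crit (m) and return the cycle count."""
        a, cycles = a0, 0.0
        while a < a_crit:
            delta_k = Y * delta_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
            cycles += da / (C * delta_k ** m)
            a += da
        return cycles

    if __name__ == "__main__":
        rng = random.Random(0)
        # Treat log10(C) as normally distributed to generate a scatter band of lives.
        lives = sorted(
            paris_life_cycles(1e-3, 2e-2, delta_sigma=100.0,
                              C=10 ** rng.gauss(-11.0, 0.15), m=3.0)
            for _ in range(200)
        )
        print("5% / 50% / 95% lives (cycles):", lives[9], lives[99], lives[189])
    ```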

  8. Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.

    Science.gov (United States)

    Haefner, Ralf M; Berkes, Pietro; Fiser, József

    2016-05-04

    We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMS with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push down mechanism, that produce equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique with a rather simple generic DAG-compression technique.

  10. Probabilistic confidence for decisions based on uncertain reliability estimates

    Science.gov (United States)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  11. A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva

    2018-03-01

    The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrence of large earthquakes is then followed by periods of quiescence in which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random occurrence of earthquakes in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to consider series of large earthquakes that occur in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and it can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard is estimated by a hazard function which comprises three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard associated with the next large-earthquake cluster, and a constant hazard for the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals has been calculated for illustrative purposes.

  12. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    International Nuclear Information System (INIS)

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; Mayo, Jackson R.; Armstrong, Robert C.

    2016-01-01

    Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof for this formula and empirically show that its predictions are accurate. Previous characterizations only work for the special case of balanced families; indeed, they have been observed to fail for unbalanced families, yet such families are widespread in real biological networks.
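
    Damage spreading of the kind this formula predicts can also be measured empirically: run two copies of the same random Boolean network from initial states that differ in one node and track their Hamming distance. The sketch below does exactly that; it is a simulation check, not the analytic characterization of the paper.

    ```python
    import random

    def random_boolean_network(n, k, rng):
        """Each node gets k random inputs and a random transfer function
        (a lookup table over the 2**k input combinations)."""
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        new = []
        for inp, table in zip(inputs, tables):
            idx = 0
            for j in inp:
                idx = (idx << 1) | state[j]
            new.append(table[idx])
        return new

    def damage_after(n=200, k=2, steps=20, trials=50, seed=0):
        """Average Hamming distance after `steps` synchronous updates between two
        copies of the same network whose initial states differ in one node."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            inputs, tables = random_boolean_network(n, k, rng)
            a = [rng.randint(0, 1) for _ in range(n)]
            b = list(a)
            b[0] ^= 1
            for _ in range(steps):
                a, b = step(a, inputs, tables), step(b, inputs, tables)
            total += sum(x != y for x, y in zip(a, b))
        return total / trials

    if __name__ == "__main__":
        print("mean damage, K=2:", damage_after(k=2))
        print("mean damage, K=3:", damage_after(k=3))
    ```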

  13. Boolean network representation of contagion dynamics during a financial crisis

    Science.gov (United States)

    Caetano, Marco Antonio Leonel; Yoneyama, Takashi

    2015-01-01

    This work presents a network model for representation of the evolution of certain patterns of economic behavior. More specifically, after representing the agents as points in a space in which each dimension is associated with a relevant economic variable, their relative "motions", which can be either stationary or discordant, are coded into a Boolean network. Patterns with stationary averages indicate the maintenance of the status quo, whereas discordant patterns represent the aggregation of a new agent into the cluster or departure from the former policies. The changing patterns can be embedded into a network representation, particularly using the concept of autocatalytic Boolean networks. As a case study, the economic tendencies of the BRIC countries + Argentina were studied. Although Argentina is not included in the cluster formed by the BRIC countries, it tends to follow the BRIC members because of strong commercial ties.

  14. Probabilistic logics and probabilistic networks

    CERN Document Server

    Haenni, Rolf; Wheeler, Gregory; Williamson, Jon; Andrews, Jill

    2014-01-01

    Probabilistic Logic and Probabilistic Networks presents a groundbreaking framework within which various approaches to probabilistic logic naturally fit. Additionally, the text shows how to develop computationally feasible methods to mesh with this framework.

  15. Finite size effects and symmetry breaking in the evolution of networks of competing Boolean nodes

    International Nuclear Information System (INIS)

    Liu, M; Bassler, K E

    2011-01-01

    Finite size effects on the evolutionary dynamics of Boolean networks are analyzed. In the model considered, Boolean networks evolve via a competition between nodes that punishes those in the majority. Previous studies have found that large networks evolve to a statistical steady state that is both critical and highly canalized, and that the evolution of canalization, which is a form of robustness found in genetic regulatory networks, is associated with a particular symmetry of the evolutionary dynamics. Here, it is found that finite size networks evolve in a fundamentally different way than infinitely large networks do. The symmetry of the evolutionary dynamics of infinitely large networks that selects for canalizing Boolean functions is broken in the evolutionary dynamics of finite size networks. In finite size networks, there is an additional selection for input-inverting Boolean functions that output a value opposite to the majority of input values. The reason for the symmetry breaking in the evolutionary dynamics is found to be due to the need for nodes in finite size networks to behave differently in order to cooperate so that the system collectively performs as efficiently as possible. The results suggest that both finite size effects and symmetry are fundamental for understanding the evolution of real-world complex networks, including genetic regulatory networks.

  16. Probabilistic Role Models and the Guarded Fragment

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2004-01-01

    We propose a uniform semantic framework for interpreting probabilistic concept subsumption and probabilistic role quantification through statistical sampling distributions. This general semantic principle serves as the foundation for the development of a probabilistic version of the guarded fragment of first-order logic. A characterization of equivalence in that logic in terms of bisimulations is given.

  17. Probabilistic role models and the guarded fragment

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    We propose a uniform semantic framework for interpreting probabilistic concept subsumption and probabilistic role quantification through statistical sampling distributions. This general semantic principle serves as the foundation for the development of a probabilistic version of the guarded fragment of first-order logic. A characterization of equivalence in that logic in terms of bisimulations is given.

  18. Exploring candidate biological functions by Boolean Function Networks for Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Maria Simak

    Full Text Available The great amount of gene expression data has brought a major challenge for the discovery of the Gene Regulatory Network (GRN). For network reconstruction and the investigation of regulatory relations, it is desirable to ensure the directness of links between genes on a map, infer their directionality and explore candidate biological functions from high-throughput transcriptomic data. To address these problems, we introduce a Boolean Function Network (BFN) model based on techniques of the hidden Markov model (HMM), likelihood ratio tests and Boolean logic functions. BFN consists of two consecutive tests to establish links between pairs of genes and check their directness. We evaluate the performance of BFN through application to S. cerevisiae time course data. BFN produces regulatory relations which show consistency with the succession of cell cycle phases. Furthermore, it also improves sensitivity and specificity when compared with alternative methods of genetic network reverse engineering. Moreover, we demonstrate that BFN can provide proper resolution for GO enrichment of gene sets. Finally, the Boolean functions discovered by BFN can provide useful insights for the identification of control mechanisms of regulatory processes, which is the special advantage of the proposed approach. In combination with low computational complexity, BFN can serve as an efficient screening tool to reconstruct gene relations at the whole genome level. In addition, the BFN approach is also applicable to a wide range of time course datasets.

  19. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of a system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve the problem. This method biases the transition rates of the components by adding virtual components to them in series, which increases the occurrence probability of the rare event and hence decreases the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
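
    The effect of biasing a transition rate can be illustrated on a single component: sample failure times from a larger (biased) rate and correct each sample with the likelihood ratio. This is a generic importance-sampling sketch, not the paper's virtual-component construction.

    ```python
    import math
    import random

    def rare_failure_prob(lam, t_mission, lam_biased, samples=100000, seed=0):
        """Estimate P(failure before t_mission) for an exponential failure time
        with true rate lam by sampling from a biased rate lam_biased and
        re-weighting each contributing sample with the likelihood ratio."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(samples):
            t = rng.expovariate(lam_biased)
            if t < t_mission:
                weight = (lam * math.exp(-lam * t)) / (lam_biased * math.exp(-lam_biased * t))
                total += weight
        return total / samples

    if __name__ == "__main__":
        lam, t = 1e-6, 100.0            # true failure rate per hour, mission time in hours
        exact = 1.0 - math.exp(-lam * t)
        print("exact     :", exact)
        print("biased MC :", rare_failure_prob(lam, t, lam_biased=1e-2))
    ```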

  20. Ordinary differential equations and Boolean networks in application to modelling of 6-mercaptopurine metabolism.

    Science.gov (United States)

    Lavrova, Anastasia I; Postnikov, Eugene B; Zyubin, Andrey Yu; Babak, Svetlana V

    2017-04-01

    We consider two approaches to modelling the cell metabolism of 6-mercaptopurine, one of the important chemotherapy drugs used for treating acute lymphocytic leukaemia: kinetic ordinary differential equations, and Boolean networks supplied with one controlling node, which takes continual values. We analyse their interplay with respect to taking into account ATP concentration as a key parameter of switching between different pathways. It is shown that the Boolean networks, which allow avoiding the complexity of general kinetic modelling, preserve the possibility of reproducing the principal switching mechanism.

  1. Boolean models of biosurfactants production in Pseudomonas fluorescens.

    Directory of Open Access Journals (Sweden)

    Adrien Richard

    Full Text Available Cyclolipopeptides (CLPs) are biosurfactants produced by numerous Pseudomonas fluorescens strains. CLP production is known to be regulated at least by the GacA/GacS two-component pathway, but the full regulatory network is yet largely unknown. In the clinical strain MFN1032, CLP production is abolished by a mutation in the phospholipase C gene (plcC) and not restored by plcC complementation. Their production is also subject to phenotypic variation. We used a modelling approach with Boolean networks, which takes into account all these observations concerning CLP production without any assumption on the topology of the considered network. Intensive computation yielded numerous models that satisfy these properties. All models minimizing the number of components point to a bistability in CLP production, which requires the presence of a yet unknown key self-inducible regulator. Furthermore, all suggest that a set of yet unexplained phenotypic variants might also be due to this epigenetic switch. The simplest of these Boolean networks was used to propose a biological regulatory network for CLP production. This modelling approach has allowed a possible regulation to be unravelled and an unusual behaviour of CLP production in P. fluorescens to be explained.

  2. Probabilistic Structural Analysis Program

    Science.gov (United States)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  3. Identification of control targets in Boolean molecular network models via computational algebra.

    Science.gov (United States)

    Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard

    2016-09-23

    Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomial equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well-studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data is available online and our code in Macaulay2 and Matlab is available via http://www.ms.uky.edu/~dmu228/ControlAlg . This paper presents a novel method for the identification of intervention targets in Boolean network models. The results in this paper show that the proposed methods are useful and efficient for moderately large networks.
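
    For very small networks, the control-target idea can be illustrated by brute force: pin one node to a constant and check whether the pinned dynamics settle in the desired state from every initial condition. The sketch below is this exhaustive baseline on a hypothetical toy network, not the computational-algebra method of the paper.

    ```python
    from itertools import product

    def find_control_targets(update_funcs, desired, max_steps=50):
        """For every (node, value) pin, check by exhaustive simulation whether the
        pinned synchronous network has settled in the desired state after
        max_steps updates from every initial state.  Returns the single-node
        controls that work (so `desired` must be a stable fixed point under the pin)."""
        n = len(update_funcs)
        controls = []
        for node, value in product(range(n), (0, 1)):
            def step(state):
                new = tuple(f(state) for f in update_funcs)
                return new[:node] + (value,) + new[node + 1:]
            ok = True
            for init in product((0, 1), repeat=n):
                state = init[:node] + (value,) + init[node + 1:]
                for _ in range(max_steps):
                    state = step(state)
                if state != desired:
                    ok = False
                    break
            if ok:
                controls.append((node, value))
        return controls

    if __name__ == "__main__":
        # Toy network: x0' = x1, x1' = x0 OR x2, x2' = x2.  Desired state: all ones.
        funcs = [lambda s: s[1], lambda s: s[0] | s[2], lambda s: s[2]]
        print(find_control_targets(funcs, desired=(1, 1, 1)))   # -> [(2, 1)]
    ```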

  4. Prediction of the time-dependent failure rate for normally operating components taking into account the operational history

    International Nuclear Information System (INIS)

    Vrbanic, I.; Simic, Z.; Sljivac, D.

    2008-01-01

    The prediction of the time-dependent failure rate has been studied, taking into account the operational history of a component used in applications such as system modeling in a probabilistic safety analysis in order to evaluate the impact of equipment aging and maintenance strategies on the risk measures considered. We have selected a time-dependent model for the failure rate which is based on the Weibull distribution and the principles of proportional age reduction by equipment overhauls. Estimation of the parameters that determine the failure rate is considered, including the definition of the operational history model and likelihood function for the Bayesian analysis of parameters for normally operating repairable components. The operational history is provided as a time axis with defined times of overhauls and failures. An example for demonstration is described with prediction of the future behavior for seven different operational histories. (orig.)
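
    A minimal sketch of a Weibull failure rate with proportional age reduction is given below: each overhaul removes a fraction of the accumulated (virtual) age, and the hazard is evaluated at the virtual age. The parameter values are illustrative assumptions, not estimates from any operational history.

    ```python
    def virtual_age(t, overhaul_times, eps):
        """Proportional age reduction: each overhaul removes a fraction eps of the
        component's virtual age accumulated up to that overhaul."""
        v, last = 0.0, 0.0
        for tau in sorted(overhaul_times):
            if tau >= t:
                break
            v = (1.0 - eps) * (v + (tau - last))
            last = tau
        return v + (t - last)

    def weibull_failure_rate(t, beta, eta, overhaul_times=(), eps=0.0):
        """Weibull hazard evaluated at the component's virtual age:
        h(t) = (beta/eta) * (v(t)/eta)**(beta - 1)."""
        v = virtual_age(t, overhaul_times, eps)
        return (beta / eta) * (v / eta) ** (beta - 1.0)

    if __name__ == "__main__":
        # Ageing component (beta > 1) overhauled every 2 years; each overhaul
        # removes 60% of the accumulated age (illustrative values).
        for t in (1.0, 3.0, 5.0, 7.0):
            print(t, weibull_failure_rate(t, beta=2.5, eta=10.0,
                                          overhaul_times=(2.0, 4.0, 6.0), eps=0.6))
    ```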

  5. Boolean and advanced searching for EDGAR data on www.sec.gov

    Data.gov (United States)

    Securities and Exchange Commission — This search allows users to enter complex boolean queries to access all but the most recent day's EDGAR filings on www.sec.gov. Filings are from 1994 to present.

  6. Sensitivity analysis of efficient solution in vector MINMAX boolean programming problem

    Directory of Open Access Journals (Sweden)

    Vladimir A. Emelichev

    2002-11-01

    Full Text Available We consider a multiple criterion Boolean programming problem with MINMAX partial criteria. The extreme level of independent perturbations of the partial criteria parameters such that an efficient (Pareto optimal) solution preserves its optimality was obtained.

  7. The value of less connected agents in Boolean networks

    Science.gov (United States)

    Epstein, Daniel; Bazzan, Ana L. C.

    2013-11-01

    In multiagent systems, agents often face binary decisions where one seeks to take either the minority or the majority side. Examples are minority and congestion games in general, i.e., situations that require coordination among the agents in order to depict efficient decisions. In minority games such as the El Farol Bar Problem, previous works have shown that agents may reach appropriate levels of coordination, mostly by looking at the history of past decisions. Not many works consider any kind of structure of the social network, i.e., how agents are connected. Moreover, when structure is indeed considered, it assumes some kind of random network with a given, fixed connectivity degree. The present paper departs from the conventional approach in some ways. First, it considers more realistic network topologies, based on preferential attachments. This is especially useful in social networks. Second, the formalism of random Boolean networks is used to help agents to make decisions given their attachments (for example acquaintances). This is coupled with a reinforcement learning mechanism that allows agents to select strategies that are locally and globally efficient. Third, we use agent-based modeling and simulation, a microscopic approach, which allows us to draw conclusions about individuals and/or classes of individuals. Finally, for the sake of illustration we use two different scenarios, namely the El Farol Bar Problem and a binary route choice scenario. With this approach we target systems that adapt dynamically to changes in the environment, including other adaptive decision-makers. Our results using preferential attachments and random Boolean networks are threefold. First, we show that an efficient equilibrium can be achieved, provided agents do experimentation. Second, microscopic analysis shows that influential agents tend to consider few inputs in their Boolean functions. Third, we have also conducted measurements related to network clustering and centrality

  8. Confluence of an extension of combinatory logic by Boolean constants

    DEFF Research Database (Denmark)

    Czajka, Łukasz

    2017-01-01

    We show confluence of a conditional term rewriting system CL-pc1, which is an extension of Combinatory Logic by Boolean constants. This solves problem 15 from the RTA list of open problems. The proof has been fully formalized in the Coq proof assistant....

  9. A Construction of Boolean Functions with Good Cryptographic Properties

    Science.gov (United States)

    2014-01-01

  10. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  11. A physical probabilistic model to predict failure rates in buried PVC pipelines

    International Nuclear Information System (INIS)

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical

  12. Probability versus representativeness in infancy: can infants use naïve physics to adjust population base rates in probabilistic inference?

    Science.gov (United States)

    Denison, Stephanie; Trikutam, Pallavi; Xu, Fei

    2014-08-01

    A rich tradition in developmental psychology explores physical reasoning in infancy. However, no research to date has investigated whether infants can reason about physical objects that behave probabilistically rather than deterministically. Physical events are often quite variable, in that similar-looking objects can be placed in similar contexts with different outcomes. Can infants rapidly acquire probabilistic physical knowledge, such as that some leaves fall and some glasses break, simply by observing the statistical regularity with which objects behave, and apply that knowledge in subsequent reasoning? We taught 11-month-old infants physical constraints on objects and asked them to reason about the probability of different outcomes when objects were drawn from a large distribution. Infants could have reasoned either by using the perceptual similarity between the samples and the larger distributions or by applying physical rules to adjust base rates and estimate the probabilities. Infants learned the physical constraints quickly and used them to estimate probabilities, rather than relying on similarity, a version of the representativeness heuristic. These results indicate that infants can rapidly and flexibly acquire physical knowledge about objects following very brief exposure and apply it in subsequent reasoning. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Boolean analysis reveals systematic interactions among low-abundance species in the human gut microbiome.

    Directory of Open Access Journals (Sweden)

    Jens Christian Claussen

    2017-06-01

    Full Text Available The analysis of microbiome compositions in the human gut has gained increasing interest due to the broader availability of data and functional databases and substantial progress in data analysis methods, but also due to the high relevance of the microbiome in human health and disease. While most analyses infer interactions among highly abundant species, the large number of low-abundance species has received less attention. Here we present a novel analysis method based on Boolean operations applied to microbial co-occurrence patterns. We calibrate our approach with simulated data based on a dynamical Boolean network model from which we interpret the statistics of attractor states as a theoretical proxy for microbiome composition. We show that for given fractions of synergistic and competitive interactions in the model our Boolean abundance analysis can reliably detect these interactions. Analyzing a novel data set of 822 microbiome compositions of the human gut, we find a large number of highly significant synergistic interactions among these low-abundance species, forming a connected network, and a few isolated competitive interactions.
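
    A minimal version of co-occurrence screening on presence/absence (Boolean) data is sketched below using Fisher's exact test on the 2x2 contingency table of two species. This is a simplified illustration, not the specific Boolean-operation statistics or the attractor-based calibration procedure of the paper.

    ```python
    import numpy as np
    from scipy.stats import fisher_exact

    def cooccurrence_test(presence_a, presence_b):
        """Test whether two species' presence/absence patterns across samples
        co-occur more (synergy) or less (competition) often than expected.
        Returns the result of Fisher's exact test on the 2x2 Boolean table."""
        a = np.asarray(presence_a, dtype=bool)
        b = np.asarray(presence_b, dtype=bool)
        table = [
            [np.sum(a & b), np.sum(a & ~b)],
            [np.sum(~a & b), np.sum(~a & ~b)],
        ]
        return fisher_exact(table)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic data: species B tends to be present whenever species A is.
        a = rng.random(822) < 0.2
        b = np.where(a, rng.random(822) < 0.7, rng.random(822) < 0.1)
        print(cooccurrence_test(a, b))
    ```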

  14. Students’ difficulties in probabilistic problem-solving

    Science.gov (United States)

    Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.

    2018-03-01

    Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems. It focuses on analyzing and describing students' errors while solving such problems. This research used a qualitative method with a case study strategy. The subjects involve ten 9th-grade students who were selected by purposive sampling. Data in this research comprise the students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems. These data were analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems can be divided into three categories. The first relates to difficulties in understanding the probabilistic problem. The second concerns difficulties in choosing and using appropriate strategies for solving the problem. The third involves difficulties with the computational process in solving the problem. Based on these results, it seems that students still have difficulties in solving probabilistic problems, which means they are not yet able to apply their knowledge and abilities to such problems. Therefore, it is important for mathematics teachers to plan probabilistic learning which can optimize students' probabilistic thinking ability.

  15. CIRCUIT IMPLEMENTATION OF VHDL-DESCRIPTIONS OF SYSTEMS OF PARTIAL BOOLEAN FUNCTIONS

    Directory of Open Access Journals (Sweden)

    P. N. Bibilo

    2016-01-01

    Full Text Available A method for the description of incompletely specified (partial) Boolean functions in VHDL is proposed. Examples of synthesized VHDL models of partial Boolean functions are presented, together with the results of experiments on circuit implementation of VHDL descriptions of systems of partial functions. The realizability of the original partial functions in the logical circuits was verified by formal verification. The results of the experiments show that, unlike the significant efficiency of these procedures for the synthesis of benchmark circuits taken from design practice, preliminary minimization in the DNF class and in the class of BDD representations for pseudo-random systems of completely specified functions does not practically improve (and in the case of BDD sometimes worsens) the results of the subsequent synthesis in the FPGA basis.

  16. Extreme-value dependence: An application to exchange rate markets

    Science.gov (United States)

    Fernandez, Viviana

    2007-04-01

    Extreme value theory (EVT) focuses on modeling the tail behavior of a loss distribution using only extreme values rather than the whole data set. For a sample of 10 countries with dirty/free float regimes, we investigate whether paired currencies exhibit a pattern of asymptotic dependence. That is, whether an extremely large appreciation or depreciation in the nominal exchange rate of one country might transmit to another. In general, after controlling for volatility clustering and inertia in returns, we do not find evidence of extreme-value dependence between paired exchange rates. However, for asymptotic-independent paired returns, we find that tail dependency of exchange rates is stronger under large appreciations than under large depreciations.

  17. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin or it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In the past research works using this approach, the control template parameters and thresholds are restricted to assume only a given finite set of integers, and this is certainly unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing the soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.

  18. Cooperation in an evolutionary prisoner’s dilemma game with probabilistic strategies

    International Nuclear Information System (INIS)

    Li Haihong; Dai Qionglin; Cheng Hongyan; Yang Junzhong

    2012-01-01

    Highlights: We introduce probabilistic strategies instead of the pure C/D strategies in the PDG. The strategy patterns depend on the interaction structures and updating rules. There exists an optimal increment of the probabilistic strategy. - Abstract: In this work, we investigate an evolutionary prisoner's dilemma game in structured populations with probabilistic strategies instead of the pure strategies of cooperation and defection. We explore the model in detail by considering different strategy update rules and different population structures. We find that the distribution of probabilistic strategy patterns depends on both the interaction structures and the updating rules. We also find that, when an individual updates her strategy by increasing or decreasing her probabilistic strategy by a certain amount towards that of her opponent, there exists an optimal increment of the probabilistic strategy at which the cooperator frequency reaches its maximum.

  19. The Boolean algebra and central Galois algebras

    Directory of Open Access Journals (Sweden)

    George Szeto

    2001-01-01

    Full Text Available Let B be a Galois algebra with Galois group G, Jg = {b∈B ∣ bx = g(x)b for all x∈B} for g∈G, and BJg = Beg for a central idempotent eg. Then a relation is given between the set of elements in the Boolean algebra (Ba, ≤) generated by {0, eg ∣ g∈G} and a set of subgroups of G, and a central Galois algebra Be with a Galois subgroup of G is characterized for an e∈Ba.

  20. Analysis and control of Boolean networks a semi-tensor product approach

    CERN Document Server

    Cheng, Daizhan; Li, Zhiqiang

    2010-01-01

    This book presents a new approach to the investigation of Boolean control networks, using the semi-tensor product (STP), which can express a logical function as a conventional discrete-time linear system. This makes it possible to analyze basic control problems.

  1. Multiobjective optimal allocation problem with probabilistic non ...

    African Journals Online (AJOL)

    This paper considers the optimum compromise allocation in multivariate stratified sampling with a non-linear objective function and a probabilistic non-linear cost constraint. The probabilistic non-linear cost constraint is converted into an equivalent deterministic one by using chance-constrained programming. A numerical ...

  2. A GA-P algorithm to automatically formulate extended Boolean queries for a fuzzy information retrieval system

    OpenAIRE

    Cordón García, Oscar; Moya Anegón, Félix de; Zarco Fernández, Carmen

    2000-01-01

    [ES] Although the fuzzy retrieval model constitutes a powerful extension of the Boolean one, being able to deal with the imprecision and subjectivity existing in the Information Retrieval process, users are not usually able to express their query requirements in the form of an extended Boolean query including weights. To solve this problem, different tools to assist the user in the query formulation have been proposed. In this paper, the genetic algorithm-programming technique is considered t...

  3. Boolean Operations with Prism Algebraic Patches

    Science.gov (United States)

    Bajaj, Chandrajit; Paoluzzi, Alberto; Portuesi, Simone; Lei, Na; Zhao, Wenqi

    2009-01-01

    In this paper we discuss a symbolic-numeric algorithm for Boolean operations, closed in the algebra of curved polyhedra whose boundary is triangulated with algebraic patches (A-patches). This approach uses a linear polyhedron as a first approximation of both the arguments and the result. On each triangle of a boundary representation of such linear approximation, a piecewise cubic algebraic interpolant is built, using a C1-continuous prism algebraic patch (prism A-patch) that interpolates the three triangle vertices, with given normal vectors. The boundary representation only stores the vertices of the initial triangulation and their external vertex normals. In order to represent also flat and/or sharp local features, the corresponding normal-per-face and/or normal-per-edge may be also given, respectively. The topology is described by storing, for each curved triangle, the two triples of pointers to incident vertices and to adjacent triangles. For each triangle, a scaffolding prism is built, produced by its extreme vertices and normals, which provides a containment volume for the curved interpolating A-patch. When looking for the result of a regularized Boolean operation, the 0-set of a tri-variate polynomial within each such prism is generated, and intersected with the analogous 0-sets of the other curved polyhedron, when two prisms have non-empty intersection. The intersection curves of the boundaries are traced and used to decompose each boundary into the 3 standard classes of subpatches, denoted in, out and on. While tracing the intersection curves, the locally refined triangulation of intersecting patches is produced, and added to the boundary representation. PMID:21516262

  4. PARAMETER ESTIMATION IN NON-HOMOGENEOUS BOOLEAN MODELS: AN APPLICATION TO PLANT DEFENSE RESPONSE

    Directory of Open Access Journals (Sweden)

    Maria Angeles Gallego

    2014-11-01

    Full Text Available Many medical and biological problems require extracting information from microscopical images. Boolean models have been extensively used to analyze binary images of random clumps in many scientific fields. In this paper, a particular type of Boolean model with an underlying non-stationary point process is considered. The intensity of the underlying point process is formulated as a fixed function of the distance to a region of interest. A method to estimate the parameters of this Boolean model is introduced, and its performance is checked in two different settings. Firstly, a comparative study with other existing methods is done using simulated data. Secondly, the method is applied to analyze the longleaf data set, which is a very popular data set in the context of point processes included in the R package spatstat. The obtained results show that the new method provides estimates as accurate as those obtained with more complex methods developed for the general case. Finally, to illustrate the application of this model and this method, a particular type of phytopathological images is analyzed. These images show callose depositions in leaves of Arabidopsis plants. The analysis of callose depositions is very popular in the phytopathological literature to quantify the activity of plant immunity.

  5. Vector Boolean Functions: applications in symmetric cryptography

    OpenAIRE

    Álvarez Cubero, José Antonio

    2015-01-01

    This thesis establishes the theoretical foundations and designs an open collection of C++ classes called VBF (Vector Boolean Functions) for analyzing vector Boolean functions (functions that map one Boolean vector to another Boolean vector) from a cryptographic perspective. This new implementation uses Victor Shoup's NTL library, incorporating new modules that complement NTL's functions and adapt them for cryptographic analysis. The fundamental class representing...

  6. Adapted Boolean network models for extracellular matrix formation

    Directory of Open Access Journals (Sweden)

    Wollbold Johannes

    2009-07-01

    Full Text Available Abstract Background Due to the rapid data accumulation on pathogenesis and progression of chronic inflammation, there is an increasing demand for approaches to analyse the underlying regulatory networks. For example, rheumatoid arthritis (RA is a chronic inflammatory disease, characterised by joint destruction and perpetuated by activated synovial fibroblasts (SFB. These abnormally express and/or secrete pro-inflammatory cytokines, collagens causing joint fibrosis, or tissue-degrading enzymes resulting in destruction of the extra-cellular matrix (ECM. We applied three methods to analyse ECM regulation: data discretisation to filter out noise and to reduce complexity, Boolean network construction to implement logic relationships, and formal concept analysis (FCA for the formation of minimal, but complete rule sets from the data. Results First, we extracted literature information to develop an interaction network containing 18 genes representing ECM formation and destruction. Subsequently, we constructed an asynchronous Boolean network with biologically plausible time intervals for mRNA and protein production, secretion, and inactivation. Experimental gene expression data was obtained from SFB stimulated by TGFβ1 or by TNFα and discretised thereafter. The Boolean functions of the initial network were improved iteratively by the comparison of the simulation runs to the experimental data and by exploitation of expert knowledge. This resulted in adapted networks for both cytokine stimulation conditions. The simulations were further analysed by the attribute exploration algorithm of FCA, integrating the observed time series in a fine-tuned and automated manner. The resulting temporal rules yielded new contributions to controversially discussed aspects of fibroblast biology (e.g., considerable expression of TNF and MMP9 by fibroblasts stimulation and corroborated previously known facts (e.g., co-expression of collagens and MMPs after TNF
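
    A generic asynchronous Boolean network update (one randomly chosen node per step) can be sketched as follows; the toy three-gene motif is hypothetical, and the sketch does not include the calibrated mRNA/protein time intervals used in the paper.

    ```python
    import random

    def asynchronous_run(update_funcs, init_state, steps, seed=0):
        """Simulate an asynchronous Boolean network: at each step a single,
        randomly chosen node is updated while all others keep their value.
        Returns the visited trajectory as a list of states."""
        rng = random.Random(seed)
        state = list(init_state)
        trajectory = [tuple(state)]
        for _ in range(steps):
            i = rng.randrange(len(state))
            state[i] = update_funcs[i](state)
            trajectory.append(tuple(state))
        return trajectory

    if __name__ == "__main__":
        # Toy 3-gene motif: g2 represses g0, g0 activates g1, g1 activates g2.
        funcs = [
            lambda s: 1 - s[2],
            lambda s: s[0],
            lambda s: s[1],
        ]
        for state in asynchronous_run(funcs, (1, 0, 0), steps=10):
            print(state)
    ```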

  7. A framework for the probabilistic analysis of meteotsunamis

    Science.gov (United States)

    Geist, Eric L.; ten Brink, Uri S.; Gove, Matthew D.

    2014-01-01

    A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling is performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using multiple synthetic catalogs, resampled from the parent parameter distributions, yield mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
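
    The Monte Carlo aggregation step can be sketched as follows: sample a synthetic catalog of events, evaluate an amplitude for each, and convert exceedance fractions into annual rates. The amplitude model and the annual rate below are placeholders; in the study itself the amplitudes come from hydrodynamic simulations of sampled squall-line pressure disturbances.

    ```python
    import random

    def hazard_curve(annual_rate, amplitude_model, thresholds, n_events=50000, seed=0):
        """Build a hazard curve (annual rate of exceedance vs. amplitude) by
        sampling a synthetic catalog of events and counting exceedances."""
        rng = random.Random(seed)
        amplitudes = [amplitude_model(rng) for _ in range(n_events)]
        curve = []
        for a in thresholds:
            frac = sum(x >= a for x in amplitudes) / n_events
            curve.append((a, annual_rate * frac))
        return curve

    def toy_amplitude(rng):
        """Placeholder amplitude model (lognormal, metres); a real study would run
        a hydrodynamic model for each sampled atmospheric disturbance."""
        return rng.lognormvariate(-1.5, 0.8)

    if __name__ == "__main__":
        # Assume roughly 6 qualifying pressure disturbances per year (illustrative).
        for amp, rate in hazard_curve(6.0, toy_amplitude, [0.1, 0.2, 0.5, 1.0]):
            print(f"amplitude >= {amp:.1f} m : {rate:.3f} per year")
    ```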

  8. Boolean Functions with a Simple Certificate for CNF Complexity

    Czech Academy of Sciences Publication Activity Database

    Čepek, O.; Kučera, P.; Savický, Petr

    2012-01-01

    Roč. 160, 4-5 (2012), s. 365-382 ISSN 0166-218X R&D Projects: GA MŠk(CZ) 1M0545 Grant - others:GA ČR(CZ) GP201/07/P168; GA ČR(CZ) GAP202/10/1188 Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean functions * CNF representations Subject RIV: BA - General Mathematics Impact factor: 0.718, year: 2012

  9. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.

  10. A new separation algorithm for the Boolean quadric and cut polytopes

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros; Letchford, Adam N.

    2014-01-01

    A separation algorithm is a procedure for generating cutting planes. Up to now, only a few polynomial-time separation algorithms were known for the Boolean quadric and cut polytopes. These polytopes arise in connection with zero–one quadratic programming and the max-cut problem, respectively. We...

  11. Emigration Rates From Sample Surveys: An Application to Senegal.

    Science.gov (United States)

    Willekens, Frans; Zinn, Sabine; Leuchter, Matthias

    2017-12-01

    What is the emigration rate of a country, and how reliable is that figure? Answering these questions is not at all straightforward. Most data on international migration are census data on foreign-born population. These migrant stock data describe the immigrant population in destination countries but offer limited information on the rate at which people leave their country of origin. The emigration rate depends on the number leaving in a given period and the population at risk of leaving, weighted by the duration at risk. Emigration surveys provide a useful data source for estimating emigration rates, provided that the estimation method accounts for sample design. In this study, emigration rates and confidence intervals are estimated from a sample survey of households in the Dakar region in Senegal, which was part of the Migration between Africa and Europe survey. The sample was a stratified two-stage sample with oversampling of households with members abroad or return migrants. A combination of methods of survival analysis (time-to-event data) and replication variance estimation (bootstrapping) yields emigration rates and design-consistent confidence intervals that are representative for the study population.
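
    A simplified, unweighted version of the rate estimate is sketched below: an emigration rate as events per person-year with a percentile-bootstrap confidence interval. A real analysis of the survey data would additionally account for the sampling weights and the two-stage stratified design.

    ```python
    import random

    def emigration_rate(events, person_years):
        """Crude emigration rate: emigration events divided by total exposure."""
        return sum(events) / sum(person_years)

    def bootstrap_ci(events, person_years, reps=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval obtained by resampling
        individuals (event indicator together with their exposure time)."""
        rng = random.Random(seed)
        n = len(events)
        rates = []
        for _ in range(reps):
            idx = [rng.randrange(n) for _ in range(n)]
            rates.append(emigration_rate([events[i] for i in idx],
                                         [person_years[i] for i in idx]))
        rates.sort()
        lo = rates[int(alpha / 2 * reps)]
        hi = rates[int((1 - alpha / 2) * reps) - 1]
        return lo, hi

    if __name__ == "__main__":
        rng = random.Random(1)
        # Synthetic survey: 500 individuals observed for up to 10 years each.
        person_years = [rng.uniform(1, 10) for _ in range(500)]
        events = [1 if rng.random() < 0.02 * t else 0 for t in person_years]
        print("rate per person-year:", emigration_rate(events, person_years),
              "95% CI:", bootstrap_ci(events, person_years))
    ```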

  12. Probabilistic pipe fracture evaluations for leak-rate-detection applications

    International Nuclear Information System (INIS)

    Rahman, S.; Ghadiali, N.; Paul, D.; Wilkowski, G.

    1995-04-01

    Regulatory Guide 1.45, "Reactor Coolant Pressure Boundary Leakage Detection Systems," was published by the U.S. Nuclear Regulatory Commission (NRC) in May 1973, and provides guidance on leak detection methods and system requirements for Light Water Reactors. Additionally, leak detection limits are specified in plant Technical Specifications and are different for Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). These leak detection limits are also used in leak-before-break evaluations performed in accordance with Draft Standard Review Plan, Section 3.6.3, "Leak Before Break Evaluation Procedures," where a margin of 10 on the leak detection limit is used in determining the crack size considered in subsequent fracture analyses. This study was requested by the NRC to: (1) evaluate the conditional failure probability for BWR and PWR piping for pipes that were leaking at the allowable leak detection limit, and (2) evaluate the margin of 10 to determine if it was unnecessarily large. A probabilistic approach was undertaken to conduct fracture evaluations of circumferentially cracked pipes for leak-rate-detection applications. Sixteen nuclear piping systems in BWR and PWR plants were analyzed to evaluate conditional failure probability and effects of crack-morphology variability on the current margins used in leak rate detection for leak-before-break

  13. Two Expectation-Maximization Algorithms for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2014-01-01

    Roč. 130, 23 April (2014), s. 83-97 ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014

  14. Multipath Detection Using Boolean Satisfiability Techniques

    Directory of Open Access Journals (Sweden)

    Fadi A. Aloul

    2011-01-01

    A new technique for multipath detection in wideband mobile radio systems is presented. The proposed scheme is based on an intelligent search algorithm using Boolean Satisfiability (SAT) techniques to search through the uncertainty region of the multipath delays. The SAT-based scheme utilizes the known structure of the transmitted wideband signal, for example, pseudo-random (PN) code, to effectively search through the entire space by eliminating subspaces that do not contain a possible solution. The paper presents a framework for modeling the multipath detection problem as a SAT application. It also provides simulation results that demonstrate the effectiveness of the proposed scheme in detecting the multipath components in frequency-selective Rayleigh fading channels.

  15. The Boolean algebra of Galois algebras

    Directory of Open Access Journals (Sweden)

    Lianyong Xue

    2003-02-01

    Let B be a Galois algebra with Galois group G, Jg={b∈B|bx=g(x)b for all x∈B} for each g∈G, and BJg=Beg for a central idempotent eg, Ba the Boolean algebra generated by {0,eg|g∈G}, e a nonzero element in Ba, and He={g∈G|eeg=e}. Then, a monomial e is characterized, and the Galois extension Be, generated by e with Galois group He, is investigated.

  16. Describing the What and Why of Students' Difficulties in Boolean Logic

    Science.gov (United States)

    Herman, Geoffrey L.; Loui, Michael C.; Kaczmarczyk, Lisa; Zilles, Craig

    2012-01-01

    The ability to reason with formal logic is a foundational skill for computer scientists and computer engineers that scaffolds the abilities to design, debug, and optimize. By interviewing students about their understanding of propositional logic and their ability to translate from English specifications to Boolean expressions, we characterized…

  17. Application of fuzzy logic to Boolean models for digital soil assessment

    NARCIS (Netherlands)

    Gruijter, de J.J.; Walvoort, D.J.J.; Bragato, G.

    2011-01-01

    Boolean models based on expert knowledge are often used to classify soils into a limited number of classes of a difficult-to-measure soil attribute. Although the primary data used for these classifications contain information on whether the soil is a typical class member or a boundary case between

  18. On the treatment of dependence in making decisions about risk

    International Nuclear Information System (INIS)

    Bier, V.M.

    1989-01-01

    Much attention has been paid to the treatment of dependence in performing probabilistic risk assessments (PRA). For instance, causal dependencies (e.g., common cause failures, cascade failures, and intersystem dependencies) have been taken into account in PRAs beginning with the Reactor Safety Study (USNRC, 1975). In addition, beginning in the early 1980s, attention began to be paid to the issue of probabilistic dependence between the failure rates (Apostolakis and Kaplan, 1981) or seismic fragilities (Kaplan, 1985) of similar components, and the impact of such dependence on risk estimates. By now, it has been clearly demonstrated that failure to take either causal or probabilistic dependence into account in PRAs can lead to misleading results, typically underestimates of the true risk. However, there has been little attention to date on the effects of dependence in the area of decision making. The objectives of this paper are to illustrate the potential importance of dependence in making decisions about risks, and to present some ideas on how to communicate the effects of dependence to decision makers in a clear and easily comprehensible manner

  19. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    International Nuclear Information System (INIS)

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-01-01

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L^2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods
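
    A minimal one-dimensional sketch of the element-wise collocation idea is given below: the parameter domain of a uniform random input is split into elements, a Gauss-Legendre grid is placed on each element, and the weighted local contributions are summed to approximate the mean of the solution. The toy response function and the uniform input are assumptions made for illustration; the paper itself treats PDEs with many random dimensions and other quadrature rules.

        import numpy as np
        from numpy.polynomial.legendre import leggauss

        def model(xi):
            # Toy stand-in for the PDE solution evaluated at one collocation point.
            return np.exp(xi)

        def me_pcm_mean(n_elements=4, n_nodes=5, a=-1.0, b=1.0):
            """Mean of model(xi) for xi ~ Uniform(a, b), computed element by element."""
            edges = np.linspace(a, b, n_elements + 1)
            nodes, weights = leggauss(n_nodes)              # Gauss nodes/weights on [-1, 1]
            total = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                x = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)   # map nodes into the element
                w = 0.5 * (hi - lo) * weights
                total += np.sum(w * model(x))
            return total / (b - a)                          # normalize by the uniform density

        print(me_pcm_mean(), (np.exp(1.0) - np.exp(-1.0)) / 2.0)   # collocation vs exact mean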

  20. Stability of Boolean multilevel networks.

    Science.gov (United States)

    Cozzo, Emanuele; Arenas, Alex; Moreno, Yamir

    2012-09-01

    The study of the interplay between the structure and dynamics of complex multilevel systems is a pressing challenge nowadays. In this paper, we use a semiannealed approximation to study the stability properties of random Boolean networks in multiplex (multilayered) graphs. Our main finding is that the multilevel structure provides a mechanism for the stabilization of the dynamics of the whole system even when individual layers operate in the chaotic regime, therefore identifying new ways of feedback between the structure and the dynamics of these systems. Our results point out the need for a conceptual transition from the physics of single-layered networks to the physics of multiplex networks. Finally, the fact that the coupling modifies the phase diagram and the critical conditions of the isolated layers suggests that interdependency can be used as a control mechanism.

  1. Next-generation probabilistic seismicity forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Hiemer, S.

    2014-07-01

    The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a
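
    The kernel-smoothing step at the heart of such forecasts can be illustrated with a short routine that spreads weighted point sources (past epicentres, or fault cells weighted by slip rate) over a grid with a Gaussian kernel. The coordinates, weights and bandwidth below are invented for illustration and are not taken from the thesis.

        import numpy as np

        def kernel_smoothed_rate(grid_xy, sources_xy, weights, bandwidth_km=15.0):
            """Smooth weighted point sources into a continuous spatial rate density."""
            rate = np.zeros(len(grid_xy))
            h2 = bandwidth_km ** 2
            for (sx, sy), w in zip(sources_xy, weights):
                d2 = (grid_xy[:, 0] - sx) ** 2 + (grid_xy[:, 1] - sy) ** 2
                rate += w * np.exp(-0.5 * d2 / h2) / (2.0 * np.pi * h2)
            return rate

        rng = np.random.default_rng(1)
        epicentres = rng.uniform(0.0, 100.0, size=(50, 2))       # fake catalogue, x/y in km
        xs, ys = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
        grid = np.column_stack([xs.ravel(), ys.ravel()])
        density = kernel_smoothed_rate(grid, epicentres, np.ones(len(epicentres)))
        print(density.sum() * 5.0 * 5.0)    # integrates back to roughly the 50 events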

  2. Next-generation probabilistic seismicity forecasting

    International Nuclear Information System (INIS)

    Hiemer, S.

    2014-01-01

    The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a

  3. Improved detection of chemical substances from colorimetric sensor data using probabilistic machine learning

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti; Buus, Ole Thomsen; Larsen, Jan

    2017-01-01

    We present a data-driven machine learning approach to detect drug- and explosives-precursors using colorimetric sensor technology for air-sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present a fully-integrated portable prototype for air sampling...... of the highly multi-variate data produced from the colorimetric chip a number of machine learning techniques are employed to provide reliable classification of target analytes from confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction...... in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate of analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions...

  4. Standardized approach for developing probabilistic exposure factor distributions

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, Randy L.; McKone, Thomas E.; Sohn, Michael D.

    2003-03-01

    The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (1) identify the most robust demographic variables within the population for a given exposure factor, (2) partition the population data into subsets based on these variables, and (3) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors--body weight (BW) and exposure duration (ED)--using data for the U.S. population. For these factors we provide a first set of subpopulation based archetypal distributions along with methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.
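
    The two-step procedure can be mimicked with a short script: fit archetypal distributions to pooled subpopulation data once, then sample from those archetypes with scenario-specific weights to build the input distribution for a given assessment. The data, the lognormal choice and the demographic split below are all invented placeholders.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Step 1: fit an archetypal body-weight distribution per subgroup (synthetic data).
        pooled = {"adult": rng.lognormal(np.log(75.0), 0.2, 5000),
                  "child": rng.lognormal(np.log(25.0), 0.3, 5000)}
        archetypes = {k: stats.lognorm.fit(v, floc=0) for k, v in pooled.items()}

        # Step 2: sample the archetypes with weights matching the exposed population at a site.
        def scenario_samples(weights, n=10000):
            groups = rng.choice(list(weights), size=n, p=list(weights.values()))
            out = np.empty(n)
            for g, params in archetypes.items():
                mask = groups == g
                out[mask] = stats.lognorm.rvs(*params, size=mask.sum(), random_state=rng)
            return out

        bw = scenario_samples({"adult": 0.7, "child": 0.3})
        print(bw.mean(), np.percentile(bw, [5, 50, 95]))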

  5. The Contrast Effect in Temporal and Probabilistic Discounting

    Science.gov (United States)

    Chen, Cheng; He, Guibing

    2016-01-01

    In this information age, messages related to time and uncertainty surround us. At the same time, our daily lives are filled with decisions accompanied by temporal delay or uncertainty. Will such information influence our temporal and probabilistic discounting? The authors address this question from the perspectives of decision by sampling (DbS) theory and psychological distance theory. Studies 1 and 2 investigated the effect of contextual messages on temporal discounting and probabilistic discounting, respectively. The results indicated that participants who memorized messages about long-term and low-probability events rated delay or uncertainty as mentally closer and exhibited a lesser degree of value discounting than those who memorized messages regarding short-term and high-probability events. In addition, a sense of distance from present or reality mediated the effect of contextual messages on value discounting. The implications of the current findings for theory and applications are discussed. PMID:27014122

  6. Generación de Patrones de Evaluación Probabilista del Mantenimiento. // Generation of Probabilistic Evaluation Patterns for Maintenance.

    Directory of Open Access Journals (Sweden)

    A. Torres Valle

    2004-05-01

    The Boolean equation represented by the minimal cut sets, whether at the level of a single system or of an entire Probabilistic Safety Analysis (PSA), has been used to evaluate out-of-service equipment configurations during operation. This has been done through PSA applications for the optimization of operational regimes. Obtaining such Boolean equations demands a process of high mathematical complexity, so exhaustive studies are required even to apply their results. The important maintenance benefits obtained from these applications call for the development of methodologies that bring these optimization methods closer to maintenance management systems and make them usable by personnel who are not specialists in probabilistic topics. The article presents a methodology for the preparation, solution and use of fault tree results starting from technological diagrams. This algorithm has been automated in the code MOSEG Win Ver 1.0. Keywords: probabilistic safety analysis, technological diagrams, fault trees, failure modes.____________________________________________________________________________Abstract. The Boolean equation represented by the minimal cut sets, both at a system level and at the level of a whole Probabilistic Safety Analysis (PSA), has been used to evaluate the out-of-order equipment configurations during operation. This analysis has been used during PSA applications for the optimization of facility operational states. A high mathematical complexity is usually demanded as part of the process for generating such Boolean equations. Exhaustive studies are also required even for applying their results. Given the advantages obtained from these applications, there is an arising need for developing methodologies that incorporate such methods into the maintenance management, facilitating their use by personnel who are not specialists in probabilistic analysis.
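
    The kind of minimal-cut-set evaluation described above can be sketched as follows: the top-event probability is recomputed with selected components forced out of service, using the usual min-cut-set upper bound. The cut sets and basic-event probabilities are invented, and the sketch does not reproduce the MOSEG code.

        # Illustrative minimal cut sets and basic-event failure probabilities.
        CUT_SETS = [{"PUMP-A", "PUMP-B"}, {"PUMP-A", "VALVE-B"}, {"DG-1", "DG-2"}]
        P_FAIL = {"PUMP-A": 1e-3, "PUMP-B": 2e-3, "VALVE-B": 5e-4, "DG-1": 1e-2, "DG-2": 1e-2}

        def top_event_probability(out_of_service=()):
            """Min-cut-set upper bound, with out-of-service components set to probability 1."""
            p = dict(P_FAIL, **{c: 1.0 for c in out_of_service})
            survive = 1.0
            for cut in CUT_SETS:
                q = 1.0
                for comp in cut:
                    q *= p[comp]
                survive *= (1.0 - q)        # 1 - prod(1 - Q_i) ~ sum(Q_i) for rare events
            return 1.0 - survive

        print(top_event_probability())              # baseline configuration
        print(top_event_probability({"PUMP-A"}))    # risk while PUMP-A is out for maintenance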

  7. Message passing for quantified Boolean formulas

    International Nuclear Information System (INIS)

    Zhang, Pan; Ramezanpour, Abolfazl; Zecchina, Riccardo; Zdeborová, Lenka

    2012-01-01

    We introduce two types of message passing algorithms for quantified Boolean formulas (QBF). The first type is a message-passing-based heuristic that can prove unsatisfiability of the QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second type, we use message passing to guide branching heuristics of a Davis–Putnam–Logemann–Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristics give robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers.

  8. Optical reversible programmable Boolean logic unit.

    Science.gov (United States)

    Chattopadhyay, Tanay

    2012-07-20

    Computing with reversibility is the only way to avoid the dissipation of energy associated with bit erasure. So, a reversible microprocessor is required for future computing. In this paper, a design of a simple all-optical reversible programmable processor is proposed using a polarizing beam splitter, liquid crystal-phase spatial light modulators, a half-wave plate, and plane mirrors. This circuit can perform 16 logical operations according to three programming inputs. Also, inputs can be easily recovered from the outputs. It is named the "reversible programmable Boolean logic unit (RPBLU)." The logic unit is the basic building block of many complex computational operations. Hence the design is important in this sense. Two orthogonally polarized lights are defined here as the two logical states, respectively.

  9. Probabilistic composition of preferences, theory and applications

    CERN Document Server

    Parracho Sant'Anna, Annibal

    2015-01-01

    Putting forward a unified presentation of the features and possible applications of probabilistic preferences composition, and serving as a methodology for decisions employing multiple criteria, this book maximizes reader insights into the evaluation in probabilistic terms and the development of composition approaches that do not depend on assigning weights to the criteria. With key applications in important areas of management such as failure modes and effects analysis and productivity analysis – together with explanations about the application of the concepts involved – this book makes available numerical examples of probabilistic transformation development and probabilistic composition. Useful not only as a reference source for researchers, but also in teaching classes of graduate courses in Production Engineering and Management Science, the key themes of the book will be of especial interest to researchers in the field of Operational Research.

  10. Boolean Dynamic Modeling Approaches to Study Plant Gene Regulatory Networks: Integration, Validation, and Prediction.

    Science.gov (United States)

    Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R

    2017-01-01

    Mathematical models based on dynamical systems theory are well-suited tools for the integration of available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.
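
    As a toy illustration of the kind of Boolean network analysis involved, the snippet below enumerates the synchronous dynamics of a three-gene network and reports its fixed points and the attractor reached from one initial condition. The regulatory rules are invented and are not a published plant network.

        from itertools import product

        def update(state):
            a, b, c = state
            return (int(not c),            # gene A is repressed by C
                    int(a and not c),      # gene B requires A and absence of C
                    int(b or c))           # gene C is activated by B or sustains itself

        def fixed_points():
            return [s for s in product((0, 1), repeat=3) if update(s) == s]

        def attractor_from(state):
            seen = []
            while state not in seen:
                seen.append(state)
                state = update(state)
            return seen[seen.index(state):]        # the cycle eventually reached

        print("fixed points:", fixed_points())
        print("attractor from (1, 0, 0):", attractor_from((1, 0, 0)))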

  11. Trait-Dependent Biogeography: (Re)Integrating Biology into Probabilistic Historical Biogeographical Models.

    Science.gov (United States)

    Sukumaran, Jeet; Knowles, L Lacey

    2018-04-20

    The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Probabilistic programming in Python using PyMC3

    Directory of Open Access Journals (Sweden)

    John Salvatier

    2016-04-01

    Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other probabilistic programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
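
    A minimal PyMC3 model in the style described by the paper is shown below: the model is declared directly in Python inside a model context, and pm.sample falls back to the gradient-based NUTS sampler. The linear-regression example and synthetic data are placeholders, not taken from the paper; argument names follow recent PyMC3 3.x releases.

        import numpy as np
        import pymc3 as pm

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 50)
        y = 1.0 + 2.5 * x + rng.normal(0.0, 0.3, 50)      # synthetic straight-line data

        with pm.Model():
            intercept = pm.Normal("intercept", mu=0.0, sigma=10.0)
            slope = pm.Normal("slope", mu=0.0, sigma=10.0)
            noise = pm.HalfNormal("noise", sigma=1.0)
            pm.Normal("obs", mu=intercept + slope * x, sigma=noise, observed=y)
            trace = pm.sample(1000, tune=1000, cores=1)   # NUTS is chosen automatically

        print(pm.summary(trace))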

  13. A Probabilistic Approach to Control of Complex Systems and Its Application to Real-Time Pricing

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2014-01-01

    Control of complex systems is one of the fundamental problems in control theory. In this paper, a control method for complex systems modeled by a probabilistic Boolean network (PBN) is studied. A PBN is widely used as a model of complex systems such as gene regulatory networks. For a PBN, the structural control problem is newly formulated. In this problem, a discrete probability distribution appearing in a PBN is controlled by the continuous-valued input. For this problem, an approximate solution method using a matrix-based representation for a PBN is proposed. Then, the problem is approximated by a linear programming problem. Furthermore, the proposed method is applied to design of real-time pricing systems of electricity. Electricity conservation is achieved by appropriately determining the electricity price over time. The effectiveness of the proposed method is presented by a numerical example on real-time pricing systems.
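
    The snippet below illustrates what a probabilistic Boolean network is, independently of the control method of the paper: at every step each node selects one of its candidate predictor functions according to fixed selection probabilities. The two-node network, its rules and its probabilities are invented, and the structural-control / linear-programming step is not reproduced.

        import random

        random.seed(0)

        # Node 0 has two candidate predictors chosen with probability 0.7 / 0.3,
        # node 1 has a single predictor.  All rules are illustrative only.
        PREDICTORS = {
            0: [(0.7, lambda s: s[1]),            # copy node 1
                (0.3, lambda s: 1 - s[1])],       # or negate it
            1: [(1.0, lambda s: s[0] | s[1])],
        }

        def step(state):
            nxt = []
            for node in sorted(PREDICTORS):
                r, acc = random.random(), 0.0
                for p, f in PREDICTORS[node]:
                    acc += p
                    if r <= acc:
                        nxt.append(f(state))
                        break
            return tuple(nxt)

        # Estimate the long-run state distribution by simulation.
        counts, state = {}, (0, 1)
        for _ in range(50000):
            state = step(state)
            counts[state] = counts.get(state, 0) + 1
        print({s: round(c / 50000, 3) for s, c in sorted(counts.items())})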

  14. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization

  15. Simulating Quantitative Cellular Responses Using Asynchronous Threshold Boolean Network Ensembles

    Directory of Open Access Journals (Sweden)

    Shah Imran

    2011-07-01

    Background: With increasing knowledge about the potential mechanisms underlying cellular functions, it is becoming feasible to predict the response of biological systems to genetic and environmental perturbations. Due to the lack of homogeneity in living tissues it is difficult to estimate the physiological effect of chemicals, including potential toxicity. Here we investigate a biologically motivated model for estimating tissue level responses by aggregating the behavior of a cell population. We assume that the molecular state of individual cells is independently governed by discrete non-deterministic signaling mechanisms. This results in noisy but highly reproducible aggregate level responses that are consistent with experimental data. Results: We developed an asynchronous threshold Boolean network simulation algorithm to model signal transduction in a single cell, and then used an ensemble of these models to estimate the aggregate response across a cell population. Using published data, we derived a putative crosstalk network involving growth factors and cytokines - i.e., Epidermal Growth Factor, Insulin, Insulin-like Growth Factor Type 1, and Tumor Necrosis Factor α - to describe early signaling events in cell proliferation signal transduction. Reproducibility of the modeling technique across ensembles of Boolean networks representing cell populations is investigated. Furthermore, we compare our simulation results to experimental observations of hepatocytes reported in the literature. Conclusion: A systematic analysis of the results following differential stimulation of this model by growth factors and cytokines suggests that: (a) using Boolean network ensembles with asynchronous updating provides biologically plausible noisy individual cellular responses with reproducible mean behavior for large cell populations, and (b) with sufficient data our model can estimate the response to different concentrations of extracellular ligands. Our
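
    A stripped-down version of the ensemble idea is sketched below: each simulated cell is an asynchronous threshold Boolean network (a node switches on when the weighted sum of its inputs reaches its threshold, and nodes are updated one at a time in random order), and the population-level response is the fraction of cells whose output node is active. The four-node cascade, weights and thresholds are invented and are much simpler than the crosstalk network of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical cascade: ligand (node 0) -> receptor -> kinase -> output (node 3).
        W = np.array([[0, 0, 0, 0],
                      [1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0]], dtype=float)   # W[i, j]: influence of node j on node i
        THRESH = np.array([0.5, 0.5, 0.5, 0.5])

        def simulate_cell(ligand_on, steps=40):
            state = np.zeros(4, dtype=int)
            state[0] = ligand_on                      # the ligand input is clamped
            for _ in range(steps):
                i = rng.integers(1, 4)                # asynchronous: one random non-input node
                state[i] = int(W[i] @ state >= THRESH[i])
            return state[3]                           # activity of the output node

        def population_response(ligand_on, n_cells=500):
            return np.mean([simulate_cell(ligand_on) for _ in range(n_cells)])

        print(population_response(1), population_response(0))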

  16. Probabilistic Decision Graphs - Combining Verification and AI Techniques for Probabilistic Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2004-01-01

    We adopt probabilistic decision graphs developed in the field of automated verification as a tool for probabilistic model representation and inference. We show that probabilistic inference has linear time complexity in the size of the probabilistic decision graph, that the smallest probabilistic ...

  17. Learning Probabilistic Logic Models from Probabilistic Examples.

    Science.gov (United States)

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.

  18. Probabilistic population aging

    Science.gov (United States)

    2017-01-01

    We merge two methodologies, prospective measures of population aging and probabilistic population forecasts. We compare the speed of change and variability in forecasts of the old age dependency ratio and the prospective old age dependency ratio as well as the same comparison for the median age and the prospective median age. While conventional measures of population aging are computed on the basis of the number of years people have already lived, prospective measures are computed also taking account of the expected number of years they have left to live. Those remaining life expectancies change over time and differ from place to place. We compare the probabilistic distributions of the conventional and prospective measures using examples from China, Germany, Iran, and the United States. The changes over time and the variability of the prospective indicators are smaller than those that are observed in the conventional ones. A wide variety of new results emerge from the combination of methodologies. For example, for Germany, Iran, and the United States the likelihood that the prospective median age of the population in 2098 will be lower than it is today is close to 100 percent. PMID:28636675

  19. Generalized probabilistic scale space for image restoration.

    Science.gov (United States)

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  20. Fast probabilistic file fingerprinting for big data.

    Science.gov (United States)

    Tretyakov, Konstantin; Laur, Sven; Smant, Geert; Vilo, Jaak; Prins, Pjotr

    2013-01-01

    Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff.
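
    The core idea, hashing a fixed number of bytes drawn from deterministically seeded pseudo-random offsets instead of the whole file, can be sketched in a few lines. This is a simplified illustration of the concept and does not reproduce the fingerprint format of the pfff tool.

        import hashlib
        import os
        import random

        def probabilistic_fingerprint(path, n_samples=1024, chunk=4):
            """Hash n_samples small chunks read from pseudo-random offsets.

            Both sides of a comparison pick the same offsets because the
            pseudo-random generator is seeded with the file size."""
            size = os.path.getsize(path)
            digest = hashlib.sha1(str(size).encode())
            if size <= n_samples * chunk:                    # small file: hash everything
                with open(path, "rb") as f:
                    digest.update(f.read())
                return digest.hexdigest()
            offsets = sorted(random.Random(size).sample(range(size - chunk), n_samples))
            with open(path, "rb") as f:
                for off in offsets:
                    f.seek(off)
                    digest.update(f.read(chunk))
            return digest.hexdigest()

        # Example: probabilistic_fingerprint("/path/to/large_sequencing_file.fastq")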

  1. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    Science.gov (United States)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Despite the fact that many studies are carried out on this subject, they use complex mathematical formulations that are computationally expensive, and are often not easy for implementation. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in physical sense and significantly reduces the complexity and computation time and resource required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion by accounting for the mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
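
    Numerically, the heuristic reduces to a few distribution operations. The sketch below fits a lognormal to a (synthetic) pre-entrainment particle size distribution, splits it at an assumed threshold size of incipient motion, and reports the mobile fraction together with the mean sizes of the entrained and residual fractions. All grain-size numbers are invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        # Synthetic pre-entrainment bed PSD (grain diameters in mm), assumed lognormal.
        pre_psd = rng.lognormal(np.log(0.8), 0.6, 2000)
        shape, loc, scale = stats.lognorm.fit(pre_psd, floc=0)
        bed = stats.lognorm(shape, loc, scale)

        d_threshold = 1.2          # assumed threshold particle size of incipient motion (mm)

        mobile_fraction = bed.cdf(d_threshold)          # grains finer than the threshold move
        entrained = pre_psd[pre_psd <= d_threshold]     # PSD of the entrained sediment
        residual = pre_psd[pre_psd > d_threshold]       # post-entrainment bed PSD

        print(f"mobile fraction of the bed: {mobile_fraction:.2f}")
        print(f"mean entrained size: {entrained.mean():.2f} mm, "
              f"mean residual size: {residual.mean():.2f} mm")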

  2. Boolean logic and character state identity: pitfalls of character coding in metazoan cladistics

    NARCIS (Netherlands)

    Jenner, Ronald A.

    2002-01-01

    A critical study of the morphological data sets used for the most recent analyses of metazoan cladistics exposes a rather cavalier attitude towards character coding. Binary absence/presence coding is ubiquitous, but without any explicit justification. This uncompromising application of Boolean logic

  3. Design of a Nanoscale, CMOS-Integrable, Thermal-Guiding Structure for Boolean-Logic and Neuromorphic Computation.

    Science.gov (United States)

    Loke, Desmond; Skelton, Jonathan M; Chong, Tow-Chong; Elliott, Stephen R

    2016-12-21

    One of the requirements for achieving faster CMOS electronics is to mitigate the unacceptably large chip areas required to steer heat away from or, more recently, toward the critical nodes of state-of-the-art devices. Thermal-guiding (TG) structures can efficiently direct heat by "meta-materials" engineering; however, some key aspects of the behavior of these systems are not fully understood. Here, we demonstrate control of the thermal-diffusion properties of TG structures by using nanometer-scale, CMOS-integrable, graphene-on-silica stacked materials through finite-element-methods simulations. It has been shown that it is possible to implement novel, controllable, thermally based Boolean-logic and spike-timing-dependent plasticity operations for advanced (neuromorphic) computing applications using such thermal-guide architectures.

  4. Probabilistic finite elements

    Science.gov (United States)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.
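
    For a single input variable the perturbation idea can be written out by hand. The sketch below takes a one-element axial bar with displacement u = FL/(EA), treats Young's modulus E as the random input with designer-supplied mean and variance, and recovers the mean and variance of u from a second-order Taylor expansion, checked against Monte Carlo. The numbers are arbitrary, and the full PFEM of course works on discretized fields rather than a single closed-form response.

        import numpy as np

        F, L, A = 10e3, 2.0, 1e-4            # load (N), length (m), cross-section (m^2)
        mu_E, sd_E = 200e9, 20e9             # mean and standard deviation of E (Pa)

        u = lambda E: F * L / (E * A)        # closed-form response of the one-element bar

        # Derivatives of the response with respect to E, evaluated at the mean.
        du = -F * L / (mu_E ** 2 * A)
        d2u = 2.0 * F * L / (mu_E ** 3 * A)

        mean_u = u(mu_E) + 0.5 * d2u * sd_E ** 2     # second-order estimate of the mean
        var_u = du ** 2 * sd_E ** 2                  # first-order estimate of the variance

        rng = np.random.default_rng(6)
        samples = u(rng.normal(mu_E, sd_E, 200000))  # Monte Carlo check
        print(mean_u, samples.mean())
        print(var_u, samples.var())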

  5. Shear-Rate-Dependent Behavior of Clayey Bimaterial Interfaces at Landslide Stress Levels

    Science.gov (United States)

    Scaringi, Gianvito; Hu, Wei; Xu, Qiang; Huang, Runqiu

    2018-01-01

    The behavior of reactivated and first-failure landslides after large displacements is controlled by the available shear resistance in a shear zone and/or along slip surfaces, such as a soil-bedrock interface. Among the factors influencing the resistance parameter, the dependence on the shear rate can trigger catastrophic evolution (rate-weakening) or exert a slow-down feedback (rate-strengthening) upon stress perturbation. We present ring-shear test results, performed under various normal stresses and shear rates, on clayey soils from a landslide shear zone, on its parent lithology and other lithologies, and on clay-rock interface samples. We find that depending on the materials in contact, the normal stress, and the stress history, the shear-rate-dependent behaviors differ. We discuss possible models and underlying mechanisms for the time-dependent behavior of landslides in clay soils.

  6. Probabilistic integrity assessment of pressure tubes in an operating pressurized heavy water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Young-Jin; Park, Heung-Bae [KEPCO E and C, 188 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-870 (Korea, Republic of); Lee, Jung-Min; Kim, Young-Jin [School of Mechanical Engineering, Sungkyunkwan University, 300 Chunchun-dong, Jangan-gu, Suwon-si, Gyeonggi-do 440-746 (Korea, Republic of); Ko, Han-Ok [Korea Institute of Nuclear Safety, 34 Gwahak-ro, Yuseong-gu, Daejeon-si 305-338 (Korea, Republic of); Chang, Yoon-Suk, E-mail: yschang@khu.ac.kr [Department of Nuclear Engineering, Kyung Hee University, 1 Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701 (Korea, Republic of)

    2012-02-15

    Even though pressure tubes are major components of a pressurized heavy water reactor (PHWR), only small proportions of pressure tubes are sampled for inspection due to limited inspection time and costs. Since the inspection scope and integrity evaluation have been treated by using a deterministic approach in general, a set of conservative data was used instead of all known information related to in-service degradation mechanisms because of inherent uncertainties in the examination. Recently, in order that pressure tube degradations identified in a sample of inspected pressure tubes are taken into account to address the balance of the uninspected ones in the reactor core, a probabilistic approach has been introduced. In the present paper, probabilistic integrity assessments of PHWR pressure tubes were carried out based on accumulated operating experiences and enhanced technology. Parametric analyses on key variables were conducted, which were periodically measured by in-service inspection program, such as deuterium uptake rate, dimensional change rate of pressure tube and flaw size distribution. Subsequently, a methodology to decide optimum statistical distribution by using a robust method adopting a genetic algorithm was proposed and applied to the most influential variable to verify the reliability of the proposed method. Finally, pros and cons of the alternative distributions comparing with corresponding ones derived from the traditional method as well as technical findings from the statistical assessment were discussed to show applicability to the probabilistic assessment of pressure tubes.

  7. A boolean optimization method for reloading a nuclear reactor

    International Nuclear Information System (INIS)

    Misse Nseke, Theophile.

    1982-04-01

    We attempt to solve the problem of optimal reloading of fuel assemblies in a PWR, without any assumption on the fuel nature. Any loading is marked by n^2 Boolean variables u_ij. The state of the reactor is characterized by its K_eff and the related power distribution. The resulting non-linear allocation problems are solved through mathematical programming techniques combining the simplex algorithm and an extension of Balas-Geoffrion's algorithm. Some optimal solutions are given for PWRs with assemblies of different enrichment

  8. A study on the dependency evaluation for multiple human actions in human reliability analysis of probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, D. I.; Yang, J. E.; Jung, W. D.; Sung, T. Y.; Park, J. H.; Lee, Y. H.; Hwang, M. J.; Kim, K. Y.; Jin, Y. H.; Kim, S. C.

    1997-02-01

    This report describes the study results on the method of the dependency evaluation and the modeling, and the limited value of human error probability (HEP) for multiple human actions in accident sequences of probabilistic safety assessment (PSA). THERP and Parry's method, which have been generally used in dependency evaluation of human reliability analysis (HRA), are introduced and their limitations are discussed. A new dependency evaluation method in HRA is established to make up for the weak points of THERP and Parry's methods. The limited value of HEP is also established based on the review of several HRA-related documents. This report describes the definition, the type, the evaluation method, and the evaluation example of dependency to help the reader's understanding. It is expected that these study results will give guidance to HRA analysts in dependency evaluation of multiple human actions and enable PSA analysts to understand HRA in detail. (author). 23 refs., 3 tabs., 2 figs

  9. Financial and Real Sector Leading Indicators of Recessions in Brazil Using Probabilistic Models

    Directory of Open Access Journals (Sweden)

    Fernando Nascimento de Oliveira

    We examine the usefulness of various financial and real sector variables to forecast recessions in Brazil between one and eight quarters ahead. We estimate probabilistic models of recession and select models based on their out-of-sample forecasts, using the Receiver Operating Characteristic (ROC) function. We find that the predictive out-of-sample ability of several models varies depending on the number of quarters ahead to forecast and on the number of regressors used in the model specification. The models selected seem to be relevant to give early warnings of recessions in Brazil.
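
    The model-selection loop described above can be mimicked with synthetic data: fit a logit model of a recession indicator on candidate predictors over an estimation window, score each specification by the area under the ROC curve on the held-out quarters, and keep the specification with the best out-of-sample AUC. Variable names and data below are invented, not the Brazilian series used in the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)

        # Synthetic quarterly data: two candidate leading indicators and a 0/1 recession flag.
        n = 200
        spread = rng.normal(0.0, 1.0, n)
        credit = rng.normal(0.0, 1.0, n)
        latent = -1.0 - 1.5 * spread + 0.5 * credit + rng.logistic(0.0, 1.0, n)
        recession = (latent > 0).astype(int)

        X = np.column_stack([spread, credit])
        train, test = slice(0, 150), slice(150, None)       # last quarters held out of sample

        aucs = {}
        for name, cols in {"spread only": [0], "spread + credit": [0, 1]}.items():
            model = LogisticRegression().fit(X[train][:, cols], recession[train])
            prob = model.predict_proba(X[test][:, cols])[:, 1]
            aucs[name] = roc_auc_score(recession[test], prob)
        print(aucs)     # keep the specification with the larger out-of-sample AUC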

  10. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling

  11. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Science.gov (United States)

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be

  12. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  13. Boolean-Like and Frequentistic Nonstandard Semantics for First-Order Predicate Calculus without Functions

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    2001-01-01

    Roč. 5, č. 1 (2001), s. 45-57 ISSN 1432-7643 R&D Projects: GA AV ČR IAA1030803 Institutional research plan: AV0Z1030915 Keywords: first-order predicate calculus * standard semantics * Boolean-like semantics * frequentistic semantics * completeness theorems Subject RIV: BA - General Mathematics

  14. Does a more sophisticated storm erosion model improve probabilistic erosion estimates?

    NARCIS (Netherlands)

    Ranasinghe, R.W.M.R.J.B.; Callaghan, D.; Roelvink, D.

    2013-01-01

    The dependency between the accuracy/uncertainty of storm erosion exceedance estimates obtained via a probabilistic model and the level of sophistication of the structural function (storm erosion model) embedded in the probabilistic model is assessed via the application of Callaghan et al.'s (2008)

  15. Error Immune Logic for Low-Power Probabilistic Computing

    Directory of Open Access Journals (Sweden)

    Bo Marr

    2010-01-01

    design for the maximum amount of energy savings per a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.
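
    In the same spirit, the trade-off between error rate and noise can be explored with a toy simulation of a probabilistic full adder whose gate outputs flip with probability p, a crude proxy for operating the gates at reduced supply energy. The flip probabilities and their mapping to energy are illustrative only and unrelated to the Spice results of the paper.

        import random

        random.seed(7)

        def noisy(bit, p_flip):
            """Return the bit, flipped with probability p_flip (probabilistic gate model)."""
            return bit ^ (random.random() < p_flip)

        def probabilistic_full_adder(a, b, cin, p_flip):
            s = noisy(a ^ b ^ cin, p_flip)
            cout = noisy((a & b) | (cin & (a ^ b)), p_flip)
            return s, cout

        def error_rate(p_flip, trials=20000):
            errors = 0
            for _ in range(trials):
                a, b, cin = (random.getrandbits(1) for _ in range(3))
                s, cout = probabilistic_full_adder(a, b, cin, p_flip)
                exact = a + b + cin
                errors += (s, cout) != (exact & 1, exact >> 1)
            return errors / trials

        for p in (0.0, 0.01, 0.05):
            print(p, error_rate(p))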

  16. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  17. A probabilistic model of RNA conformational space

    DEFF Research Database (Denmark)

    Frellsen, Jes; Moltke, Ida; Thiim, Martin

    2009-01-01

    The increasing importance of non-coding RNA in biology and medicine has led to a growing interest in the problem of RNA 3-D structure prediction. As is the case for proteins, RNA 3-D structure prediction methods require two key ingredients: an accurate energy function and a conformational sampling ... the discrete nature of the fragments necessitates the use of carefully tuned, unphysical energy functions, and their non-probabilistic nature impairs unbiased sampling. We offer a solution to the sampling problem that removes these important limitations: a probabilistic model of RNA structure that allows efficient sampling of RNA conformations in continuous space, and with associated probabilities. We show that the model captures several key features of RNA structure, such as its rotameric nature and the distribution of the helix lengths. Furthermore, the model readily generates native-like 3-D ...

  18. Equilibrium and nonequilibrium properties of Boolean decision problems on scale-free graphs with competing interactions with external biases

    Science.gov (United States)

    Zhu, Zheng; Andresen, Juan Carlos; Janzen, Katharina; Katzgraber, Helmut G.

    2013-03-01

    We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free graphs in a magnetic field. Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show, in agreement with analytical calculations, that the system exhibits a de Almeida-Thouless line. Furthermore, we study avalanches in the system at zero temperature to see if the system displays self-organized criticality. This would suggest that damage (avalanches) can spread across the whole system with nonzero probability, i.e., that Boolean decision problems on scale-free networks with competing interactions are fragile when not in thermal equilibrium.
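
    As a concrete illustration of the kind of model described above, the sketch below runs a small Metropolis Monte Carlo simulation of Ising spins (Boolean variables) with random competing couplings on a scale-free-like graph in an external field. It is a minimal, hypothetical example (the network size, temperature, field strength and the use of networkx's barabasi_albert_graph generator are all assumptions), not the authors' simulation code.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, T, h, sweeps = 200, 0.8, 0.1, 1000

G = nx.barabasi_albert_graph(N, m=3, seed=1)                       # scale-free-like graph
J = {tuple(sorted(e)): rng.choice([-1.0, 1.0]) for e in G.edges}   # competing +/-J bonds
s = rng.choice([-1, 1], size=N).astype(float)                      # Boolean variables as Ising spins

def local_field(i):
    """Sum of couplings times neighbouring spins plus the external bias h."""
    return h + sum(J[tuple(sorted((i, j)))] * s[j] for j in G[i])

for _ in range(sweeps):                         # Metropolis sweeps
    for _ in range(N):
        i = int(rng.integers(N))
        dE = 2.0 * s[i] * local_field(i)        # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]

E = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()) - h * s.sum()
print(f"energy per spin {E / N:.3f}, magnetisation {s.mean():.3f}")
```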

  19. Probabilistic Accident Progression Analysis with application to a LMFBR design

    International Nuclear Information System (INIS)

    Jamali, K.M.

    1982-01-01

    A method for probabilistic analysis of accident sequences in nuclear power plant systems referred to as "Probabilistic Accident Progression Analysis" (PAPA) is described. Distinctive features of PAPA include: (1) definition and analysis of initiator-dependent accident sequences on the component level; (2) a new fault-tree simplification technique; (3) a new technique for assessment of the effect of uncertainties in the failure probabilities in the probabilistic ranking of accident sequences; (4) techniques for quantification of dependent failures of similar components, including an iterative technique for high-population components. The methodology is applied to the Shutdown Heat Removal System (SHRS) of the Clinch River Breeder Reactor Plant during its short-term (0 -2 . Major contributors to this probability are the initiators: loss of the main feedwater system, loss of offsite power, and normal shutdown

  20. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing

    Science.gov (United States)

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This

  2. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing

    Directory of Open Access Journals (Sweden)

    Andrew Denovan

    2017-10-01

    Full Text Available The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks.

  3. Calculating the bidirectional reflectance of natural vegetation covers using Boolean models and geometric optics

    Science.gov (United States)

    Strahler, Alan H.; Li, Xiao-Wen; Jupp, David L. B.

    1991-01-01

    The bidirectional radiance or reflectance of a forest or woodland can be modeled using principles of geometric optics and Boolean models for random sets in three-dimensional space. This model may be defined at two levels. At the higher level, the scene includes four components: sunlit and shadowed canopy, and sunlit and shadowed background. The reflectance of the scene is modeled as the sum of the reflectances of the individual components, weighted by their areal proportions in the field of view. At the leaf level, the canopy envelope is an assemblage of leaves, and thus the reflectance is a function of the areal proportions of sunlit and shadowed leaf, and sunlit and shadowed background. Because the proportions of the scene components depend upon the directions of irradiance and exitance, the model accounts for the hotspot that is well known in leaf and tree canopies.
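
    The scene-level mixing rule described above (scene reflectance as the sum of component reflectances weighted by their areal proportions in the field of view) can be written as a one-line computation. The sketch below is a minimal illustration; the component proportions and reflectances are hypothetical placeholders, whereas in the model they are derived from the Boolean-model geometry and the illumination and viewing directions.

```python
# Minimal sketch of the scene-level mixing rule: scene reflectance is the sum of
# component reflectances weighted by their areal proportions in the field of view.
# The numbers below are placeholders; in the model the proportions follow from the
# Boolean-model geometry and the sun/view directions (hence the hotspot behaviour).

def scene_reflectance(proportions, reflectances):
    """proportions and reflectances are dicts keyed by scene component."""
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    return sum(proportions[k] * reflectances[k] for k in proportions)

k_areal = {"sunlit_canopy": 0.45, "shadowed_canopy": 0.20,
           "sunlit_background": 0.25, "shadowed_background": 0.10}   # hypothetical
rho = {"sunlit_canopy": 0.35, "shadowed_canopy": 0.08,
       "sunlit_background": 0.25, "shadowed_background": 0.05}       # hypothetical

print(f"scene reflectance = {scene_reflectance(k_areal, rho):.3f}")
```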

  4. User Practices in Keyword and Boolean Searching on an Online Public Access Catalog.

    Science.gov (United States)

    Ensor, Pat

    1992-01-01

    Discussion of keyword and Boolean searching techniques in online public access catalogs (OPACs) focuses on a study conducted at Indiana State University that examined users' attitudes toward searching on NOTIS (Northwestern Online Total Integrated System). Relevant literature is reviewed, and implications for library instruction are suggested. (17…

  5. An efficient algorithm for computing fixed length attractors based on bounded model checking in synchronous Boolean networks with biochemical applications.

    Science.gov (United States)

    Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N

    2015-04-28

    Genetic regulatory networks are the key to understanding biochemical systems. The state of a genetic regulatory network under a given living environment can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors starting from a random initial state or from the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their computing time increases exponentially with the number and length of the attractors. This study uses bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is better suited to large Boolean networks and to networks with numerous attractors. Comparison against the tool BooleNet in empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
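
    To make the notion of a fixed-length attractor concrete, the sketch below enumerates the attractors of a tiny synchronous Boolean network by brute-force state-space search. The three-node update rules are hypothetical, and the brute-force enumeration is only an illustration of the definition; the paper's method instead encodes the problem for a SAT solver via bounded model checking.

```python
from itertools import product

# Toy synchronous Boolean network with hypothetical update rules.
def step(state):
    a, b, c = state
    return (b and not c,   # a' = b AND NOT c
            a or c,        # b' = a OR c
            not a)         # c' = NOT a

def attractors(n_nodes, length):
    """Return all attractors (cycles) of exactly the given length."""
    found = set()
    for init in product([False, True], repeat=n_nodes):
        seen, s, t = {}, init, 0
        while s not in seen:           # iterate until the trajectory revisits a state
            seen[s] = t
            s = step(s)
            t += 1
        cycle_len = t - seen[s]
        if cycle_len == length:
            cycle, x = [], s
            for _ in range(cycle_len):  # collect the cycle states
                cycle.append(x)
                x = step(x)
            k = cycle.index(min(cycle)) # canonical rotation so duplicates collapse
            found.add(tuple(cycle[k:] + cycle[:k]))
    return found

for L in (1, 2, 3):
    print(L, attractors(3, L))
```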

  6. Interpolation of the discrete logarithm in a finite field of characteristic two by Boolean functions

    DEFF Research Database (Denmark)

    Brandstaetter, Nina; Lange, Tanja; Winterhof, Arne

    2005-01-01

    We obtain bounds on degree, weight, and the maximal Fourier coefficient of Boolean functions interpolating the discrete logarithm in finite fields of characteristic two. These bounds complement earlier results for finite fields of odd characteristic....

  7. Characterization of Boolean Algebras in Terms of Certain States of Jauch-Piron Type

    Science.gov (United States)

    Matoušek, Milan; Pták, Pavel

    2015-12-01

    Suppose that L is an orthomodular lattice (a quantum logic). We show that L is Boolean exactly if L possesses a strongly unital set of weakly Jauch-Piron states, or if L possesses a unital set of weakly positive states. We also discuss some general properties of Jauch-Piron-like states.

  8. The management of subsurface uncertainty using probabilistic modeling of life cycle production forecasts and cash flows

    International Nuclear Information System (INIS)

    Olatunbosun, O. O.

    1998-01-01

    The subject pertains to the implementation of the full range of subsurface uncertainties in life cycle probabilistic forecasting and its extension to project cash flows using the methodology of probabilities. A new tool has been developed as a probabilistic application of Crystal-Ball which can model reservoir volumetrics, life cycle production forecasts and project cash flows in a single environment. The tool is modular, such that the volumetrics and cash flow modules are optional. Production forecasts are often generated by applying a decline equation to single best-estimate values of input parameters such as initial potential, decline rate, abandonment rate, etc., or sometimes from the results of reservoir simulation. The new tool provides a means of implementing the full range of uncertainties and interdependencies of the input parameters in the production forecasts by defining the input parameters as probability density functions (PDFs) and performing several iterations to generate an expectation curve forecast. The abandonment rate is implemented in each iteration via a link to an OPEX model. The expectation curve forecast is input into a cash flow model to generate a probabilistic NPV. Base case and sensitivity runs from reservoir simulation can likewise form the basis for a probabilistic production forecast from which a probabilistic cash flow can be generated. A good illustration of the application of this tool is the modelling of the production forecast for a well that encounters its target reservoirs in an OUT/ODT situation and thus has significant uncertainties. The uncertainty in the presence and size (if present) of a gas cap, and the dependency between ultimate recovery and initial potential, amongst other uncertainties, can be easily implemented in the production forecast with this tool. From the expectation curve forecast, a probabilistic NPV can be easily generated. Possible applications of this tool include: i. estimation of the range of actual recoverable volumes based
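
    A minimal sketch of the general workflow described above (input parameters defined as PDFs, repeated decline-curve forecasts, an abandonment cutoff, and a discounted cash flow per iteration) is given below. All distributions, prices, costs and the exponential-decline form are hypothetical assumptions for illustration; this is not the Crystal-Ball tool described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter, years = 5000, 25
t = np.arange(years)

# Hypothetical input PDFs (the tool described lets users define these distributions)
q_init  = rng.lognormal(mean=np.log(1500.0), sigma=0.3, size=n_iter)  # bbl/day
decline = rng.uniform(0.08, 0.20, size=n_iter)                        # 1/year
q_aband = 100.0                                                        # abandonment cutoff, bbl/day

price, opex, discount = 20.0, 4.0, 0.10        # $/bbl, $/bbl, discount rate per year

profiles = q_init[:, None] * np.exp(-decline[:, None] * t)  # exponential decline per iteration
profiles[profiles < q_aband] = 0.0                           # production stops below cutoff

annual_volumes = profiles * 365.0
cash_flows = annual_volumes * (price - opex)
npv = (cash_flows / (1.0 + discount) ** t).sum(axis=1)

# Expectation-curve style summaries (P90 = value exceeded with 90% probability)
for p in (90, 50, 10):
    print(f"P{p}: cumulative production {np.percentile(annual_volumes.sum(axis=1), 100 - p):.3e} bbl, "
          f"NPV {np.percentile(npv, 100 - p):.3e} $")
```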

  9. Propagating Water Quality Analysis Uncertainty Into Resource Management Decisions Through Probabilistic Modeling

    Science.gov (United States)

    Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.

    2007-12-01

    Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different
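
    The MPN is described above as the maximum likelihood estimate of the true concentration given the counts of non-sterile tubes in a serial dilution of a sample aliquot. The sketch below computes such an MLE for a hypothetical three-dilution, five-tube experiment by maximizing the binomial log-likelihood over a grid of candidate concentrations; the aliquot volumes and tube counts are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical 3-dilution, 5-tube MPN experiment: aliquot volumes (mL) and the
# number of positive (non-sterile) tubes observed at each dilution.
volumes   = np.array([10.0, 1.0, 0.1])
n_tubes   = np.array([5, 5, 5])
positives = np.array([5, 3, 1])

def log_likelihood(conc):
    """Log-likelihood of a fecal coliform concentration (organisms/mL)."""
    p_pos = 1.0 - np.exp(-conc * volumes)          # P(a tube shows growth)
    p_pos = np.clip(p_pos, 1e-12, 1.0 - 1e-12)
    return np.sum(positives * np.log(p_pos) + (n_tubes - positives) * np.log(1.0 - p_pos))

grid = np.logspace(-3, 2, 20000)                    # candidate concentrations
mpn = grid[np.argmax([log_likelihood(c) for c in grid])]
print(f"MPN (MLE) ~ {mpn:.3f} organisms/mL ~ {100 * mpn:.0f} per 100 mL")
```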

  10. Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

    Science.gov (United States)

    Meshkat, Leila; Bryant, Larry

    2014-01-01

    Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.

  11. Effects of data sampling rate on image quality in fan-beam-CT system

    International Nuclear Information System (INIS)

    Iwata, Akira; Yamagishi, Nobutoshi; Suzumura, Nobuo; Horiba, Isao.

    1984-01-01

    Investigation was made into the relationship between spatial resolution or artifacts and data sampling rate in order to pursue the causes of the degradation of CT image quality by computer simulation. First the generation of projection data and the reconstruction calculating process are described, and then results are shown for the relation between angular sampling interval and spatial resolution or artifacts, and for the relation between projection data sampling interval and spatial resolution or artifacts. It was clarified that the formulation of the relationship between spatial resolution and data sampling rate performed so far for parallel X-ray beams can also be applied to fan beams. As a conclusion, when other reconstruction parameters are the same in fan-beam CT systems, spatial resolution is determined by the projection data sampling rate rather than the angular sampling rate. The mechanism of artifact generation due to an insufficient number of angular samples was made clear. It was also made clear that there is a definite relationship among the measuring region, the angular sampling rate and the projection data sampling rate, and that the amount of artifacts depending upon the projection data sampling rate is proportional to the amount of spatial frequency components (aliasing components) of a test object above the Nyquist frequency of the projection data. (Wakatsuki, Y.)
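
    The Nyquist argument in the last sentence can be illustrated with a one-dimensional toy example: a frequency component above the Nyquist frequency of the sampled data folds back to a lower apparent frequency. The sketch below is a generic aliasing demonstration, not a CT simulation; the sampling rate and test frequency are arbitrary assumptions.

```python
import numpy as np

fs = 100.0                    # sampling rate of the projection data (samples per unit length)
nyquist = fs / 2.0
f_true = 70.0                 # object frequency above the Nyquist frequency

n = np.arange(256)
samples = np.cos(2 * np.pi * f_true * n / fs)

spectrum = np.abs(np.fft.rfft(samples))
f_axis = np.fft.rfftfreq(samples.size, d=1.0 / fs)
f_apparent = f_axis[np.argmax(spectrum)]

# 70 folds back to |70 - 100| = 30, i.e. it masquerades as a lower (aliased) frequency
print(f"true frequency {f_true} -> apparent frequency {f_apparent:.1f} (Nyquist = {nyquist})")
```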

  12. Synthesizing biomolecule-based Boolean logic gates.

    Science.gov (United States)

    Miyamoto, Takafumi; Razavi, Shiva; DeRose, Robert; Inoue, Takanari

    2013-02-15

    One fascinating recent avenue of study in the field of synthetic biology is the creation of biomolecule-based computers. The main components of a computing device consist of an arithmetic logic unit, the control unit, memory, and the input and output devices. Boolean logic gates are at the core of the operational machinery of these parts, and hence to make biocomputers a reality, biomolecular logic gates become a necessity. Indeed, with the advent of more sophisticated biological tools, both nucleic acid- and protein-based logic systems have been generated. These devices function in the context of either test tubes or living cells and yield highly specific outputs given a set of inputs. In this review, we discuss various types of biomolecular logic gates that have been synthesized, with particular emphasis on recent developments that promise increased complexity of logic gate circuitry, improved computational speed, and potential clinical applications.

  13. Synthesizing Biomolecule-based Boolean Logic Gates

    Science.gov (United States)

    Miyamoto, Takafumi; Razavi, Shiva; DeRose, Robert; Inoue, Takanari

    2012-01-01

    One fascinating recent avenue of study in the field of synthetic biology is the creation of biomolecule-based computers. The main components of a computing device consist of an arithmetic logic unit, the control unit, memory, and the input and output devices. Boolean logic gates are at the core of the operational machinery of these parts, hence to make biocomputers a reality, biomolecular logic gates become a necessity. Indeed, with the advent of more sophisticated biological tools, both nucleic acid- and protein-based logic systems have been generated. These devices function in the context of either test tubes or living cells and yield highly specific outputs given a set of inputs. In this review, we discuss various types of biomolecular logic gates that have been synthesized, with particular emphasis on recent developments that promise increased complexity of logic gate circuitry, improved computational speed, and potential clinical applications. PMID:23526588

  14. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Directory of Open Access Journals (Sweden)

    Sebastian Höhna

    Full Text Available Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large when compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However, only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear

  15. Complete ccc Boolean algebras, the order sequential topology, and a problem of von Neumann

    Czech Academy of Sciences Publication Activity Database

    Balcar, Bohuslav; Jech, Thomas; Pazák, Tomáš

    2005-01-01

    Roč. 37, č. 6 (2005), s. 885-898 ISSN 0024-6093 Institutional research plan: CEZ:AV0Z10750506; CEZ:AV0Z10190503 Keywords : Boolean algebras * Maharam submeasure * weak distributivity * independent reals Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  16. Influence of heart rate in nonlinear HRV indices as a sampling rate effect evaluated on supine and standing

    Directory of Open Access Journals (Sweden)

    Juan Bolea

    2016-11-01

    Full Text Available The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR) on nonlinear heart rate variability (HRV) indices (correlation dimension, sample and approximate entropy) as a consequence of the HR being the intrinsic sampling rate of the HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS) modulation. First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR assuming similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another based on interpolating the RR time series. Finally, standard and HR-corrected HRV indices were studied in a body position change database. The simulation study showed the HR-dependence of nonlinear indices as a sampling rate effect, as well as the ability of the proposed HR-corrections to attenuate the mean HR influence. Analysis in the body position change database shows that correlation dimension was reduced by around 21% in median values in standing with respect to the supine position (p < 0.05), concomitant with a 28% increase in mean HR (p < 0.05). After HR-correction, correlation dimension decreased by around 18% in standing with respect to the supine position, the decrease still being significant. Sample and approximate entropy showed similar trends. HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.
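
    One of the two HR-correction approaches mentioned above is interpolation of the RR time series onto a uniform time grid before computing the nonlinear index. The sketch below illustrates the idea with a plain sample-entropy implementation and a synthetic RR series; the 4 Hz resampling rate, the entropy parameters and the simulated data are assumptions, and this is not the authors' processing pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D series (Chebyshev distance, tolerance r = r_factor * std)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1        # exclude the self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Hypothetical RR series (seconds); HR-correction by resampling the RR tachogram
# onto a uniform 4 Hz time grid, attenuating the mean-HR (intrinsic sampling) effect.
rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(300)      # ~75 bpm with some variability
t_beats = np.cumsum(rr)
t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / 4.0)
rr_interp = np.interp(t_uniform, t_beats, rr)

print("SampEn raw RR      :", round(sample_entropy(rr), 3))
print("SampEn interpolated:", round(sample_entropy(rr_interp), 3))
```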

  17. Probabilistic linkage to enhance deterministic algorithms and reduce data linkage errors in hospital administrative data.

    Science.gov (United States)

    Hagger-Johnson, Gareth; Harron, Katie; Goldstein, Harvey; Aldridge, Robert; Gilbert, Ruth

    2017-06-30

     BACKGROUND: The pseudonymisation algorithm used to link together episodes of care belonging to the same patients in England (HESID) has never undergone any formal evaluation, to determine the extent of data linkage error. To quantify improvements in linkage accuracy from adding probabilistic linkage to existing deterministic HESID algorithms. Inpatient admissions to NHS hospitals in England (Hospital Episode Statistics, HES) over 17 years (1998 to 2015) for a sample of patients (born 13/28th of months in 1992/1998/2005/2012). We compared the existing deterministic algorithm with one that included an additional probabilistic step, in relation to a reference standard created using enhanced probabilistic matching with additional clinical and demographic information. Missed and false matches were quantified and the impact on estimates of hospital readmission within one year were determined. HESID produced a high missed match rate, improving over time (8.6% in 1998 to 0.4% in 2015). Missed matches were more common for ethnic minorities, those living in areas of high socio-economic deprivation, foreign patients and those with 'no fixed abode'. Estimates of the readmission rate were biased for several patient groups owing to missed matches, which was reduced for nearly all groups. CONCLUSION: Probabilistic linkage of HES reduced missed matches and bias in estimated readmission rates, with clear implications for commissioning, service evaluation and performance monitoring of hospitals. The existing algorithm should be modified to address data linkage error, and a retrospective update of the existing data would address existing linkage errors and their implications.
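
    The additional probabilistic step is, generically, a weighted comparison of identifiers rather than an exact rule. The sketch below shows a Fellegi-Sunter-style match weight as one common way to implement such a step; the fields, m/u probabilities, example records and decision threshold are all hypothetical and this is not the HES/HESID algorithm.

```python
import math

# Illustrative probabilistic linkage scoring. For each field,
# m = P(field agrees | records truly match), u = P(field agrees | non-match).
# The m/u values below are hypothetical.
FIELDS = {
    "nhs_number":    (0.99, 0.0001),
    "date_of_birth": (0.97, 0.003),
    "sex":           (0.99, 0.5),
    "postcode":      (0.90, 0.001),
}

def match_weight(rec_a, rec_b):
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a.get(field) and rec_a.get(field) == rec_b.get(field):
            w += math.log2(m / u)                  # agreement weight
        else:
            w += math.log2((1 - m) / (1 - u))      # disagreement / missing-value weight
    return w

a = {"nhs_number": None, "date_of_birth": "1998-03-13", "sex": "F", "postcode": "M1 1AA"}
b = {"nhs_number": None, "date_of_birth": "1998-03-13", "sex": "F", "postcode": "M1 1AA"}
score = match_weight(a, b)
print(f"match weight = {score:.1f} -> {'link' if score > 10 else 'no link'}")  # threshold is illustrative
```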

  18. Recent developments of the NESSUS probabilistic structural analysis computer program

    Science.gov (United States)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
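
    Importance sampling, one of the probabilistic algorithms mentioned above, estimates a small failure probability by drawing samples from a density centred near the most probable failure point and reweighting them. The sketch below applies it to a hypothetical two-variable limit-state function with standard-normal inputs; it is a generic illustration, not NESSUS code.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def g(x):
    """Hypothetical limit-state function of two standard-normal variables; failure when g < 0."""
    return 6.0 - x[:, 0] - x[:, 1]

n = 20000
shift = np.array([3.0, 3.0])                 # most probable failure point on x1 + x2 = 6

x = rng.standard_normal((n, 2)) + shift      # samples from q = N(shift, I)
log_w = -0.5 * (x**2).sum(axis=1) + 0.5 * ((x - shift)**2).sum(axis=1)   # log[f(x)/q(x)]
p_f = np.mean((g(x) < 0.0) * np.exp(log_w))  # reweighted failure indicator

p_exact = 0.5 * math.erfc(6.0 / 2.0)         # exact value, since x1 + x2 ~ N(0, 2)
print(f"importance sampling: {p_f:.2e}   exact: {p_exact:.2e}")
```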

  19. Probabilistic Criterion for the Economical Assessment of Nuclear Reactors

    International Nuclear Information System (INIS)

    Juanico, L; Florido, Pablo; Bergallo, Juan

    2000-01-01

    In this paper a Monte Carlo probabilistic model for the economic evaluation of nuclear power plants is presented. The probabilistic results have shown a wide spread in the economic performance due to the schedule complexity and coupling of tasks. This spread increases with the discount rate and, hence, becomes more important for developing countries

  20. Feedback topology and XOR-dynamics in Boolean networks with varying input structure.

    Science.gov (United States)

    Ciandrini, L; Maffi, C; Motta, A; Bassetti, B; Cosentino Lagomarsino, M

    2009-08-01

    We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter gamma. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying gamma , this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
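
    The parallel XOR updating scheme is easy to simulate directly. The sketch below builds a random graph in which a fraction gamma of nodes receive K inputs, applies the XOR rule synchronously, and reports the transient length and attractor period reached from a random initial state. The parameters are arbitrary and the construction is a simplified stand-in for the fixed in-degree ensemble studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, gamma = 60, 2, 0.7          # nodes, in-degree, fraction of input-receiving nodes

receives_input = rng.random(N) < gamma
inputs = [rng.choice(N, size=K, replace=False) if receives_input[i] else None
          for i in range(N)]

def step(state):
    new = state.copy()
    for i in range(N):
        if receives_input[i]:
            new[i] = int(np.bitwise_xor.reduce(state[inputs[i]]))   # parallel XOR rule
        # nodes without inputs keep their value (frozen)
    return new

state = rng.integers(0, 2, size=N)
seen, t, s = {}, 0, state
while tuple(s) not in seen:        # iterate until a state repeats
    seen[tuple(s)] = t
    s = step(s)
    t += 1
period = t - seen[tuple(s)]
print(f"transient = {seen[tuple(s)]}, attractor period = {period}")
```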

  1. Boolean logic analysis for flow regime recognition of gas–liquid horizontal flow

    International Nuclear Information System (INIS)

    Ramskill, Nicholas P; Wang, Mi

    2011-01-01

    In order to develop a flowmeter for the accurate measurement of multiphase flows, it is of the utmost importance to correctly identify the flow regime present to enable the selection of the optimal method for metering. In this study, the horizontal flow of air and water in a pipeline was studied under a multitude of conditions using electrical resistance tomography but the flow regimes that are presented in this paper have been limited to plug and bubble air–water flows. This study proposes a novel method for recognition of the prevalent flow regime using only a fraction of the data, thus rendering the analysis more efficient. By considering the average conductivity of five zones along the central axis of the tomogram, key features can be identified, thus enabling the recognition of the prevalent flow regime. Boolean logic and frequency spectrum analysis has been applied for flow regime recognition. Visualization of the flow using the reconstructed images provides a qualitative comparison between different flow regimes. Application of the Boolean logic scheme enables a quantitative comparison of the flow patterns, thus reducing the subjectivity in the identification of the prevalent flow regime
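
    A Boolean-logic classification on the average conductivity of a few zones along the central axis can be expressed as a handful of threshold tests. The rules and thresholds in the sketch below are purely hypothetical illustrations of the idea, not the scheme used in the paper.

```python
# Purely illustrative Boolean-logic classification on five zone-averaged
# conductivities (normalised so 1.0 = water-filled; lower values mean more air).
# Thresholds and rules are hypothetical, not those derived in the study.

def classify(zones, low=0.55, high=0.90):
    z = [c < low for c in zones]                 # True where a large air fraction is present
    if all(c > high for c in zones):
        return "single-phase water"
    if z[0] and not any(z[1:]):
        return "bubble flow (air confined near the top of the pipe)"
    if all(z):
        return "plug flow (large elongated air pockets spanning the section)"
    return "undetermined / other regime"

print(classify([0.95, 0.96, 0.97, 0.96, 0.95]))
print(classify([0.40, 0.92, 0.95, 0.96, 0.97]))
print(classify([0.30, 0.35, 0.40, 0.38, 0.33]))
```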

  2. Feedback topology and XOR-dynamics in Boolean networks with varying input structure

    Science.gov (United States)

    Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.

    2009-08-01

    We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ . We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ , this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.

  3. Using Vector and Extended Boolean Matching in an Expert System for Selecting Foster Homes.

    Science.gov (United States)

    Fox, Edward A.; Winett, Sheila G.

    1990-01-01

    Describes FOCES (Foster Care Expert System), a prototype expert system for choosing foster care placements for children which integrates information retrieval techniques with artificial intelligence. The use of prototypes and queries in Prolog routines, extended Boolean matching, and vector correlation are explained, as well as evaluation by…

  4. A probabilistic model of RNA conformational space

    DEFF Research Database (Denmark)

    Frellsen, Jes; Moltke, Ida; Thiim, Martin

    2009-01-01

    The increasing importance of non-coding RNA in biology and medicine has led to a growing interest in the problem of RNA 3-D structure prediction. As is the case for proteins, RNA 3-D structure prediction methods require two key ingredients: an accurate energy function and a conformational sampling ... the discrete nature of the fragments necessitates the use of carefully tuned, unphysical energy functions, and their non-probabilistic nature impairs unbiased sampling. We offer a solution to the sampling problem that removes these important limitations: a probabilistic model of RNA structure that allows ... conformations for 9 out of 10 test structures, solely using coarse-grained base-pairing information. In conclusion, the method provides a theoretical and practical solution for a major bottleneck on the way to routine prediction and simulation of RNA structure and dynamics in atomic detail.

  5. Probabilistic insurance

    OpenAIRE

    Wakker, P.P.; Thaler, R.H.; Tversky, A.

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these preferences are intuitively appealing they are difficult to reconcile with expected utility theory. Under highly plausible assumptions about the utility function, willingness to pay for probabilistic insurance...

  6. An integrated probabilistic risk analysis decision support methodology for systems with multiple state variables

    International Nuclear Information System (INIS)

    Sen, P.; Tan, John K.G.; Spencer, David

    1999-01-01

    Probabilistic risk analysis (PRA) methods have been proven to be valuable in risk and reliability analysis. However, a weak link seems to exist between methods for analysing risks and those for making rational decisions. The integrated decision support system (IDSS) methodology presented in this paper attempts to address this issue in a practical manner. It consists of three phases: a PRA phase, a risk sensitivity analysis (SA) phase and an optimisation phase, which are implemented through an integrated computer software system. In the risk analysis phase the problem is analysed by the Boolean representation method (BRM), a PRA method that can deal with systems with multiple state variables and feedback loops. In the second phase the results obtained from the BRM are utilised directly to perform importance and risk SA. In the third phase, the problem is formulated as a multiple objective decision making problem in the form of multiple objective reliability optimisation. An industrial example is included. The resultant solutions of a five objective reliability optimisation are presented, on the basis of which rational decision making can be explored

  7. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This Two-Stage System would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
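
    One common way to realize such a two-stage conversion is a fixed rational first stage (polyphase filtering) followed by a fine fractional second stage (interpolation). The sketch below shows that structure using scipy's resample_poly and linear interpolation; the sample rates, the stage split and the interpolation choice are assumptions for illustration, not the proposed receiver design.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 80.0e6                      # hypothetical fixed input sample rate
fs_out = 3.84e6                     # desired (possibly variable) output rate

t = np.arange(8000) / fs_in
x = np.cos(2 * np.pi * 1.0e6 * t)   # test tone at 1 MHz

# Stage 1: fixed rational conversion close to the target rate (80 MHz -> 5 MHz)
stage1 = resample_poly(x, up=1, down=16)
fs1 = fs_in / 16

# Stage 2: fine fractional conversion to the exact output rate
n_out = int(len(stage1) * fs_out / fs1)
t1 = np.arange(len(stage1)) / fs1
t2 = np.arange(n_out) / fs_out
stage2 = np.interp(t2, t1, stage1)  # linear interpolation as a simple fractional stage

print(len(x), "->", len(stage1), "->", len(stage2))
```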

  8. Representing Boolean Functions by Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree named binary decision diagram or branching program is widely known in logic design [2, 40]. This representation is equivalent to other forms, and in some cases it is more compact than values table or even the formula [44]. Representing a function in the form of decision tree allows applying graph algorithms for various transformations [10]. Decision trees and branching programs are used for effective hardware [15] and software [5] implementation of functions. For the implementation to be effective, the function representation should have minimal time and space complexity. The average depth of decision tree characterizes the expected computing time, and the number of nodes in branching program characterizes the number of functional elements required for implementation. Often these two criteria are incompatible, i.e. there is no solution that is optimal on both time and space complexity. © Springer-Verlag Berlin Heidelberg 2011.
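
    A decision tree for a Boolean function can be built by Shannon expansion: test one variable, recurse on the two cofactors, and stop where the subfunction is already constant. The sketch below does this for a three-input majority function and reports the two complexity measures mentioned above, the internal node count and (a simplified) average depth over the leaves; the example function and the fixed variable order are assumptions.

```python
from itertools import product

def majority(x):
    """Example Boolean function: 3-input majority."""
    return (x[0] and x[1]) or (x[0] and x[2]) or (x[1] and x[2])

def build(f, n, fixed=()):
    """Shannon-expansion decision tree; collapse branches where f is already constant."""
    rest = n - len(fixed)
    values = {f(fixed + tail) for tail in product((0, 1), repeat=rest)}
    if len(values) == 1:
        return values.pop()                                   # leaf: constant 0 or 1
    var = len(fixed)                                          # test variables in index order
    return (var, build(f, n, fixed + (0,)), build(f, n, fixed + (1,)))

def stats(node, depth=0):
    """Return (internal node count, sum of leaf depths, leaf count)."""
    if not isinstance(node, tuple):
        return 0, depth, 1
    _, lo, hi = node
    n0, d0, l0 = stats(lo, depth + 1)
    n1, d1, l1 = stats(hi, depth + 1)
    return 1 + n0 + n1, d0 + d1, l0 + l1

tree = build(majority, 3)
nodes, depth_sum, leaves = stats(tree)
# Note: averaging over leaves is a simplified proxy for the expected computing time.
print(f"internal nodes = {nodes}, average leaf depth = {depth_sum / leaves:.2f}")
```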

  9. Graphene-based non-Boolean logic circuits

    Science.gov (United States)

    Liu, Guanxiong; Ahsan, Sonia; Khitun, Alexander G.; Lake, Roger K.; Balandin, Alexander A.

    2013-10-01

    Graphene revealed a number of unique properties beneficial for electronics. However, graphene does not have an energy band-gap, which presents a serious hurdle for its applications in digital logic gates. The efforts to induce a band-gap in graphene via quantum confinement or surface functionalization have not resulted in a breakthrough. Here we show that the negative differential resistance experimentally observed in graphene field-effect transistors of "conventional" design allows for construction of viable non-Boolean computational architectures with the gapless graphene. The negative differential resistance—observed under certain biasing schemes—is an intrinsic property of graphene, resulting from its symmetric band structure. Our atomistic modeling shows that the negative differential resistance appears not only in the drift-diffusion regime but also in the ballistic regime at the nanometer-scale—although the physics changes. The obtained results present a conceptual change in graphene research and indicate an alternative route for graphene's applications in information processing.

  10. Attractor-Based Obstructions to Growth in Homogeneous Cyclic Boolean Automata.

    Science.gov (United States)

    Khan, Bilal; Cantor, Yuri; Dombrowski, Kirk

    2015-11-01

    We consider a synchronous Boolean organism consisting of N cells arranged in a circle, where each cell initially takes on an independently chosen Boolean value. During the lifetime of the organism, each cell updates its own value by responding to the presence (or absence) of diversity amongst its two neighbours' values. We show that if all cells eventually take a value of 0 (irrespective of their initial values) then the organism necessarily has a cell count that is a power of 2. In addition, the converse is also proved: if the number of cells in the organism is a proper power of 2, then no matter what the initial values of the cells are, eventually all cells take on a value of 0 and then cease to change further. We argue that such an absence of structure in the dynamical properties of the organism implies a lack of adaptiveness, and so is evolutionarily disadvantageous. It follows that as the organism doubles in size (say from m to 2m) it will necessarily encounter an intermediate size that is a proper power of 2, and suffers from low adaptiveness. Finally we show, through computational experiments, that one way an organism can grow to more than twice its size and still avoid passing through intermediate sizes that lack structural dynamics, is for the organism to depart from assumptions of homogeneity at the cellular level.
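
    Reading the update rule as "a cell becomes 1 exactly when its two neighbours differ" (i.e. the XOR of the neighbours), the power-of-2 phenomenon can be checked by exhaustive simulation of small rings. The sketch below does this for ring sizes 2 to 10; the XOR reading of the rule is an interpretation of the abstract, and the simulation only illustrates the stated result, it does not prove it.

```python
from itertools import product

def step(cells):
    n = len(cells)
    # Each cell becomes 1 exactly when its two ring neighbours differ (XOR rule),
    # i.e. it responds to the presence of diversity amongst its neighbours' values.
    return tuple(cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n))

def always_dies(n):
    """True if every initial configuration eventually reaches the all-zero state."""
    for init in product((0, 1), repeat=n):
        s, seen = init, set()
        while any(s) and s not in seen:
            seen.add(s)
            s = step(s)
        if any(s):                      # re-entered a nonzero cycle
            return False
    return True

for n in range(2, 11):
    verdict = "all configurations reach 0" if always_dies(n) else "some configurations cycle forever"
    print(n, verdict)
```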

  11. Probabilistic information on object weight shapes force dynamics in a grip-lift task.

    Science.gov (United States)

    Trampenau, Leif; Kuhtz-Buschbeck, Johann P; van Eimeren, Thilo

    2015-06-01

    Advance information, such as object weight, size and texture, modifies predictive scaling of grip forces in a grip-lift task. Here, we examined the influence of probabilistic advance information about object weight. Fifteen healthy volunteers repeatedly grasped and lifted an object equipped with a force transducer between their thumb and index finger. Three clearly distinguishable object weights were used. Prior to each lift, the probabilities for the three object weights were given by a visual cue. We examined the effect of probabilistic pre-cues on grip and lift force dynamics. We expected predictive scaling of grip force parameters to follow predicted values calculated according to probabilistic contingencies of the cues. We observed that probabilistic cues systematically influenced peak grip and load force rates, as an index of predictive motor scaling. However, the effects of probabilistic cues on force rates were nonlinear, and anticipatory adaptations of the motor output generally seemed to overestimate high probabilities and underestimate low probabilities. These findings support the suggestion that anticipatory adaptations and force scaling of the motor system can integrate probabilistic information. However, probabilistic information seems to influence motor programs in a nonlinear fashion.

  12. Bebop to the Boolean boogie an unconventional guide to electronics

    CERN Document Server

    Maxfield, Clive

    2003-01-01

    From reviews of the first edition: "If you want to be reminded of the joy of electronics, take a look at Clive (Max) Maxfield's book Bebop to the Boolean Boogie." --Computer Design. "Lives up to its title as a useful and entertaining technical guide....well-suited for students, technical writers, technicians, and sales and marketing people." --Electronic Design. "Writing a book like this one takes audacity! ... Maxfield writes lucidly on a variety of complex topics without 'writing down' to his audience." --EDN. "A highly readable, well-illustrated guided tour

  13. Relative risk of probabilistic category learning deficits in patients with schizophrenia and their siblings

    Science.gov (United States)

    Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A; Weinberger, Daniel R.

    2010-01-01

    Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502

  14. The Concept of the "Imploded Boolean Search": A Case Study with Undergraduate Chemistry Students

    Science.gov (United States)

    Tomaszewski, Robert

    2016-01-01

    Critical thinking and analytical problem-solving skills in research involves using different search strategies. A proposed concept for an "Imploded Boolean Search" combines three unique identifiable field types to perform a search: keyword(s), numerical value(s), and a chemical structure or reaction. The object of this type of search is…

  15. Probabilistic pattern of risks in company’s Quality management system

    Directory of Open Access Journals (Sweden)

    Mager Vladimir

    2017-01-01

    Full Text Available One possible approach for the calculation of a probabilistic rate of risks in a Quality management system (QMS), as prescribed by the requirements of the International Standard ISO 9001:2015, is proposed in this article. Aspects of the theory of dependability and Markov techniques are used, as applied to the evaluation of the probability of failures in complicated technical systems. Representation of QMS processes as a graph with controlled discrete Markov chains is suggested, which allows one to evaluate the probability of customer requirement non-fulfillment as a function of the intensity of mistakes that lead to non-conformities in the QMS.
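
    A toy version of the suggested representation is a small discrete Markov chain over QMS process stages with an absorbing "customer requirement not fulfilled" state. The states, transition probabilities and mistake intensity in the sketch below are entirely hypothetical; it only illustrates how such a chain yields a probabilistic risk figure.

```python
import numpy as np

# Hypothetical discrete Markov chain over QMS process stages. States:
# 0 = in process, 1 = internal inspection, 2 = conforming delivery (absorbing),
# 3 = customer requirement not fulfilled (absorbing). The mistake intensity p
# feeds the transition probabilities; all values are illustrative only.
p = 0.05
P = np.array([
    [0.00,    1.00, 0.00,  0.00],     # work item always proceeds to inspection
    [p * 0.4, 0.00, 1 - p, p * 0.6],  # inspection: rework, pass, or escaped defect
    [0.00,    0.00, 1.00,  0.00],
    [0.00,    0.00, 0.00,  1.00],
])

start = np.array([1.0, 0.0, 0.0, 0.0])
dist = start @ np.linalg.matrix_power(P, 200)   # long-run (absorbing) distribution
print(f"P(customer requirement not fulfilled) ~ {dist[3]:.4f}")
```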

  16. Attention as Inference: Selection Is Probabilistic; Responses Are All-or-None Samples

    Science.gov (United States)

    Vul, Edward; Hanus, Deborah; Kanwisher, Nancy

    2009-01-01

    Theories of probabilistic cognition postulate that internal representations are made up of multiple simultaneously held hypotheses, each with its own probability of being correct (henceforth, "probability distributions"). However, subjects make discrete responses and report the phenomenal contents of their mind to be all-or-none states rather than…

  17. Probabilistic Mobility Models for Mobile and Wireless Networks

    DEFF Research Database (Denmark)

    Song, Lei; Godskesen, Jens Christian

    2010-01-01

    In this paper we present a probabilistic broadcast calculus for mobile and wireless networks whose connections are unreliable. In our calculus broadcasted messages can be lost with a certain probability, and due to mobility the connection probabilities may change. If a network broadcasts a message...... from a location it will evolve to a network distribution depending on whether nodes at other locations receive the message or not. Mobility of locations is not arbitrary but guarded by a probabilistic mobility function (PMF) and we also define the notion of a weak bisimulation given a PMF...

  18. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
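
    The difference between the two assumptions can be made concrete with a toy Bayesian calculation: under strong sampling the likelihood of a consistent observation scales inversely with the size of the hypothesized language (the size principle), whereas under weak sampling consistency alone leaves the prior unchanged. The hypothesis sizes and prior in the sketch below are hypothetical.

```python
# Toy illustration of the two sampling assumptions. Two hypothetical "languages":
# a restricted one licensing 10 constructions and a permissive one licensing 100,
# with the restricted language's constructions a subset of the permissive one's.
# The learner observes n constructions, all of which are consistent with both hypotheses.

def posterior_restricted(n, size_restricted=10, size_permissive=100, prior=0.5):
    # Strong sampling: each observation is drawn from the language itself, so a
    # consistent item has likelihood 1/|language| (the "size principle").
    like_r = (1.0 / size_restricted) ** n
    like_p = (1.0 / size_permissive) ** n
    post_strong = prior * like_r / (prior * like_r + (1 - prior) * like_p)

    # Weak sampling: consistency carries no size information, so the prior is unchanged.
    post_weak = prior
    return post_strong, post_weak

for n in (1, 3, 10):
    s, w = posterior_restricted(n)
    print(f"n={n:2d}  strong sampling P(restricted | data) = {s:.3f}   weak sampling = {w:.3f}")
```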

  19. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  20. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  1. Boolean decision problems with competing interactions on scale-free networks: Equilibrium and nonequilibrium behavior in an external bias

    Science.gov (United States)

    Zhu, Zheng; Andresen, Juan Carlos; Moore, M. A.; Katzgraber, Helmut G.

    2014-02-01

    We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free networks in an external bias (magnetic field). Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First, we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show that the system has a spin-glass phase in a field, i.e., exhibits a de Almeida-Thouless line. Furthermore, we study avalanche distributions when the system is driven by a field at zero temperature to test if the system displays self-organized criticality. Numerical results suggest that avalanches (damage) can spread across the whole system with nonzero probability when the decay exponent of the interaction degree is less than or equal to 2, i.e., that Boolean decision problems on scale-free networks with competing interactions can be fragile when not in thermal equilibrium.

  2. Experimental Comparison of Gains in Achievable Information Rates from Probabilistic Shaping and Digital Backpropagation for DP-256QAM/1024QAM WDM Systems

    DEFF Research Database (Denmark)

    Porto da Silva, Edson; Yankov, Metodi Plamenov; Da Ros, Francesco

    2016-01-01

    Gains in achievable information rates from probabilistic shaping and digital backpropagation are compared for WDM transmission of 5 × 10 GBd DP-256QAM/1024QAM up to 1700 km of reach. The combination of both techniques is shown to provide gains of up to ∼0.5 bits/QAM symbol...

  3. Probabilistic Structural Analysis of SSME Turbopump Blades: Probabilistic Geometry Effects

    Science.gov (United States)

    Nagpal, V. K.

    1985-01-01

    A probabilistic study was initiated to evaluate the influence of tolerances in the geometric and material properties on the structural response of turbopump blades. To complete this study, a number of important probabilistic variables were identified which are expected to affect the structural response of the blade. In addition, a methodology was developed to statistically quantify the influence of these probabilistic variables in an optimized way. The identified variables include random geometric and material properties perturbations, different loadings and a probabilistic combination of these loadings. Influences of these probabilistic variables are planned to be quantified by evaluating the blade structural response. Studies of the geometric perturbations were conducted for a flat plate geometry as well as for a space shuttle main engine blade geometry using a special purpose code which uses the finite element approach. Analyses indicate that the variances of the perturbations about given mean values have significant influence on the response.

  4. Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks

    Science.gov (United States)

    Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris

    2015-01-01

    Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.

  5. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
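
    The co-prime condition described above can be illustrated numerically. The following sketch (not the authors' code; the symbol length N and sub-sampling factor P are arbitrary assumptions) shows that when P is co-prime with the symbol duration in Nyquist samples, sampling P times below the Nyquist rate across P symbol repetitions still visits every sample phase of the symbol exactly once, so the waveform can be reassembled without loss of fidelity.

```python
import numpy as np
from math import gcd

# Hypothetical parameters: N Nyquist samples per symbol, P repetitions (assumptions).
N = 25   # symbol duration in Nyquist samples
P = 7    # sub-sampling factor / number of symbol repetitions; gcd(N, P) == 1

assert gcd(N, P) == 1, "co-prime condition required for full phase coverage"

rng = np.random.default_rng(0)
symbol = rng.standard_normal(N)          # one Nyquist-rate symbol waveform
received = np.tile(symbol, P)            # P identical repetitions of the symbol

# Sample P times below the Nyquist rate: keep every P-th Nyquist instant.
slow_samples = received[::P]             # exactly N samples collected over P symbols

# Each slow sample lands on a distinct phase k*P mod N of the symbol.
phases = (np.arange(N) * P) % N
reconstructed = np.empty(N)
reconstructed[phases] = slow_samples

print(np.allclose(reconstructed, symbol))  # True: no loss of fidelity
```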

  6. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.

  7. On the number of different dynamics in Boolean networks with deterministic update schedules.

    Science.gov (United States)

    Aracena, J; Demongeot, J; Fanchon, E; Montalva, M

    2013-04-01

    Deterministic Boolean networks are a type of discrete dynamical systems widely used in the modeling of genetic networks. The dynamics of such systems is characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of knowing the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient for guaranteeing distinct dynamics of a network. In [1] the authors define equivalence classes which have the property that all the update schedules of a given class yield the same dynamics. In order to determine the dynamics associated with a network, we develop an algorithm to efficiently enumerate the above equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
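
    As a minimal illustration of how the update schedule alone can change the dynamics of a deterministic Boolean network (a toy sketch, not the algorithm of the paper), the following two-node network produces different trajectories under parallel and sequential updating.

```python
# Toy sketch: the same Boolean network (each node copies the negation of the
# other) visits different state sequences under different update schedules.

def parallel_step(state):
    x, y = state
    return (not y, not x)          # both nodes updated simultaneously

def sequential_step(state):
    x, y = state
    x = not y                      # node 1 updated first...
    y = not x                      # ...node 2 then sees the new value of node 1
    return (x, y)

def trajectory(step, state, n=4):
    traj = [state]
    for _ in range(n):
        state = step(state)
        traj.append(state)
    return traj

s0 = (True, True)
print("parallel:  ", trajectory(parallel_step, s0))    # oscillates between (T,T) and (F,F)
print("sequential:", trajectory(sequential_step, s0))  # reaches the fixed point (F, T)
```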

  8. Online probabilistic operational safety assessment of multi-mode engineering systems using Bayesian methods

    International Nuclear Information System (INIS)

    Lin, Yufei; Chen, Maoyin; Zhou, Donghua

    2013-01-01

    In the past decades, engineering systems have become more and more complex, and generally work at different operational modes. Since incipient faults can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, the existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which does not hold for practical cases. This paper proposes a probabilistic framework of online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize multiple operating modes. Then, based on the definition of safety index (SI), the SI for one single mode is calculated. At last, the Bayesian method is presented to calculate the posterior probabilities belonging to each operating mode with sample dependency. The proposed assessment strategy is applied in two examples: one is the aircraft gas turbine, the other is an industrial dryer. Both examples illustrate the efficiency of the proposed method.
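
    A hedged sketch of this kind of recursive Bayesian mode assessment is given below; the two-mode Gaussian mixture, the transition matrix modelling sample dependency, and all numerical values are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch (not the paper's exact formulation): recursive Bayesian update of
# the posterior probability of each operating mode, where the prior for the
# current sample depends on the previous posterior through a transition matrix,
# instead of assuming independent samples.

means, stds = np.array([0.0, 5.0]), np.array([1.0, 1.5])   # assumed two-mode GMM
T = np.array([[0.95, 0.05],                                 # mode persistence models
              [0.05, 0.95]])                                # the sample dependency

posterior = np.array([0.5, 0.5])                            # initial mode belief
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 5), rng.normal(5.0, 1.5, 5)])

for x in data:
    prior = T.T @ posterior                                  # propagate previous belief
    like = norm.pdf(x, means, stds)                          # per-mode likelihoods
    posterior = prior * like
    posterior /= posterior.sum()                             # Bayes normalisation
    print(f"x = {x:6.2f}  P(mode 1) = {posterior[0]:.3f}  P(mode 2) = {posterior[1]:.3f}")
```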

  9. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    A search engine is a Web-page retrieval tool. Nowadays Web searchers make good use of their time by using an efficient search engine. To improve the performance of the search engine, we introduce a unique mechanism which gives Web searchers more prominent search results. In this paper, we discuss a domain-specific Web search prototype which generates the predicted Web-page list for a user-given search string using a Boolean bit mask.

  10. On Probabilistic Alpha-Fuzzy Fixed Points and Related Convergence Results in Probabilistic Metric and Menger Spaces under Some Pompeiu-Hausdorff-Like Probabilistic Contractive Conditions

    OpenAIRE

    De la Sen, M.

    2015-01-01

    In the framework of complete probabilistic metric spaces and, in particular, in probabilistic Menger spaces, this paper investigates some relevant properties of convergence of sequences to probabilistic α-fuzzy fixed points under some types of probabilistic contractive conditions.

  11. Probabilistic Insurance

    NARCIS (Netherlands)

    Wakker, P.P.; Thaler, R.H.; Tversky, A.

    1997-01-01

    Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in premium to compensate for a 1% default risk. These observations cannot be

  12. Probabilistic Insurance

    NARCIS (Netherlands)

    P.P. Wakker (Peter); R.H. Thaler (Richard); A. Tversky (Amos)

    1997-01-01

    textabstractProbabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these

  13. Complete CCC Boolean Algebras, the order Sequential Topology, and a Problem of von Neumann

    Czech Academy of Sciences Publication Activity Database

    Balcar, Bohuslav; Jech, Thomas; Pazák, Tomáš

    2005-01-01

    Roč. 37, č. 6 (2005), s. 885-898 ISSN 0024-6093 R&D Projects: GA ČR(CZ) GA201/02/0857; GA ČR(CZ) GA201/03/0933 Institutional research plan: CEZ:AV0Z10190503 Keywords : Boolean algebra * Maharam submeasure * weak distributivity Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  14. A generative, probabilistic model of local protein structure

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Mardia, Kanti V.; Taylor, Charles C.

    2008-01-01

    Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative...... conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state...

  15. State-Dependent Implication and Equivalence in Quantum Logic

    Directory of Open Access Journals (Sweden)

    Fedor Herbut

    2012-01-01

    Full Text Available Ideal occurrence of an event (projector) leads to the known change of a state (density operator) into the Lüders state. It is shown that two events and give the same Lüders state if and only if the equivalence relation is valid. This relation determines equivalence classes. The set of them and each class, are studied in detail. It is proved that the range projector of the Lüders state can be evaluated as , where denotes the greatest lower bound, and is the null projector of . State-dependent implication extends absolute implication (which, in turn, determines the entire structure of quantum logic). and are investigated in a closely related way to mutual benefit. Inherent in the preorder is the state-dependent equivalence , defining equivalence classes in a given Boolean subalgebra. The quotient set, in which the classes are the elements, has itself a partially ordered structure, and so has each class. In a complete Boolean subalgebra, both structures are complete lattices. Physical meanings are discussed.

  16. The development and use of parametric sampling techniques for probabilistic risk assessment

    International Nuclear Information System (INIS)

    Dalrymple, G.J.; Broyd, T.W.

    1987-01-01

    In order to enable evaluation to be made of proposals for the underground disposal of low and intermediate level radioactive wastes in the United Kingdom, the Department of the Environment (DoE) research programme includes development of computer-based methods for use in a multistage assessment process. To test the adequacy of the various methods of data acquisition and radiological assessment, a mock assessment exercise is currently being conducted by the department. This paper outlines the proposed methodology which provides for the use of probabilistic modelling based upon the Atomic Energy of Canada Ltd SYVAC variability analysis approach using new models (SYVAC 'A') and data appropriate to UK conditions for a deep horizontal tunnel repository concept. This chapter describes the choice of a suitable technique for the sampling of data input to the SYVAC 'A' model and techniques for analysing the predictions of dose and risk made by the model. The sensitivity of the model predictions (risk and dose to man) to the input parameters was compared for four different methods. All four methods identified the same geological parameters as the most important. (author)

  17. Concentration dependence of fluorine impurity spin-lattice relaxation rate in bone mineral

    International Nuclear Information System (INIS)

    Code, R.F.; Armstrong, R.L.; Cheng, P.-T.

    1992-01-01

    The concentration dependence of the fluoride ion spin-lattice relaxation rate has been observed by nuclear magnetic resonance experiments on samples of defatted and dried bone. The 19 F spin-lattice relaxation rates increased linearly with bone fluoride concentration. Different results were obtained from trabecular than from cortical bone. For the same macroscopic fluoride content per gram of bone calcium, relaxation rate is significantly faster in cortical bone. Relaxation rates in cortical bone samples prepared from rats and dogs were apparently controlled by the same species-independent processes. For samples from beagle dogs, bulk fluoride concentrations measured by neutron activation analysis were 3.1±0.3 times greater in trabecular bone than in corresponding cortical bone. The beagle spin-lattice relaxation data suggest that microscopic fluoride concentrations in bone mineral were 1.8±0.4 times greater in trabecular bone than in cortical bone. It is concluded that accumulation of fluoride impurities in bone mineral is non-uniform. (author)

  18. Probabilistic assessment of calcium carbonate export and dissolution in the modern ocean

    OpenAIRE

    Battaglia Gianna; Steinacher Marco; Joos Fortunat

    2016-01-01

    The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo sche...

  19. Boolean Algebra Application in Analysis of Flight Accidents

    Directory of Open Access Journals (Sweden)

    Casandra Venera BALAN

    2015-12-01

    Full Text Available Fault tree analysis is a deductive approach for resolving an undesired event into its causes, identifying the causes of a failure and providing a framework for a qualitative and quantitative evaluation of the top event. An alternative to the classical fault tree calculus works with logical expressions and is based on a graphical data structure, the logic-based binary decision diagram. In this analysis, such diagrams are reduced to a minimal size and arranged so that the variables appear in the same order in each path. An event can be defined as a statement that can be true or false. Therefore, Boolean algebra rules allow restructuring of a Fault Tree into one equivalent to it, but simpler.
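
    In the spirit of the Boolean restructuring described above, the short sketch below uses the sympy library to reduce a toy top-event expression to a simpler equivalent form; the basic events A, B, C and the tree structure are purely illustrative assumptions.

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

# Hypothetical basic events of a small fault tree (illustrative only).
A, B, C = symbols('A B C')

# Top event: (A AND B) OR (A AND B AND C) OR (A AND NOT B AND B).
# The last term is logically impossible and the second is absorbed by the first.
top_event = (A & B) | (A & B & C) | (A & ~B & B)

minimal = simplify_logic(top_event, form='dnf')
print(minimal)   # A & B  -- the minimal cut set of this toy tree
```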

  20. Unlimited multistability and Boolean logic in microbial signalling

    DEFF Research Database (Denmark)

    Kothamachu, Varun B; Feliu, Elisenda; Cardelli, Luca

    2015-01-01

    The ability to map environmental signals onto distinct internal physiological states or programmes is critical for single-celled microbes. A crucial systems dynamics feature underpinning such ability is multistability. While unlimited multistability is known to arise from multi-site phosphorylation...... seen in the signalling networks of eukaryotic cells, a similarly universal mechanism has not been identified in microbial signalling systems. These systems are generally known as two-component systems comprising histidine kinase (HK) receptors and response regulator proteins engaging in phosphotransfer...... further prove that sharing of downstream components allows a system with n multi-domain hybrid HKs to attain 3n steady states. We find that such systems, when sensing distinct signals, can readily implement Boolean logic functions on these signals. Using two experimentally studied examples of two...

  1. Incorporating psychological influences in probabilistic cost analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kujawski, Edouard; Alvaro, Mariana; Edwards, William

    2004-01-08

    Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world "Money Allocated Is Money Spent" (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. Psychological influences such as overconfidence in assessing uncertainties and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for
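
    The following sketch illustrates the general idea of a Monte Carlo cost roll-up with correlated three-parameter Weibull cost elements and a crude budget-allocation (MAIMS-style) rule; it is not the authors' model, and all distributions, budgets and correlations are invented for illustration.

```python
import numpy as np
from scipy.stats import norm, weibull_min

# Hedged sketch (not the paper's model): Monte Carlo total-cost analysis with
# correlated cost elements and a crude "Money Allocated Is Money Spent" rule,
# under which an element never costs less than its allocated budget.

rng = np.random.default_rng(42)
n_sim = 100_000

# Three hypothetical cost elements: three-parameter Weibull marginals and budgets.
shapes  = np.array([2.0, 1.8, 2.5])
locs    = np.array([8.0, 4.0, 10.0])
scales  = np.array([4.0, 3.0, 6.0])
budgets = np.array([10.0, 6.0, 13.0])

# Correlation among elements, imposed through a Gaussian copula.
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])
z = rng.multivariate_normal(np.zeros(3), corr, size=n_sim)
u = norm.cdf(z)                                          # correlated uniforms
costs = weibull_min.ppf(u, shapes, loc=locs, scale=scales)

ideal_total = costs.sum(axis=1)                          # underruns offset overruns
maims_total = np.maximum(costs, budgets).sum(axis=1)     # underruns are not recovered

for name, total in [("ideal", ideal_total), ("MAIMS", maims_total)]:
    print(f"{name:5s}: mean = {total.mean():6.2f}, 80th percentile = {np.percentile(total, 80):6.2f}")
```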

  2. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    Science.gov (United States)

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes, which occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly underestimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes. Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude

  3. Integration of Probabilistic Exposure Assessment and Probabilistic Hazard Characterization

    NARCIS (Netherlands)

    Voet, van der H.; Slob, W.

    2007-01-01

    A method is proposed for integrated probabilistic risk assessment where exposure assessment and hazard characterization are both included in a probabilistic way. The aim is to specify the probability that a random individual from a defined (sub)population will have an exposure high enough to cause a

  4. Probabilistic liver atlas construction.

    Science.gov (United States)

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E

    2017-01-13

    Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability to be covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration in the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.

  5. Probabilistic Networks

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Lauritzen, Steffen Lilholt

    2001-01-01

    This article describes the basic ideas and algorithms behind specification and inference in probabilistic networks based on directed acyclic graphs, undirected graphs, and chain graphs.

  6. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
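
    The contrast between simple random sampling and Latin hypercube sampling mentioned above can be demonstrated with a small sketch (illustrative only; a basic Latin hypercube construction is coded directly rather than taken from a library).

```python
import numpy as np

# Illustrative sketch (not from the paper): simple random sampling versus a
# basic Latin hypercube sample for two uncertain parameters on [0, 1]^2.

def latin_hypercube(n, dim, rng):
    """One point in each of the n equal-probability strata of every dimension."""
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n   # stratified uniforms
    for d in range(dim):
        u[:, d] = rng.permutation(u[:, d])                     # decouple the dimensions
    return u

rng = np.random.default_rng(7)
n = 10
srs = rng.random((n, 2))                 # simple random sample
lhs = latin_hypercube(n, 2, rng)

# Every one of the n strata along each axis is hit exactly once by the LHS.
print(np.sort((lhs[:, 0] * n).astype(int)))   # [0 1 2 ... 9]
print(np.sort((srs[:, 0] * n).astype(int)))   # typically shows repeats and gaps
```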

  7. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    Science.gov (United States)

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705

  8. Recurrent Neural Network Based Boolean Factor Analysis and its Application to Word Clustering

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2009-01-01

    Roč. 20, č. 7 (2009), s. 1073-1086 ISSN 1045-9227 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.889, year: 2009

  9. Probabilistically-Cued Patterns Trump Perfect Cues in Statistical Language Learning.

    Science.gov (United States)

    Lany, Jill; Gómez, Rebecca L

    2013-01-01

    Probabilistically-cued co-occurrence relationships between word categories are common in natural languages but difficult to acquire. For example, in English, determiner-noun and auxiliary-verb dependencies both involve co-occurrence relationships, but determiner-noun relationships are more reliably marked by correlated distributional and phonological cues, and appear to be learned more readily. We tested whether experience with co-occurrence relationships that are more reliable promotes learning those that are less reliable using an artificial language paradigm. Prior experience with deterministically-cued contingencies did not promote learning of less reliably-cued structure, nor did prior experience with relationships instantiated in the same vocabulary. In contrast, prior experience with probabilistically-cued co-occurrence relationships instantiated in different vocabulary did enhance learning. Thus, experience with co-occurrence relationships sharing underlying structure but not vocabulary may be an important factor in learning grammatical patterns. Furthermore, experience with probabilistically-cued co-occurrence relationships, despite their difficulty for naïve learners, lays an important foundation for learning novel probabilistic structure.

  10. Attaining the rate-independent limit of a rate-dependent strain gradient plasticity theory

    DEFF Research Database (Denmark)

    El-Naaman, Salim Abdallah; Nielsen, Kim Lau; Niordson, Christian Frithiof

    2016-01-01

    The existence of characteristic strain rates in rate-dependent material models, corresponding to rate-independent model behavior, is studied within a back stress based rate-dependent higher order strain gradient crystal plasticity model. Such characteristic rates have recently been observed...... for steady-state processes, and the present study aims to demonstrate that the observations in fact unearth a more widespread phenomenon. In this work, two newly proposed back stress formulations are adopted to account for the strain gradient effects in the single slip simple shear case, and characteristic...... rates for a selected quantity are identified through numerical analysis. Evidently, the concept of a characteristic rate, within the rate-dependent material models, may help unlock an otherwise inaccessible parameter space....

  11. Probabilistic Equilibrium Sampling of Protein Structures from SAXS Data and a Coarse Grained Debye Formula

    DEFF Research Database (Denmark)

    Andreetta, Christian

    The present work describes the design and the implementation of a protocol for arbitrary precision computation of Small Angle X-ray Scattering (SAXS) profiles, and its inclusion in a probabilistic framework for protein structure determination. This protocol identifies a set of maximum-likelihood estimators for the form factors employed in the Debye formula, a theoretical forward model for SAXS profiles. The resulting computation compares favorably with the state of the art tool in the field, the program CRYSOL in the suite ATSAS. A faster, parallel implementation on Graphical Processor Units (GPUs)...... of protein structures all fitting the experimental data. For the first time, we describe in full atomic detail a set of different conformations attainable by flexible polypeptides in solution. This method is not limited by assumptions in shape or size of the samples. It allows therefore to investigate...

  12. Multiaxial probabilistic elastic-plastic constitutive simulations of soils

    Science.gov (United States)

    Sadrinezhad, Arezoo

    The Fokker-Planck-Kolmogorov (FPK) equation approach has recently been developed to simulate elastic-plastic constitutive behaviors of materials with uncertain material properties. The FPK equation approach transforms the stochastic constitutive rate equation, which is a stochastic, nonlinear, ordinary differential equation (ODE) in stress-pseudo-time space, into a second-order accurate, deterministic, linear FPK partial differential equation (PDE) for the probability density in stress-pseudo-time space. This approach does not suffer from the drawbacks of the traditional approaches, such as the Monte Carlo approach and the perturbation approach, for solving nonlinear ODEs with random coefficients. In this study, the existing one-dimensional FPK framework for probabilistic constitutive modeling of soils is extended to multiple dimensions. However, the multivariate FPK PDEs cannot be solved using traditional mathematical techniques such as finite difference techniques due to their high computational cost. Therefore, computationally efficient algorithms based on the Fourier spectral approach are developed for solving a class of FPK PDEs that arises in probabilistic elasto-plasticity. This class includes linear FPK PDEs in (stress) space and (pseudo) time - having space-independent but time-dependent, and both space- and time-dependent coefficients - with impulse initial conditions and reflecting boundary conditions. The solution algorithms rely on first mapping the stress space of the governing PDE between 0 and 2π using the change of coordinates rule, followed by approximating the solution of the PDE in the 2π-periodic domain by a finite Fourier series in the stress space with unknown time-dependent solution coefficients. Finally, the time-dependent solution coefficients are obtained from the initial condition. The accuracy and efficiency of the developed algorithms are tested. The developed algorithms are used to simulate uniaxial and multiaxial, monotonic and cyclic

  13. Probabilistic Analysis of Failures Mechanisms of Large Dams

    NARCIS (Netherlands)

    Shams Ghahfarokhi, G.

    2014-01-01

    Risk and reliability analysis is presently being performed in almost all fields of engineering depending upon the specific field and its particular area. Probabilistic risk analysis (PRA), also called quantitative risk analysis (QRA) is a central feature of hydraulic engineering structural design.

  14. Probabilistic assessment for a sample to be radioactive or not. Application to radioxenon analysis

    International Nuclear Information System (INIS)

    Vivier, A.; Le Petit, G.; Pigeon, B.; Blanchard, X.

    2009-01-01

    When a net count value is below the type 1 error critical limit, it is customary to declare that the activity is 'below the detection limit'. This declaration carries very little information: it cannot, for example, discriminate between a net measurement just below the critical limit, but positive, and a negative net measurement, two outcomes that intuitively should not carry the same weight of information. In the case of a spectral measurement of 131m Xe and 133m Xe, certain information is available from the various X and gamma emissions, which might all be below their respective critical limits. We shall see that a Bayesian probabilistic approach can be used, without considering the critical limits, to obtain anti-correlated maximum likelihood values taking all the information into account jointly, and to obtain powerful and pertinent information in the form of the absolute probability that the sample contains 131m Xe and/or 133m Xe, all possible activity values combined. Conversely, of course, this is used immediately to deduce the probability that the sample does not contain 131m Xe and/or 133m Xe. This information enables the customary critical limit to be ignored. (author)
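
    A minimal sketch of such a Bayesian assessment, under an assumed prior and a simplified Gaussian counting model (these assumptions are mine, not the paper's), computes the absolute probability that a sample is active from its net count, with no reference to a critical limit.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of a Bayesian "is the sample active?" assessment, loosely in the
# spirit of the record above but with an assumed model: the net count is Gaussian
# around the true activity, the prior gives probability 0.5 to "no activity" and
# spreads the rest uniformly over positive activities up to a_max.

sigma  = 5.0        # assumed std. dev. of the net count
a_max  = 50.0       # assumed upper bound of the positive-activity prior
prior_zero = 0.5    # prior probability that the sample is not radioactive

def prob_active(net_count):
    a_grid = np.linspace(0.0, a_max, 2001)[1:]
    # Average likelihood over an equally spaced grid, approximating the marginal
    # likelihood under the flat prior on (0, a_max].
    like_active = norm.pdf(net_count, a_grid, sigma).mean()
    like_zero = norm.pdf(net_count, 0.0, sigma)      # likelihood at zero activity
    post = (1 - prior_zero) * like_active
    return post / (post + prior_zero * like_zero)

for net in (-4.0, 0.0, 6.0, 15.0):   # net counts below and above a typical critical limit
    print(f"net = {net:5.1f}  P(sample is active) = {prob_active(net):.3f}")
```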

  15. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    Science.gov (United States)

    Wang, Weiguang; Fu, Jianyu

    2018-02-01

    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining society development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic condition. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range under the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.
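
    A small sketch of the copula construction used in such a bivariate analysis is given below; the marginal distributions of ω and NDVI and the correlation value are invented for illustration and are not the calibrated distributions of the study.

```python
import numpy as np
from scipy.stats import norm, lognorm, beta

# Hedged sketch of a copula-based joint model with made-up marginals: parameter
# omega is lognormal, NDVI is Beta, and their dependence is imposed through a
# Gaussian copula with an assumed correlation.

rho = 0.6
rng = np.random.default_rng(5)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
u = norm.cdf(z)                                   # correlated uniforms (the copula)

omega = lognorm.ppf(u[:, 0], s=0.4, scale=2.0)    # assumed marginal of omega
ndvi = beta.ppf(u[:, 1], a=4, b=2)                # assumed marginal of NDVI

print(f"sample correlation(omega, NDVI) = {np.corrcoef(omega, ndvi)[0, 1]:.2f}")
```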

  16. Probabilistic Logical Characterization

    DEFF Research Database (Denmark)

    Hermanns, Holger; Parma, Augusto; Segala, Roberto

    2011-01-01

    Probabilistic automata exhibit both probabilistic and non-deterministic choice. They are therefore a powerful semantic foundation for modeling concurrent systems with random phenomena arising in many applications ranging from artificial intelligence, security, systems biology to performance...... modeling. Several variations of bisimulation and simulation relations have proved to be useful as means to abstract and compare different automata. This paper develops a taxonomy of logical characterizations of these relations on image-finite and image-infinite probabilistic automata....

  17. Probabilistic metric spaces

    CERN Document Server

    Schweizer, B

    2005-01-01

    Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.

  18. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
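
    The linear minimum mean squared error (LMMSE) estimator mentioned above as a benchmark can be sketched for a generic linear observation model; the dimensions, channel covariance and noise level below are illustrative assumptions, not the UWB system parameters.

```python
import numpy as np

# Hedged sketch of an LMMSE estimate for y = A h + n with known second-order
# statistics; all numbers are illustrative placeholders.

rng = np.random.default_rng(2)
n_taps, n_obs = 8, 16
A = rng.standard_normal((n_obs, n_taps))          # known observation matrix
R_h = np.diag(np.exp(-0.3 * np.arange(n_taps)))   # assumed channel covariance
sigma2 = 0.1                                      # assumed noise variance

h = rng.multivariate_normal(np.zeros(n_taps), R_h)
y = A @ h + np.sqrt(sigma2) * rng.standard_normal(n_obs)

# LMMSE: h_hat = R_h A^T (A R_h A^T + sigma2 I)^{-1} y
G = R_h @ A.T @ np.linalg.inv(A @ R_h @ A.T + sigma2 * np.eye(n_obs))
h_hat = G @ y

ls = np.linalg.lstsq(A, y, rcond=None)[0]         # least-squares for comparison
print(f"LMMSE error: {np.linalg.norm(h - h_hat):.3f}")
print(f"LS    error: {np.linalg.norm(h - ls):.3f}")
```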

  19. Probabilistic record linkage.

    Science.gov (United States)

    Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona

    2016-06-01

    Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a 'black box' research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting matched weights and how to convert matched weights into posterior probabilities of a match using Bayes theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. © The Author 2015; Published by Oxford University Press on behalf of the International Epidemiological Association.
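
    The weight calculation and Bayes-theorem conversion described above can be sketched as follows; the m- and u-probabilities, the compared fields and the prior odds are all assumed values for illustration.

```python
import numpy as np

# Hedged sketch of probabilistic record linkage weights: m is the probability a
# field agrees for true matches, u the probability it agrees for non-matches.
# The field-specific m/u values and the prior below are assumptions.

fields = ["surname", "birth year", "postcode"]
m = np.array([0.95, 0.90, 0.80])
u = np.array([0.01, 0.05, 0.10])

agree = np.array([True, True, False])        # observed comparison of one record pair

# Agreement contributes log2(m/u); disagreement contributes log2((1-m)/(1-u)).
weights = np.where(agree, np.log2(m / u), np.log2((1 - m) / (1 - u)))
total_weight = weights.sum()

prior_match = 1e-3                            # assumed prior probability of a match
prior_odds = prior_match / (1 - prior_match)
posterior_odds = prior_odds * 2.0 ** total_weight   # likelihood ratio = 2^weight
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"field weights  : {np.round(weights, 2)}")
print(f"total weight   : {total_weight:.2f}")
print(f"P(match | data): {posterior_prob:.3f}")
```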

  20. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2015-01-01

    Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards

  1. Security analysis of boolean algebra based on Zhang-Wang digital signature scheme

    International Nuclear Information System (INIS)

    Zheng, Jinbin

    2014-01-01

    In 2005, Zhang and Wang proposed an improved signature scheme that does not use a one-way hash function or message redundancy. In this paper, we show, through an analysis based on Boolean algebra (such as bitwise exclusive-or), that this scheme exhibits potential safety concerns, and we point out, from an analysis of the resulting assembly program segment, that the mapping between assembly instructions and machine code is in fact not one to one, which possibly causes safety problems unknown to the software

  2. Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras

    OpenAIRE

    Andrade, Jefferson O.; Kameyama, Yukiyoshi

    2012-01-01

    Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In thi...

  3. Security analysis of boolean algebra based on Zhang-Wang digital signature scheme

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Jinbin, E-mail: jbzheng518@163.com [School of Mathematics and Computer Science, Long Yan University, Longyan 364012 (China)

    2014-10-06

    In 2005, Zhang and Wang proposed an improved signature scheme that does not use a one-way hash function or message redundancy. In this paper, we show, through an analysis based on Boolean algebra (such as bitwise exclusive-or), that this scheme exhibits potential safety concerns, and we point out, from an analysis of the resulting assembly program segment, that the mapping between assembly instructions and machine code is in fact not one to one, which possibly causes safety problems unknown to the software.

  4. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
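
    For context, the sketch below implements the standard Benjamini-Hochberg step-up procedure on which FDR control of this kind rests; it is not the authors' two-stage design, and the p-values are invented.

```python
import numpy as np

# Hedged sketch: the standard Benjamini-Hochberg step-up procedure controlling
# the false discovery rate at level q.

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranked = p[order]
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = ranked <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest i with p_(i) <= q*i/m
        rejected[order[:k + 1]] = True        # reject all hypotheses up to rank k
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.74]
print(benjamini_hochberg(pvals, q=0.05))      # only the two smallest are rejected
```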

  5. Probabilistic Steady-State Operation and Interaction Analysis of Integrated Electricity, Gas and Heating Systems

    Directory of Open Access Journals (Sweden)

    Lun Yang

    2018-04-01

    Full Text Available The existing studies on probabilistic steady-state analysis of integrated energy systems (IES) are limited to integrated electricity and gas networks or integrated electricity and heating networks. This paper proposes a probabilistic steady-state analysis of integrated electricity, gas and heating networks (EGH-IES). Four typical operation modes of an EGH-IES are presented at first. The probabilistic energy flow problem of the EGH-IES, considering its operation modes and correlated uncertainties in wind/solar power and electricity/gas/heat loads, is then formulated and solved by the Monte Carlo method based on Latin hypercube sampling and Nataf transformation. Numerical simulations are conducted on a sample EGH-IES working in the “electricity/gas following heat” mode to verify the probabilistic analysis proposed in this paper and to study the effects of uncertainties and correlations on the operation of the EGH-IES, especially uncertainty transmissions among the subnetworks.

  6. Disjunctive Probabilistic Modal Logic is Enough for Bisimilarity on Reactive Probabilistic Systems

    OpenAIRE

    Bernardo, Marco; Miculan, Marino

    2016-01-01

    Larsen and Skou characterized probabilistic bisimilarity over reactive probabilistic systems with a logic including true, negation, conjunction, and a diamond modality decorated with a probabilistic lower bound. Later on, Desharnais, Edalat, and Panangaden showed that negation is not necessary to characterize the same equivalence. In this paper, we prove that the logical characterization holds also when conjunction is replaced by disjunction, with negation still being not necessary. To this e...

  7. Probabilistic Tsunami Hazard Analysis

    Science.gov (United States)

    Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.

    2006-12-01

    The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become a standard practice in the evaluation and mitigation of seismic hazard to populations in particular with respect to structures, infrastructure and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates the design of effective seismic resistant buildings but also the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunami. There are great advantages of implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on the traditional PSHA and therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use full tsunami waveform computations in lieu of attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip). This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes
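
    The Green's function summation described above amounts to a slip-weighted sum of precomputed unit-slip waveforms, as in the following sketch (all waveforms and slip values are invented placeholders rather than outputs of a tsunami propagation code).

```python
import numpy as np

# Hedged sketch of Green's-function summation for tsunami waveforms: with unit
# responses precomputed for each subfault, the waveform for any slip distribution
# is a slip-weighted sum. All numbers below are illustrative placeholders.

n_subfaults, n_time = 4, 200
t = np.linspace(0, 2000, n_time)                  # time in seconds

# Stand-ins for precomputed unit-slip waveforms of each subfault at one coastal
# location (in practice these come from a full tsunami propagation code).
greens = np.array([np.exp(-((t - 400 - 150 * k) / 120.0) ** 2) for k in range(n_subfaults)])

slip = np.array([2.0, 4.0, 1.0, 0.5])             # metres of slip on each subfault

waveform = slip @ greens                           # slip-weighted sum of unit responses
print(f"peak coastal amplitude: {waveform.max():.2f} (arbitrary units)")
```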

  8. Convexity, gauge-dependence and tunneling rates

    Energy Technology Data Exchange (ETDEWEB)

    Plascencia, Alexis D.; Tamarit, Carlos [Institute for Particle Physics Phenomenology, Durham University,South Road, DH1 3LE (United Kingdom)

    2016-10-19

    We clarify issues of convexity, gauge-dependence and radiative corrections in relation to tunneling rates. Despite the gauge dependence of the effective action at zero and finite temperature, it is shown that tunneling and nucleation rates remain independent of the choice of gauge-fixing. Taking as a starting point the functional that defines the transition amplitude from a false vacuum onto itself, it is shown that decay rates are exactly determined by a non-convex, false vacuum effective action evaluated at an extremum. The latter can be viewed as a generalized bounce configuration, and gauge-independence follows from the appropriate Nielsen identities. This holds for any choice of gauge-fixing that leads to an invertible Faddeev-Popov matrix.

  9. Convexity, gauge-dependence and tunneling rates

    International Nuclear Information System (INIS)

    Plascencia, Alexis D.; Tamarit, Carlos

    2016-01-01

    We clarify issues of convexity, gauge-dependence and radiative corrections in relation to tunneling rates. Despite the gauge dependence of the effective action at zero and finite temperature, it is shown that tunneling and nucleation rates remain independent of the choice of gauge-fixing. Taking as a starting point the functional that defines the transition amplitude from a false vacuum onto itself, it is shown that decay rates are exactly determined by a non-convex, false vacuum effective action evaluated at an extremum. The latter can be viewed as a generalized bounce configuration, and gauge-independence follows from the appropriate Nielsen identities. This holds for any choice of gauge-fixing that leads to an invertible Faddeev-Popov matrix.

  10. Variances as order parameter and complexity measure for random Boolean networks

    International Nuclear Information System (INIS)

    Luque, Bartolo; Ballesteros, Fernando J; Fernandez, Manuel

    2005-01-01

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems

  11. Variances as order parameter and complexity measure for random Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Luque, Bartolo [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Ballesteros, Fernando J [Observatori Astronomic, Universitat de Valencia, Ed. Instituts d' Investigacio, Pol. La Coma s/n, E-46980 Paterna, Valencia (Spain); Fernandez, Manuel [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain)

    2005-02-04

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems.
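
    A minimal sketch of the temporal-variance order parameter for a random Boolean network is given below (a simplified illustration, not the authors' implementation); averaged over nodes, the variance of the node states over time is typically small in the ordered regime (K = 1) and larger in the disordered regime (K = 3).

```python
import numpy as np

# Hedged sketch: simulate a random Boolean network with connectivity K and
# measure the temporal variance of node states, averaged over nodes. K = 1
# (ordered) usually gives a lower value than K = 3 (disordered) for most
# random realisations; exact values depend on the realisation.

def temporal_variance(N=200, K=2, t_transient=200, t_measure=200, seed=0):
    rng = np.random.default_rng(seed)
    inputs = rng.integers(0, N, size=(N, K))            # K random inputs per node
    rules = rng.integers(0, 2, size=(N, 2 ** K))        # random Boolean functions
    state = rng.integers(0, 2, size=N)
    powers = 2 ** np.arange(K)
    history = []
    for t in range(t_transient + t_measure):
        idx = (state[inputs] * powers).sum(axis=1)      # encode each node's inputs
        state = rules[np.arange(N), idx]                 # synchronous update
        if t >= t_transient:
            history.append(state.copy())
    history = np.array(history)
    return history.var(axis=0).mean()                    # variance over time, mean over nodes

for K in (1, 2, 3):
    print(f"K = {K}: temporal variance = {temporal_variance(K=K):.3f}")
```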

  12. The development and application of an integrated radiological risk assessment procedure using time-dependent probabilistic risk analysis

    International Nuclear Information System (INIS)

    Laurens, J.M.; Thompson, B.G.J.; Sumerling, T.J.

    1990-01-01

    During the past decade, the UKDoE has funded the development of an integrated assessment procedure centred around probabilistic risk analysis (p.r.a.) using Monte Carlo simulation techniques to account for the effects of parameter value uncertainty, including those associated with temporal changes in the environment over a postclosure period of about one million years. The influence of these changes can now be incorporated explicitly into the p.r.a. simulator VANDAL (Variability ANalysis of Disposal ALternatives) briefly described here. Although a full statistically converged time-dependent p.r.a. will not be demonstrated until the current Dry Run 3 trial is complete, illustrative examples are given showing the ability of VANDAL to represent spatially complex groundwater and repository systems evolving under the influence of climatic change. 18 refs., 10 figs., 1 tab

  13. Von-Neumann and Beyond: Memristor Architectures

    KAUST Repository

    Naous, Rawan

    2017-01-01

    and probabilistic analysis for Boolean logic operators and correspondingly incorporate them into arithmetic blocks. Gate- and system-level accuracy of operation is presented to convey configurability and the different effects that the unreliability of the underlying

  14. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Polyakov, P.Y.

    2016-01-01

    Roč. 27, č. 3 (2016), s. 538-550 ISSN 2162-237X R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:67985807 Keywords : associative memory * bars problem (BP) * Boolean factor analysis (BFA) * data mining * dimension reduction * Hebbian learning rule * information gain * likelihood maximization (LM) * neural network application * recurrent neural network * statistics Subject RIV: IN - Informatics, Computer Science Impact factor: 6.108, year: 2016

  15. A probabilistic design method for LMFBR fuel rods

    International Nuclear Information System (INIS)

    Peck, S.O.; Lovejoy, W.S.

    1977-01-01

    Fuel rod performance analyses for design purposes are dependent upon material properties, dimensions, and loads that are statistical in nature. Conventional design practice accounts for the uncertainties in relevant parameters by designing to a 'safety factor', set so as to assure safe operation. Arbitrary assignment of these safety factors, based upon a number of 'worst case' assumptions, may result in costly over-design. Probabilistic design methods provide a systematic way to reflect the uncertainties in design parameters. PECS-III is a computer code which employs Monte Carlo techniques to generate the probability density and distribution functions for time-to-failure and cumulative damage for sealed plenum LMFBR fuel rods on a single rod or whole core basis. In Monte Carlo analyses, a deterministic model (that maps single-valued inputs into single-valued outputs) is coupled to a statistical 'driver'. Uncertainties in the input are reflected by assigning probability densities to the input parameters. Dependent input variables are considered multivariate normal. Independent input variables may be arbitrarily distributed. Sample values are drawn from these input densities, and a complete analysis is done by the deterministic model to generate a sample point in the output distribution. This process is repeated many times, and the number of times each output value occurs is accumulated. The probability that some measure of rod performance will fall within given limits is estimated by the relative frequency with which the Monte Carlo samples fall within those limits.

  16. Directional dependency of air sampling

    International Nuclear Information System (INIS)

    1994-01-01

    A field study was performed by Idaho State University-Environmental Monitoring Laboratory (EML) to examine the directional dependency of low-volume air samplers. A typical continuous low volume air sampler contains a sample head that is mounted on the sampler housing either horizontally through one of four walls or vertically on an exterior wall 'looking down or up.' In 1992, a field study was undertaken to estimate sampling error and to detect the directional effect of sampler head orientation. Approximately 1/2 mile downwind from a phosphate plant (continuous source of alpha activity), four samplers were positioned in identical orientation alongside one sampler configured with the sample head 'looking down'. At least five consecutive weekly samples were collected. The alpha activity, beta activity, and the Be-7 activity collected on the particulate filter were analyzed to determine sampling error. Four sample heads were then oriented to the four different horizontal directions. Samples were collected for at least five weeks. Analysis of the alpha data can show the effect of sampler orientation to a known near source term. Analysis of the beta and Be-7 activity shows the effect of sampler orientation to a ubiquitous source term

  17. Correlation Immunity, Avalanche Features, and Other Cryptographic Properties of Generalized Boolean Functions

    Science.gov (United States)

    2017-09-01

    satisfying the strict avalanche criterion,” Discrete Math., vol. 185, pp. 29–39, 1998. [2] R.C. Bose, “On some connections between the design of... Discrete Appl. Math., vol. 149, pp. 73–86, 2005. [11] T.W. Cusick and P. Stănică, Cryptographic Boolean Functions and Applications, 2nd ed., San Diego... Stănică, “Bisecting binomial coefficients,” Discrete Appl. Math., vol. 227, pp. 70–83, 2017. [28] T. Martinsen, W. Meidl, and P. Stănică, “Generalized

  18. Duplicate Detection in Probabilistic Data

    NARCIS (Netherlands)

    Panse, Fabian; van Keulen, Maurice; de Keijzer, Ander; Ritter, Norbert

    2009-01-01

    Collected data often contains uncertainties. Probabilistic databases have been proposed to manage uncertain data. To combine data from multiple autonomous probabilistic databases, an integration of probabilistic data has to be performed. Until now, however, data integration approaches have focused

  19. Uncertainty and sensitivity analysis using probabilistic system assessment code. 1

    International Nuclear Information System (INIS)

    Honma, Toshimitsu; Sasahara, Takashi.

    1993-10-01

    This report presents the results obtained when applying the probabilistic system assessment code under development to the PSACOIN Level 0 intercomparison exercise organized by the Probabilistic System Assessment Code User Group in the Nuclear Energy Agency (NEA) of OECD. This exercise is one of a series designed to compare and verify probabilistic codes in the performance assessment of geological radioactive waste disposal facilities. The computations were performed using the Monte Carlo sampling code PREP and post-processor code USAMO. The submodels in the waste disposal system were described and coded according to the specification of the exercise. Besides the results required for the exercise, additional uncertainty and sensitivity analyses were performed and the details of these are also included. (author)

  20. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    Science.gov (United States)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
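
    For readers unfamiliar with probabilistic shaping, the sketch below builds a Maxwell-Boltzmann distribution over a 64QAM constellation and reports its entropy and mean symbol energy. The shaping parameter nu is an arbitrary assumption, and none of the paper's fiber, FEC, or AIR computations are reproduced here.

    # Minimal sketch of a Maxwell-Boltzmann shaped 64QAM input (illustrative only).
    import numpy as np

    levels = np.arange(-7, 8, 2)                       # 8-PAM amplitudes per dimension
    xi, xq = np.meshgrid(levels, levels)
    constellation = (xi + 1j * xq).ravel()             # 64QAM points

    nu = 0.05                                          # shaping parameter (assumed)
    p = np.exp(-nu * np.abs(constellation) ** 2)
    p /= p.sum()                                       # Maxwell-Boltzmann distribution

    entropy = -np.sum(p * np.log2(p))                  # bits per 2D symbol
    energy = np.sum(p * np.abs(constellation) ** 2)
    print(f"source entropy {entropy:.2f} bit/2D-symbol, mean energy {energy:.1f}")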

  1. Spatial-dependence recurrence sample entropy

    Science.gov (United States)

    Pham, Tuan D.; Yan, Hong

    2018-03-01

    Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, the sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
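
    A minimal sketch of the baseline sample entropy referred to above is given below; the spatial-dependence extension via the recurrence-plot co-occurrence matrix is not reproduced, and the test signals are arbitrary.

    # Baseline sample entropy: SampEn = -ln(A/B), where B counts template matches of
    # length m and A counts matches of length m+1 within tolerance r (Chebyshev distance).
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - m)])
            dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
            # exclude self-matches, count unordered pairs
            return (np.sum(dist <= r) - len(templates)) / 2
        b = count_matches(m)
        a = count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    rng = np.random.default_rng(1)
    print(sample_entropy(rng.standard_normal(300)))          # irregular -> larger value
    print(sample_entropy(np.sin(np.linspace(0, 30, 300))))   # regular  -> smaller value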

  2. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

    Full Text Available A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well-known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Having the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data from nanocrystalline cellulose.

  3. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    Science.gov (United States)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
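
    The blending step described above amounts to inverse-variance weighting of the seven per-channel Gaussian estimates. The sketch below illustrates the arithmetic with made-up numbers; the actual calibration curves and variances of the SVS are not used.

    # Inverse-variance fusion of seven Gaussian mass estimates (illustrative values).
    import numpy as np

    channel_mass_g = np.array([52.0, 49.5, 51.2, 50.8, 48.9, 53.1, 50.2])  # per-channel estimates
    channel_var = np.array([4.0, 2.5, 3.0, 1.5, 5.0, 6.0, 2.0])            # calibration variances

    weights = 1.0 / channel_var
    fused_mass = np.sum(weights * channel_mass_g) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)            # variance of the combined Gaussian
    print(f"mass ≈ {fused_mass:.1f} g  (sigma ≈ {np.sqrt(fused_var):.2f} g)")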

  4. Confluence reduction for probabilistic systems

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    In this presentation we introduce a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We proved that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To

  5. Probabilistic systems coalgebraically: A survey

    Science.gov (United States)

    Sokolova, Ana

    2011-01-01

    We survey the work on both discrete and continuous-space probabilistic systems as coalgebras, starting with how probabilistic systems are modeled as coalgebras and followed by a discussion of their bisimilarity and behavioral equivalence, mentioning results that follow from the coalgebraic treatment of probabilistic systems. It is interesting to note that, for different reasons, for both discrete and continuous probabilistic systems it may be more convenient to work with behavioral equivalence than with bisimilarity. PMID:21998490

  6. Probabilistic biological network alignment.

    Science.gov (United States)

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method showing that, without sacrificing the running time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.

  7. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR

    Directory of Open Access Journals (Sweden)

    Bochaton Audrey

    2007-06-01

    Full Text Available Abstract Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion This paper describes the conceptual reasoning behind

  8. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    Science.gov (United States)

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be

  9. Realization of a quantum Hamiltonian Boolean logic gate on the Si(001):H surface.

    Science.gov (United States)

    Kolmer, Marek; Zuzak, Rafal; Dridi, Ghassen; Godlewski, Szymon; Joachim, Christian; Szymonski, Marek

    2015-08-07

    The design and construction of the first prototypical QHC (Quantum Hamiltonian Computing) atomic scale Boolean logic gate is reported using scanning tunnelling microscope (STM) tip-induced atom manipulation on an Si(001):H surface. The NOR/OR gate truth table was confirmed by dI/dU STS (Scanning Tunnelling Spectroscopy) tracking how the surface states of the QHC quantum circuit on the Si(001):H surface are shifted according to the input logical status.

  10. Conditional Probabilistic Population Forecasting

    OpenAIRE

    Sanderson, Warren C.; Scherbov, Sergei; O'Neill, Brian C.; Lutz, Wolfgang

    2004-01-01

    Since policy-makers often prefer to think in terms of alternative scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy-makers because...

  11. A Probabilistic Safety Assessment of a Pyro-processed Waste Repository

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae

    2012-01-01

    A GoldSim template program for a safety assessment of a hybrid-type repository system, called A-KRS, in which two kinds of pyro-processed radioactive wastes, low-level metal wastes and ceramic high-level wastes that arise from the pyro-processing of PWR nuclear spent fuels, are disposed of, has been developed. This program is ready for both deterministic and probabilistic total system performance assessment and is able to evaluate nuclide release from the repository and further transport into the geosphere and biosphere under various normal, disruptive natural and manmade events and scenarios. The A-KRS has been probabilistically assessed with 9 selected input parameters, each of which has its own statistical distribution, for a normal release and transport scenario associated with nuclide release and transport in and around the repository. Probabilistic dose exposure rates to the farming exposure group have been evaluated. The sensitivity of the 9 selected parameters to the result has also been investigated to see which parameters are more sensitive and important to the exposure rates.

  12. Dependent interest and transition rates in life insurance

    DEFF Research Database (Denmark)

    Buchardt, Kristian

    2014-01-01

    For market consistent life insurance liabilities modelled with a multi-state Markov chain, it is of importance to consider the interest and transition rates as stochastic processes, for example in order to consider hedging possibilities of the risks, and for risk measurement. In the literature, this is usually done with an assumption of independence between the interest and transition rates. In this paper, it is shown how to valuate life insurance liabilities using affine processes for modelling dependent interest and transition rates. This approach leads to the introduction of so-called dependent forward rates. We propose a specific model for surrender modelling, and within this model the dependent forward rates are calculated, and the market value and the Solvency II capital requirement are examined for a simple savings contract.

  13. Radon exhalation and its dependence on moisture content from samples of soil and building materials

    International Nuclear Information System (INIS)

    Faheem, Munazza; Matiullah

    2008-01-01

    Indoor radon has long been recognized as a potential health hazard for mankind. Building materials are considered as one of the major sources of radon in the indoor environment. To study the radon exhalation rate and its dependence on moisture content, samples of soil and some common types of building materials (sand, cement, bricks and marble) were collected from Gujranwala, Gujrat, Hafizabad, Sialkot, Mandibahauddin and Narowal districts of the Punjab province (Pakistan). After processing, samples of 200 g each were placed in plastic vessels. CR-39 based NRPB detectors were placed at the top of these vessels, which were then hermetically sealed. After exposing to radon for 30 days within the closed vessels, the CR-39 detectors were processed. The radon exhalation rate was found to vary from 122±19 to 681±10 mBq m⁻² h⁻¹ with an average of 376±147 mBq m⁻² h⁻¹ in the soil samples, whereas averages of 212±34, 195±25, 231±30 and 292±35 mBq m⁻² h⁻¹ were observed in the bricks, sand, cement and marble samples, respectively. The dependence of exhalation on moisture content has also been studied. The radon exhalation rate was found to increase with an increase in moisture, reach its maximum value and then decrease with further increase in the water content.

  14. Probabilistic seismic hazards: Guidelines and constraints in evaluating results

    International Nuclear Information System (INIS)

    Sadigh, R.K.; Power, M.S.

    1989-01-01

    In conducting probabilistic seismic hazard analyses, consideration of the dispersion as well as the upper bounds on ground motion is of great significance. In particular, the truncation of ground motion levels at some upper limit would have a major influence on the computed hazard at the low-to-very-low probability levels. Additionally, other deterministic guidelines and constraints should be considered in evaluating the probabilistic seismic hazard results. In contrast to probabilistic seismic hazard evaluations, mean plus one standard deviation ground motions are typically used for deterministic estimates of ground motions from maximum events that may affect a structure. To be consistent with standard deterministic practice, these maximum estimates of ground motion values should be the highest level considered for the site. These maximum values should be associated with the largest possible event occurring at the site. Furthermore, the relationships between the ground motion level and probability of exceedance should reflect a transition from purely probabilistic assessments of ground motion at high probability levels, where there are multiple chances for events, to a deterministic upper bound ground motion at very low probability levels, where there is very limited opportunity for maximum events to occur. In Interplate Regions, where the seismic sources may be characterized by a high-to-very-high rate of activity, the deterministic bounds will be approached or exceeded by the computed probabilistic hazard values at annual probability of exceedance levels typically as high as 10⁻² to 10⁻³. Thus, at these or lower probability levels, probabilistically computed hazard values could be readily interpreted in the light of the deterministic constraints

  15. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  16. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
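
    The sampling-variance effect discussed in these two records can be illustrated with a toy two-stage projection matrix: lambda is the dominant eigenvalue, and re-estimating survival from small samples biases the mean estimate (Jensen's Inequality). The vital rates below are invented for illustration and are unrelated to the study's data.

    # Toy demonstration of small-sample bias in lambda estimates from a matrix model.
    import numpy as np

    true_survival = np.array([0.5, 0.6])          # stage-specific survival (assumed)
    fecundity = 1.2

    def lam(s1, s2):
        A = np.array([[0.0, fecundity],
                      [s1,  s2]])                 # 2-stage projection matrix
        return np.max(np.abs(np.linalg.eigvals(A)))

    rng = np.random.default_rng(2)
    for n in (10, 50, 500):                       # individuals sampled per stage
        est = [lam(rng.binomial(n, true_survival[0]) / n,
                   rng.binomial(n, true_survival[1]) / n) for _ in range(2000)]
        print(n, np.mean(est) - lam(*true_survival))   # bias shrinks with sample size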

  17. Conditional Probabilistic Population Forecasting

    OpenAIRE

    Sanderson, W.C.; Scherbov, S.; O'Neill, B.C.; Lutz, W.

    2003-01-01

    Since policy makers often prefer to think in terms of scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy makers because it allows them to answer "what if"...

  18. Conditional probabilistic population forecasting

    OpenAIRE

    Sanderson, Warren; Scherbov, Sergei; O'Neill, Brian; Lutz, Wolfgang

    2003-01-01

    Since policy-makers often prefer to think in terms of alternative scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy-makers because it allows them...

  19. Time-Dependent Behaviors of Granite: Loading-Rate Dependence, Creep, and Relaxation

    Science.gov (United States)

    Hashiba, K.; Fukui, K.

    2016-07-01

    To assess the long-term stability of underground structures, it is important to understand the time-dependent behaviors of rocks, such as their loading-rate dependence, creep, and relaxation. However, there have been fewer studies on crystalline rocks than on tuff, mudstone, and rock salt, because the high strength of crystalline rocks makes the detection of their time-dependent behaviors much more difficult. Moreover, studies on the relaxation, temporal change of stress and strain (TCSS) conditions, and relations between various time-dependent behaviors are scarce for not only granites, but also other rocks. In this study, previous reports on the time-dependent behaviors of granites were reviewed and various laboratory tests were conducted using Toki granite. These tests included an alternating-loading-rate test, creep test, relaxation test, and TCSS test. The results showed that the degree of time dependence of Toki granite is similar to other granites, and that the TCSS resembles the stress-relaxation curve and creep-strain curve. A viscoelastic constitutive model, proposed in a previous study, was modified to investigate the relations between the time-dependent behaviors in the pre- and post-peak regions. The modified model reproduced the stress-strain curve, creep, relaxation, and the results of the TCSS test. Based on a comparison of the results of the laboratory tests and numerical simulations, close relations between the time-dependent behaviors were revealed quantitatively.

  20. DEVELOPING AN EXCELLENT SEDIMENT RATING CURVE FROM ONE HYDROLOGICAL YEAR SAMPLING PROGRAMME DATA: APPROACH

    Directory of Open Access Journals (Sweden)

    Preksedis M. Ndomba

    2008-01-01

    Full Text Available This paper presents preliminary findings on the adequacy of one hydrological year sampling programme data in developing an excellent sediment rating curve. The study case is the 1DD1 subcatchment in the upstream of the Pangani River Basin (PRB), located in the North Eastern part of Tanzania. 1DD1 is the major runoff-sediment contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature, the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily-midway and intermittent-cross section sediment samples from a Depth Integrating sampler (D-74) were used to calibrate the subdaily automatic sediment pumping sampler (ISCO 6712) near bank point samples for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The result suggests that an excellent rating curve could be developed from one hydrological year sediment sampling programme data. This study has also found that the uncorrected rating curve underestimates sediment load. The degree of underestimation depends on the type of rating curve developed and the data used.
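
    A generic log-log rating curve with a log-normal bias correction, of the kind alluded to by the 'statistical bias estimators', can be sketched as follows. The discharge and sediment values are invented, and the correction shown is one common choice under a log-normal residual assumption, not necessarily the one used in the study.

    # Power-law sediment rating curve Qs = a * Q^b fitted in log space, with a
    # log-normal back-transform bias correction factor exp(s^2 / 2).
    import numpy as np

    discharge = np.array([5.0, 12.0, 20.0, 35.0, 60.0, 90.0])        # m3/s (assumed)
    sediment = np.array([8.0, 30.0, 55.0, 140.0, 310.0, 620.0])      # t/day (assumed)

    b, log_a = np.polyfit(np.log(discharge), np.log(sediment), 1)
    residuals = np.log(sediment) - (log_a + b * np.log(discharge))
    s2 = residuals.var(ddof=2)
    cf = np.exp(s2 / 2.0)                        # bias correction factor

    def rated_load(q):
        return cf * np.exp(log_a) * q ** b

    print(f"Qs ≈ {cf * np.exp(log_a):.2f} * Q^{b:.2f}")
    print("corrected load at Q = 50 m3/s:", rated_load(50.0))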

  1. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    Science.gov (United States)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    Formulation of mathematical learning goals now is not only oriented toward cognitive products, but also leads to cognitive processes, one of which is probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are required to develop probabilistic thinking as a foundation to learn probability at a higher level. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, which consists of prestructural probabilistic thinking, unistructural probabilistic thinking, multistructural probabilistic thinking and relational probabilistic thinking. This study aimed to analyze probability task completion based on the taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of a test of mathematical ability as having high math ability. The subjects were given probability tasks consisting of sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. Credibility of the data was established using time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level, indicating that the boy's level of probabilistic thinking was higher than the girl's. The results could contribute to curriculum developers in developing probability learning goals for elementary school students. Indeed, teachers could teach probability with regard to gender differences.

  2. Probabilistic safety assessment activities at Ignalina NPP

    International Nuclear Information System (INIS)

    Bagdonas, A.

    1999-01-01

    The Barselina Project was initiated in the summer of 1991. The project was a multilateral co-operation between Lithuania, Russia and Sweden up until phase 3, and phase 4 was performed as a bilateral co-operation between Lithuania and Sweden. The long-range objective is to establish common perspectives and unified bases for assessment of severe accident risks and needs for remedial measures for the RBMK reactors. During phase 3, from 1993 to 1994, a full scope Probabilistic Safety Analysis (PSA) model of the Ignalina Nuclear Power Plant unit 2 was developed to identify possible safety improvements of risk importance. The probabilistic methodology was applied on a plant specific basis for a channel type reactor of RBMK design. During phase 4, from 1994 to 1996, the PSA was further developed, taking into account plant changes, improved modelling methods and extended plant information concerning dependencies (area events, dynamic effects, electrical and signal dependencies). The model reflected the plant status before the outage in 1996. During phase 4+, from 1998 to 1999, the PSA model was upgraded taking into account the newest plant modifications. A new PSA model of the CPS/AZRT was developed. Modelling was based on the Single Failure Analysis

  3. Programming Cell Adhesion for On-Chip Sequential Boolean Logic Functions.

    Science.gov (United States)

    Qu, Xiangmeng; Wang, Shaopeng; Ge, Zhilei; Wang, Jianbang; Yao, Guangbao; Li, Jiang; Zuo, Xiaolei; Shi, Jiye; Song, Shiping; Wang, Lihua; Li, Li; Pei, Hao; Fan, Chunhai

    2017-08-02

    Programmable remodelling of cell surfaces enables high-precision regulation of cell behavior. In this work, we developed in vitro constructed DNA-based chemical reaction networks (CRNs) to program on-chip cell adhesion. We found that the RGD-functionalized DNA CRNs are entirely noninvasive when interfaced with the fluidic mosaic membrane of living cells. DNA toehold with different lengths could tunably alter the release kinetics of cells, which shows rapid release in minutes with the use of a 6-base toehold. We further demonstrated the realization of Boolean logic functions by using DNA strand displacement reactions, which include multi-input and sequential cell logic gates (AND, OR, XOR, and AND-OR). This study provides a highly generic tool for self-organization of biological systems.

  4. A General Framework for Probabilistic Characterizing Formulae

    DEFF Research Database (Denmark)

    Sack, Joshua; Zhang, Lijun

    2012-01-01

    Recently, a general framework on characteristic formulae was proposed by Aceto et al. It offers a simple theory that allows one to easily obtain characteristic formulae of many non-probabilistic behavioral relations. Our paper studies their techniques in a probabilistic setting. We provide a general method for determining characteristic formulae of behavioral relations for probabilistic automata using fixed-point probability logics. We consider such behavioral relations as simulations and bisimulations, probabilistic bisimulations, probabilistic weak simulations, and probabilistic forward...

  5. Fully probabilistic design: the way for optimizing of concrete structures

    Directory of Open Access Journals (Sweden)

    I. Laníková

    Full Text Available Some standards for the design of concrete structures (e.g. EC2 and the original ČSN 73 1201-86) allow a structure to be designed by several methods. This contribution documents the fact that even if a structure does not comply with the partial reliability factor method according to EC2, it can satisfy the conditions during the application of the fully probabilistic approach when using the same standard. From an example of the reliability of a prestressed spun concrete pole designed by the partial factor method and the fully probabilistic approach according to the Eurocode, it is evident that an expert should apply a more precise (though unfortunately more complicated) method in the limiting cases. The Monte Carlo method, modified by the Latin Hypercube Sampling (LHS) method, has been used for the calculation of reliability. Ultimate and serviceability limit states were checked for the partial factor method and fully probabilistic design. As a result of fully probabilistic design it is possible to obtain a more efficient design for a structure.
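
    A bare-bones version of Latin Hypercube Sampling of the kind mentioned above is sketched below: each input distribution is split into n equiprobable strata, one draw is taken per stratum, and the columns are permuted independently. The limit-state function and the input distributions are assumptions for illustration, not the pole design checked in the paper.

    # Latin Hypercube Sampling of normal inputs driving a simple reliability check.
    import numpy as np
    from scipy import stats

    def lhs_normal(n, means, sds, rng):
        # Stratified uniforms: one point per equiprobable bin, columns permuted independently.
        u = (rng.permuted(np.tile(np.arange(n), (len(means), 1)), axis=1).T
             + rng.uniform(size=(n, len(means)))) / n
        return stats.norm.ppf(u) * sds + means

    rng = np.random.default_rng(3)
    n = 10_000
    # Assumed inputs: concrete strength [MPa], prestress loss [%], wind load [kN]
    x = lhs_normal(n, means=np.array([40.0, 12.0, 25.0]), sds=np.array([4.0, 2.0, 5.0]), rng=rng)
    resistance = 2.0 * x[:, 0] * (1.0 - x[:, 1] / 100.0)     # assumed limit-state terms
    load_effect = 2.5 * x[:, 2]
    print("failure probability ≈", np.mean(resistance - load_effect < 0.0))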

  6. Probabilistic predictive modelling of carbon nanocomposites for medical implants design.

    Science.gov (United States)

    Chua, Matthew; Chui, Chee-Kong

    2015-04-01

    Modelling of the mechanical properties of carbon nanocomposites based on input variables like percentage weight of Carbon Nanotube (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagent properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of carbon nanocomposite samples with different CNT weight fractions displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probabilistic density function of the experimental data, a two-factor Central Composite Design (CCD) experimental matrix, based on strain and CNT weight fraction inputs with their corresponding stress distributions, was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method were subsequently validated with more tensile experiments and Finite Element (FE) studies. The method was subsequently demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties in implants of various compositions based on experimental data of samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. ENVIRONMENTAL DEPENDENCE OF THE GALAXY MERGER RATE IN A ΛCDM UNIVERSE

    International Nuclear Information System (INIS)

    Jian, Hung-Yu; Chiueh, Tzihong; Lin Lihwai

    2012-01-01

    We make use of four galaxy catalogs based on four different semi-analytical models (SAMs) implemented in the Millennium Simulation to study the environmental effects and the model dependence of the galaxy merger rate. We begin the analyses by finding that the galaxy merger rate in SAMs has a mild redshift evolution with luminosity-selected samples in the evolution-corrected B-band magnitude range, –21 ≤ M_B^e ≤ –19, consistent with the results of previous works. To study the environmental dependence of the galaxy merger rate, we adopt two estimators, the local overdensity (1 + δ_n), defined as the surface density from the nth nearest neighbor (n = 6 is chosen in this study), and the host halo mass M_h. We find that the galaxy merger rate F_mg shows a strong dependence on the local overdensity (1 + δ_n) and the dependence is similar at all redshifts. For the overdensity estimator, the merger rate F_mg is found to be about twenty times larger in the densest regions than in underdense ones in two of the four SAMs, while it is roughly four times higher in the other two. In other words, the discrepancies of the merger rate difference between the two extremes can differ by a factor of ∼5 depending on the SAMs adopted. On the other hand, for the halo mass estimator, F_mg does not monotonically increase with the host halo mass M_h but peaks in the M_h range between 10^12 and 10^13 h^–1 M_☉, which corresponds to group environments. The high merger rate in high local density regions corresponds primarily to the high merger rate in group environments. In addition, we also study the merger probability of 'close pairs' identified using the projected separation and the line-of-sight velocity difference C_mg and the merger timescale T_mg; these are two important quantities for observations to convert the pair fraction N_c into the galaxy merger rate. We discover that T_mg has a weak dependence on environment and different SAMs, and is about 2 Gyr at z

  8. A probabilistic Hu-Washizu variational principle

    Science.gov (United States)

    Liu, W. K.; Belytschko, T.; Besterfield, G. H.

    1987-01-01

    A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.

  9. Memristive Probabilistic Computing

    KAUST Repository

    Alahmadi, Hamzah

    2017-10-01

    In the era of the Internet of Things and Big Data, unconventional techniques are rising to accommodate the large size of data and the resource constraints. New computing structures are advancing based on non-volatile memory technologies and different processing paradigms. Additionally, the intrinsic resiliency of current applications leads to the development of creative techniques in computations. In those applications, approximate computing provides a perfect fit to optimize the energy efficiency while compromising on the accuracy. In this work, we build probabilistic adders based on stochastic memristors. Probabilistic adders are analyzed with respect to the stochastic behavior of the underlying memristors. Multiple adder implementations are investigated and compared. The memristive probabilistic adder provides a different approach from the typical approximate CMOS adders. Furthermore, it allows for high area savings and design flexibility between performance and power saving. To reach a similar performance level as approximate CMOS adders, the memristive adder achieves 60% power savings. An image-compression application is investigated using the memristive probabilistic adders with the performance and the energy trade-off.

  10. On the role of the gas environment, electron-dose-rate, and sample on the image resolution in transmission electron microscopy

    DEFF Research Database (Denmark)

    Ek, Martin; Jespersen, Sebastian Pirel Fredsgaard; Damsgaard, Christian Danvad

    2016-01-01

    The introduction of gaseous atmospheres in transmission electron microscopy offers the possibility of studying materials in situ under chemically relevant environments. The presence of a gas environment can degrade the resolution. Surprisingly, this phenomenon has been shown to depend on the electron-dose-rate. In this article, we demonstrate that both the total and areal electron-dose-rates work as descriptors for the dose-rate-dependent resolution and are related through the illumination area. Furthermore, the resolution degradation was observed to occur gradually over time after initializing the illumination of the sample and gas by the electron beam. The resolution was also observed to be sensitive to the electrical conductivity of the sample. These observations can be explained by a charge buildup over the electron-illuminated sample area, caused by the beam–gas–sample interaction...

  11. Construction of a fuzzy and all Boolean logic gates based on DNA

    DEFF Research Database (Denmark)

    M. Zadegan, Reza; Jepsen, Mette D E; Hildebrandt, Lasse

    2015-01-01

    to the operation of the six Boolean logic gates AND, NAND, OR, NOR, XOR, and XNOR. The logic gate complex is shown to work also when implemented in a three-dimensional DNA origami box structure, where it controlled the position of the lid in a closed or open position. Implementation of multiple microRNA sensitive...... DNA locks on one DNA origami box structure enabled fuzzy logical operation that allows biosensing of complex molecular signals. Integrating logic gates with DNA origami systems opens a vast avenue to applications in the fields of nanomedicine for diagnostics and therapeutics....

  12. Probabilistic data integration and computational complexity

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: If all information available (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or by sampling the probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Due to one type of information being too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave-equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency solving a data integration problem and lead to biased results, and under

  13. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.

  14. Boolean modelling reveals new regulatory connections between transcription factors orchestrating the development of the ventral spinal cord.

    KAUST Repository

    Lovrics, Anna

    2014-11-14

    We have assembled a network of cell-fate determining transcription factors that play a key role in the specification of the ventral neuronal subtypes of the spinal cord on the basis of published transcriptional interactions. Asynchronous Boolean modelling of the network was used to compare simulation results with reported experimental observations. Such comparison highlighted the need to include additional regulatory connections in order to obtain the fixed point attractors of the model associated with the five known progenitor cell types located in the ventral spinal cord. The revised gene regulatory network reproduced previously observed cell state switches between progenitor cells observed in knock-out animal models or in experiments where the transcription factors were overexpressed. Furthermore the network predicted the inhibition of Irx3 by Nkx2.2 and this prediction was tested experimentally. Our results provide evidence for the existence of an as yet undescribed inhibitory connection which could potentially have significance beyond the ventral spinal cord. The work presented in this paper demonstrates the strength of Boolean modelling for identifying gene regulatory networks.
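
    The fixed-point attractors mentioned above can be found by checking, for every state, whether all update rules leave it unchanged; such a state is a fixed point of the asynchronous dynamics regardless of the update order. The three-gene network below is a hypothetical toy example, not the published spinal-cord network.

    # Enumerate fixed points of a small Boolean network (toy rules, assumed).
    from itertools import product

    rules = {
        "Nkx2.2": lambda s: s["Shh"] and not s["Pax6"],
        "Pax6":   lambda s: not s["Nkx2.2"],
        "Shh":    lambda s: s["Shh"],                  # treated as an external input
    }

    def fixed_points(rules):
        genes = sorted(rules)
        for values in product([False, True], repeat=len(genes)):
            state = dict(zip(genes, values))
            if all(rules[g](state) == state[g] for g in genes):
                yield state

    for fp in fixed_points(rules):
        print({g: int(v) for g, v in fp.items()})      # each line is one attractor state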

  15. Mathematical simulation of cascade-probabilistic functions for charged particles

    International Nuclear Information System (INIS)

    Kupchishin, A.A.; Kupchishin, A.I.; Smygaleva, T.A.

    1998-01-01

    Analytical expressions for cascade-probabilistic functions (CPF) for electrons, protons, α-particles and ions, taking into account energy losses, are obtained. A mathematical analysis of these functions is carried out and the main properties of the functions are determined. Algorithms for the CPF are developed and their computer calculations were conducted. Regularities in the behavior of the functions in dependence on the initial particle energy, atomic number and registration depth are established. The book is intended for specialists in mathematical simulation of radiation defects, solid state physics, elementary particle physics and applied mathematics. There are 3 chapters in the book: 1. Cascade-probabilistic functions for electrons; 2. CPF for protons and α-particles; 3. CPF taking into account energy losses of ions. (author)

  16. Probabilistic Logic and Probabilistic Networks

    NARCIS (Netherlands)

    Haenni, R.; Romeijn, J.-W.; Wheeler, G.; Williamson, J.

    2009-01-01

    While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches

  17. Development and application of a living probabilistic safety assessment tool: Multi-objective multi-dimensional optimization of surveillance requirements in NPPs considering their ageing

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko; Gjorgiev, Blaže

    2014-01-01

    The benefits of utilizing the probabilistic safety assessment towards improvement of nuclear power plant safety are presented in this paper. Namely, a nuclear power plant risk reduction can be achieved by risk-informed optimization of the deterministically-determined surveillance requirements. A living probabilistic safety assessment tool for time-dependent risk analysis on component, system and plant level is developed. The study herein focuses on the application of this living probabilistic safety assessment tool as a computer platform for multi-objective multi-dimensional optimization of the surveillance requirements of selected safety equipment seen from the aspect of risk-informed reasoning. The living probabilistic safety assessment tool is based on a newly developed model for calculating time-dependent unavailability of ageing safety equipment within nuclear power plants. By coupling the time-dependent unavailability model with a commercial software used for probabilistic safety assessment modelling on plant level, the frames of the new platform, i.e. the living probabilistic safety assessment tool, are established. In this way, the time-dependent core damage frequency is obtained and is then utilized as the first objective function within a multi-objective multi-dimensional optimization case study presented within this paper. The test and maintenance costs are designated as the second, and the incurred dose due to performing the test and maintenance activities as the third objective function. The obtained results underline, in general, the usefulness and importance of a living probabilistic safety assessment, seen as a dynamic probabilistic safety assessment tool as opposed to the conventional, time-averaged unavailability-based probabilistic safety assessment. The results of the optimization, in particular, indicate that test intervals derived as optimal differ from the deterministically-determined ones defined within the existing technical specifications

  18. Uncertainty analysis for probabilistic pipe fracture evaluations in LBB applications

    International Nuclear Information System (INIS)

    Rahman, S.; Ghadiali, N.; Wilkowski, G.

    1997-01-01

    During the NRC's Short Cracks in Piping and Piping Welds Program at Battelle, a probabilistic methodology was developed to conduct fracture evaluations of circumferentially cracked pipes for application to leak-rate detection. Later, in the IPIRG-2 program, several parameters that may affect leak-before-break and other pipe flaw evaluations were identified. This paper presents new results from several uncertainty analyses to evaluate the effects of normal operating stresses, normal plus safe-shutdown earthquake stresses, off-centered cracks, restraint of pressure-induced bending, and dynamic and cyclic loading rates on the conditional failure probability of piping systems in BWRs and PWRs. For each parameter, the sensitivity of the conditional probability of failure and, hence, its importance for probabilistic leak-before-break evaluations were determined

  19. Uncertainty analysis for probabilistic pipe fracture evaluations in LBB applications

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, S.; Ghadiali, N.; Wilkowski, G.

    1997-04-01

    During the NRC's Short Cracks in Piping and Piping Welds Program at Battelle, a probabilistic methodology was developed to conduct fracture evaluations of circumferentially cracked pipes for application to leak-rate detection. Later, in the IPIRG-2 program, several parameters that may affect leak-before-break and other pipe flaw evaluations were identified. This paper presents new results from several uncertainty analyses to evaluate the effects of normal operating stresses, normal plus safe-shutdown earthquake stresses, off-centered cracks, restraint of pressure-induced bending, and dynamic and cyclic loading rates on the conditional failure probability of piping systems in BWRs and PWRs. For each parameter, the sensitivity of the conditional probability of failure and, hence, its importance for probabilistic leak-before-break evaluations were determined.

  20. Modeling bidirectional reflectance of forests and woodlands using Boolean models and geometric optics

    Science.gov (United States)

    Strahler, Alan H.; Jupp, David L. B.

    1990-01-01

    Geometric-optical discrete-element mathematical models for forest canopies have been developed using the Boolean logic and models of Serra. The geometric-optical approach is considered to be particularly well suited to describing the bidirectional reflectance of forest and woodland canopies, where the concentration of leaf material within crowns and the resulting between-tree gaps make plane-parallel, radiative-transfer models inappropriate. The approach leads to invertible formulations, in which the spatial and directional variance provides the means for remote estimation of tree crown size, shape, and total cover from remotely sensed imagery.

  1. Fast algorithm for probabilistic bone edge detection (FAPBED)

    Science.gov (United States)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  2. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects progressively weaken the system's resistance. The system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to develop according to a non-decreasing time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) results in a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
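    As a rough, hypothetical illustration of the model structure described above (not the authors' implementation), the Python sketch below simulates defect initiation as a non-homogeneous Poisson process by thinning and grows each defect with a stationary gamma process until one reaches a critical depth; the shape function, proportionality constant and gamma-process parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed NHPP intensity lambda(t) = c * w(t): w(t) is the known shape function,
# c the proportionality constant; all numbers below are purely illustrative.
c, horizon = 0.15, 40.0                     # scale [defects/year], service life [years]
w = lambda t: 1.0 + 0.05 * t                # known time-dependent shape function

# Simulate defect initiation times by thinning against the maximum intensity.
lam_max = c * w(horizon)
t, init_times = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)
    if t > horizon:
        break
    if rng.uniform() < c * w(t) / lam_max:
        init_times.append(t)

# Grow each defect with a stationary gamma process (increments ~ Gamma(a*dt, b)).
a, b, critical_depth, dt = 1.2, 0.35, 6.0, 0.25   # growth parameters [mm], time step [years]
def failure_time(t0):
    depth, t = 0.0, t0
    while t < horizon:
        t += dt
        depth += rng.gamma(a * dt, b)
        if depth >= critical_depth:
            return t
    return np.inf

fail_times = [failure_time(t0) for t0 in init_times]
print(f"{len(init_times)} defects initiated; first wall penetration at "
      f"t = {min(fail_times, default=np.inf):.1f} years")
```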

  3. Probabilistic evaluations for CANTUP computer code analysis improvement

    International Nuclear Information System (INIS)

    Florea, S.; Pavelescu, M.

    2004-01-01

    Structural analysis with the finite element method is today a common way to evaluate and predict the behavior of structural assemblies subject to harsh conditions, in order to ensure their safety and reliability during operation. A CANDU 600 fuel channel is an example of an assembly working in harsh conditions, in which, in addition to corrosive and thermal aggression, long-term irradiation interferes, with implicit consequences for the evolution of material properties. That leads inevitably to scatter in the time-dependent material properties, their dynamic evolution being subject to a great degree of uncertainty. These are the reasons for developing, alongside deterministic evaluations with computer codes, probabilistic and statistical methods to predict the structural component response. This work initiates the extension of the deterministic thermomechanical evaluation of fuel channel components to a probabilistic structural mechanics approach, starting from the deterministic analysis performed with the CANTUP computer code, which was developed to predict the long-term mechanical behavior of the pressure tube - calandria tube assembly. To this purpose, the structure of the deterministic CANTUP computer code has been reviewed. The code has been adapted from the LAHEY 77 platform to the Microsoft Developer Studio - Fortran Power Station platform. In order to perform probabilistic evaluations, a part was added to the deterministic code which, using a subroutine from the IMSL library of the Microsoft Developer Studio - Fortran Power Station platform, generates pseudo-random values of a specified parameter. A normal distribution around the deterministic value with a 5% standard deviation was simulated for the Young's modulus material property in order to verify the statistical calculation of the creep behavior. The tube deflection and effective stresses were the properties subject to probabilistic evaluation. All the values of these properties obtained for all the values for
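    The probabilistic extension described, sampling Young's modulus from a normal distribution centred on the deterministic value with a 5% standard deviation, can be sketched generically as below. The beam-deflection formula merely stands in for the CANTUP creep model, and all numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic value of Young's modulus (illustrative number, not CANDU data).
E_det = 97.0e9            # Pa
sigma = 0.05 * E_det      # 5 % standard deviation, as in the study

# Placeholder structural response standing in for the CANTUP creep calculation:
# mid-span deflection of a simply supported tube under a distributed load.
def deflection(E, q=2.0e3, L=6.0, I=1.2e-5):
    return 5.0 * q * L**4 / (384.0 * E * I)

E_samples = rng.normal(E_det, sigma, size=10_000)
w_samples = deflection(E_samples)

print(f"deterministic deflection: {deflection(E_det)*1e3:.2f} mm")
print(f"sampled deflection: {w_samples.mean()*1e3:.2f} +/- {w_samples.std()*1e3:.2f} mm")
```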

  4. Latent Profile Analysis of Schizotypy and Paranormal Belief: Associations with Probabilistic Reasoning Performance

    OpenAIRE

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew

    2018-01-01

    This study assessed the extent to which within-individual variation in schizotypy and paranormal belief influenced performance on probabilistic reasoning tasks. A convenience sample of 725 non-clinical adults completed measures assessing schizotypy (Oxford-Liverpool Inventory of Feelings and Experiences; O-Life brief), belief in the paranormal (Revised Paranormal Belief Scale; RPBS) and probabilistic reasoning (perception of randomness, conjunction fallacy, paranormal perception of randomness...

  5. Probabilistic programmable quantum processors

    International Nuclear Information System (INIS)

    Buzek, V.; Ziman, M.; Hillery, M.

    2004-01-01

    We analyze how to improve the performance of probabilistic programmable quantum processors. We show how the probability of success of the probabilistic processor can be enhanced by using the processor in loops. In addition, we show that arbitrary SU(2) transformations of qubits can be encoded in the program state of a universal programmable probabilistic quantum processor. The probability of success of this processor can be enhanced by a systematic correction of errors via conditional loops. Finally, we show that all our results can also be generalized to qudits. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  6. Fisher information at the edge of chaos in random Boolean networks.

    Science.gov (United States)

    Wang, X Rosalind; Lizier, Joseph T; Prokopenko, Mikhail

    2011-01-01

    We study the order-chaos phase transition in random Boolean networks (RBNs), which have been used as models of gene regulatory networks. In particular we seek to characterize the phase diagram in information-theoretic terms, focusing on the effect of the control parameters (activity level and connectivity). Fisher information, which measures how much system dynamics can reveal about the control parameters, offers a natural interpretation of the phase diagram in RBNs. We report that this measure is maximized near the order-chaos phase transitions in RBNs, since this is the region where the system is most sensitive to its parameters. Furthermore, we use this study of RBNs to clarify the relationship between Shannon and Fisher information measures.
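    For readers unfamiliar with the model class, a minimal synchronous random Boolean network, with connectivity K and activity bias p as the control parameters mentioned above, can be simulated as in the sketch below; this is only the underlying dynamics, not the authors' Fisher-information estimator.

```python
import numpy as np

rng = np.random.default_rng(42)
N, K, p = 50, 2, 0.5                 # nodes, in-degree (connectivity), activity bias

# Each node receives K random inputs and a random Boolean lookup table whose
# entries are 1 with probability p.
inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
tables = rng.random((N, 2 ** K)) < p

def step(state):
    # Index each node's lookup table by the binary word formed by its K inputs.
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = (idx << 1) | state[inputs[:, k]]
    return tables[np.arange(N), idx].astype(int)

state = rng.integers(0, 2, size=N)
for _ in range(100):                 # iterate the synchronous update
    state = step(state)
print("state after 100 updates:", state)
```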

  7. Analyzing State Sequences with Probabilistic Suffix Trees: The PST R Package

    Directory of Open Access Journals (Sweden)

    Alexis Gabadinho

    2016-08-01

    Full Text Available This article presents the PST R package for categorical sequence analysis with probabilistic suffix trees (PSTs), i.e., structures that store variable-length Markov chains (VLMCs). VLMCs make it possible to model high-order dependencies in categorical sequences with parsimonious models based on simple estimation procedures. The package is specifically adapted to the field of social sciences, as it allows for VLMC models to be learned from sets of individual sequences possibly containing missing values; in addition, the package is extended to account for case weights. This article describes how a VLMC model is learned from one or more categorical sequences and stored in a PST. The PST can then be used for sequence prediction, i.e., to assign a probability to whole observed or artificial sequences. This feature supports data mining applications such as the extraction of typical patterns and outliers. This article also introduces original visualization tools for both the model and the outcomes of sequence prediction. Other features such as functions for pattern mining and artificial sequence generation are described as well. The PST package also allows for the computation of probabilistic divergence between two models and the fitting of segmented VLMCs, where sub-models fitted to distinct strata of the learning sample are stored in a single PST.
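    As a toy illustration of the variable-length Markov chain idea behind a PST (not the PST package itself, which is written in R), the sketch below predicts each symbol from the longest previously observed context and multiplies the resulting conditional probabilities into a sequence probability; the smoothing constant is an arbitrary choice.

```python
from collections import defaultdict

def fit_counts(sequences, max_depth=3):
    """Count which symbols follow every context of length 0..max_depth (toy VLMC)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i, sym in enumerate(seq):
            for d in range(min(max_depth, i) + 1):
                counts[tuple(seq[i - d:i])][sym] += 1
    return counts

def sequence_prob(seq, counts, max_depth=3, alpha=0.5):
    """Probability of seq using the longest matching context, with add-alpha smoothing."""
    vocab = {s for dist in counts.values() for s in dist}
    p = 1.0
    for i, sym in enumerate(seq):
        for d in range(min(max_depth, i), -1, -1):
            ctx = tuple(seq[i - d:i])
            if ctx in counts:
                dist = counts[ctx]
                p *= (dist[sym] + alpha) / (sum(dist.values()) + alpha * len(vocab))
                break
    return p

counts = fit_counts([list("ababbabab"), list("aabbaabb")])
print(f"P('abab') = {sequence_prob(list('abab'), counts):.4f}")
```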

  8. HPV detection rate in saliva may depend on the immune system efficiency.

    Science.gov (United States)

    Adamopoulou, Maria; Vairaktaris, Eleftherios; Panis, Vassilis; Nkenke, Emeka; Neukam, Friedreich W; Yapijakis, Christos

    2008-01-01

    Human papilloma virus (HPV) has been established as a major etiological factor of anogenital cancer. In addition, HPV has also been implicated in oral carcinogenesis but its detection rates appear to be highly variable, depending on the patient population tested, the molecular methodology used, as well as the type of oral specimen investigated. For example, saliva is an oral fluid that may play a role in HPV transmission, although the detection rates of the virus are lower than in tissue. Recent evidence has indicated that HPV-related pathology is increased in the oral cavity of human immunodeficiency virus (HIV)-positive individuals. In order to investigate whether the presence of different HPV types in saliva depends on immune system efficiency, oral fluid samples of patients with oral cancer and without any known immune deficiency were compared with those of HIV-positive individuals. Saliva samples were collected from 68 patients with oral squamous cell carcinoma and 34 HIV seropositive individuals. HPV DNA sequences were detected by L1 consensus polymerase chain reaction (PCR), followed by restriction fragment length polymorphism (RFLP) analysis and DNA sequencing for HPV typing. HPV DNA was detected in 7/68 (10.3%) of the oral cancer patients and in 12/34 (35.3%) of the HIV-positive individuals, a highly significant difference (p = 0.006; odds ratio 4.753; 95% confidence interval 1.698-13.271). Among HPV-positive samples, the prevalence of HPV types associated with high oncogenic risk was similar in oral cancer and HIV-positive cases (71.4% and 66.7%, respectively). In both groups, the most common HPV type was high-risk 16 (50% and 42.8%, respectively). Although a similar pattern of HPV high-risk types was detected in oral cancer and HIV-positive cases, the quantitative detection of HPV in saliva significantly depended on immune system efficiency. Furthermore, the significantly increased detection rates of HPV in saliva of HIV-positive individuals may be
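    The reported association can be checked, at least approximately, from the counts given in the abstract. The sketch below uses SciPy's Fisher exact test; since the abstract does not state which test produced p = 0.006, the p-value here is only expected to be of the same order, while the sample odds ratio reproduces the reported 4.753.

```python
from scipy.stats import fisher_exact

# Rows: HIV-positive individuals, oral-cancer patients; columns: HPV-positive, HPV-negative.
table = [[12, 34 - 12],
         [7, 68 - 7]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"sample odds ratio = {odds_ratio:.3f}")   # (12*61)/(22*7) = 4.753, as reported
print(f"Fisher exact p    = {p_value:.4f}")      # same order of magnitude as p = 0.006
```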

  9. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with evermore complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work-in-progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.

  10. Event dependent sampling of recurrent events

    DEFF Research Database (Denmark)

    Kvist, Tine Kajsa; Andersen, Per Kragh; Angst, Jules

    2010-01-01

    The effect of event-dependent sampling of processes consisting of recurrent events is investigated when analyzing whether the risk of recurrence increases with event count. We study the situation where processes are selected for study if an event occurs in a certain selection interval. Motivation...... retrospective and prospective disease course histories are used. We examine two methods to correct for the selection depending on which data are used in the analysis. In the first case, the conditional distribution of the process given the pre-selection history is determined. In the second case, an inverse...

  11. Improved detection of chemical substances from colorimetric sensor data using probabilistic machine learning

    Science.gov (United States)

    Mølgaard, Lasse L.; Buus, Ole T.; Larsen, Jan; Babamoradi, Hamid; Thygesen, Ida L.; Laustsen, Milan; Munk, Jens Kristian; Dossi, Eleftheria; O'Keeffe, Caroline; Lässig, Lina; Tatlow, Sol; Sandström, Lars; Jakobsen, Mogens H.

    2017-05-01

    We present a data-driven machine learning approach to detect drug- and explosives-precursors using colorimetric sensor technology for air-sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present a fully-integrated portable prototype for air sampling with disposable sensing chips and automated data acquisition has been developed. The prototype allows for fast, user-friendly sampling, which has made it possible to produce large datasets of colorimetric data for different target analytes in laboratory and simulated real-world application scenarios. To make use of the highly multi-variate data produced from the colorimetric chip, a number of machine learning techniques are employed to provide reliable classification of target analytes from confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate of analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions. The robustness of the colorimetric sensor has been evaluated in a series of experiments focusing on the amphetamine precursor phenylacetone as well as the improvised explosives precursor hydrogen peroxide. The analysis demonstrates that the system is able to detect analytes in clean air and mixed with substances that occur naturally in real-world sampling scenarios. The technology under development in CRIM-TRACK has the potential to become an effective tool for controlling trafficking of illegal drugs, for explosives detection, or for other law enforcement applications.
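    A generic stand-in for the pipeline described, dimensionality reduction followed by a probabilistic classifier whose confidence is used to flag unreliable measurements, is sketched below with scikit-learn on synthetic data; the feature dimensions, class structure and 0.8 confidence threshold are assumptions, not CRIM-TRACK values.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for multi-variate colorimetric responses (3 classes, e.g.
# clean air, analyte, confounder); real chips yield hundreds of dye responses.
X, y = make_classification(n_samples=1500, n_features=300, n_informative=25,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Dimensionality reduction + probabilistic classifier, as in the abstract.
clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
confident = proba.max(axis=1) >= 0.8        # assumed threshold for a "reliable" reading
accuracy = (clf.predict(X_te)[confident] == y_te[confident]).mean()
print(f"detection rate on confident samples: {accuracy:.2%} "
      f"({confident.mean():.0%} of measurements kept)")
```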

  12. PROBABILISTIC RELATIONAL MODELS OF COMPLETE IL-SEMIRINGS

    OpenAIRE

    Tsumagari, Norihiro

    2012-01-01

    This paper studies basic properties of probabilistic multirelations, which generalize the semantic domain of probabilistic systems, and then provides two probabilistic models of complete IL-semirings using probabilistic multirelations. It is also shown that these models need not be models of complete idempotent semirings.

  13. Probabilistic safety analysis vs probabilistic fracture mechanics -relation and necessary merging

    International Nuclear Information System (INIS)

    Nilsson, Fred

    1997-01-01

    A comparison is made between some general features of probabilistic fracture mechanics (PFM) and probabilistic safety assessment (PSA) in its standard form. We conclude that the result from a PSA is a numerically expressed level of confidence in the system based on the state of current knowledge; it is thus not an objective measure of risk. It is important to carefully define the precise nature of the probabilistic statement and relate it to a well defined situation. Standardisation of PFM methods is necessary. PFM seems to be the only way to obtain estimates of the pipe break probability. Service statistics are of doubtful value because of scarcity of data and statistical inhomogeneity. Collection of service data should be directed towards the occurrence of growing cracks.

  14. Logical Attractors: a Boolean Approach to the Dynamics of Psychosis

    Science.gov (United States)

    Kupper, Z.; Hoffmann, H.

    A Boolean modeling approach to attractors in the dynamics of psychosis is presented: Kinetic Logic, originating from R. Thomas, describes systems on an intermediate level between a purely verbal, qualitative description and a description using nonlinear differential equations. With this method we may model impact, feedback and temporal evolution, as well as analyze the resulting attractors. In our previous research the method has been applied to general and more specific questions in the dynamics of psychotic disorders. In this paper a model is introduced that describes different dynamical patterns of chronic psychosis in the context of vocational rehabilitation. It also proves useful in formulating and exploring possible treatment strategies. Finally, some of the limitations and benefits of Kinetic Logic as a modeling tool for psychology and psychiatry are discussed.

  15. Nuclear fuel cycle cost analysis using a probabilistic simulation technique

    International Nuclear Information System (INIS)

    Won, Il Ko; Jong, Won Choi; Chul, Hyung Kang; Jae, Sol Lee; Kun, Jai Lee

    1998-01-01

    A simple approach was described to incorporate the Monte Carlo simulation technique into a fuel cycle cost estimate. As a case study, the once-through and recycle fuel cycle options were tested with some alternatives (i.e., the change of distribution type for input parameters), and the simulation results were compared with the values calculated by a deterministic method. A three-estimate approach was used for converting cost inputs into the statistical parameters of assumed probabilistic distributions. It was indicated that the Monte Carlo simulation by a Latin Hypercube Sampling technique and subsequent sensitivity analyses were useful for examining uncertainty propagation of fuel cycle costs, and could more efficiently provide information to decision makers than a deterministic method. It was shown from the change of distribution types of input parameters that the values calculated by the deterministic method were set around the 40th-50th percentile of the output distribution function calculated by probabilistic simulation. Assuming lognormal distribution of inputs, however, the values calculated by the deterministic method were set around the 85th percentile of the output distribution function calculated by probabilistic simulation. It was also indicated from the results of the sensitivity analysis that the front-end components were generally more sensitive than the back-end components, of which the uranium purchase cost was the most important factor of all. It also showed that the discount rate contributed substantially to the fuel cycle cost, ranking third to fifth among all components. The results of this study could be useful in applications to other options, such as the DUPIC (Direct Use of PWR spent fuel In CANDU reactors) cycle with high cost uncertainty.
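    A minimal sketch of the sampling idea, a Latin Hypercube design mapped through lognormal cost distributions and summed into a total unit cost, is given below; the cost components, their medians and shape parameters are invented and do not reproduce the study's inputs or its percentile findings.

```python
import numpy as np
from scipy.stats import qmc, lognorm

n = 5000

# Illustrative unit-cost components (USD/kgU) with lognormal uncertainty (invented).
medians = np.array([60.0, 110.0, 25.0, 400.0])   # U purchase, enrichment, fabrication, back end
shapes  = np.array([0.35, 0.20, 0.15, 0.40])     # lognormal sigma per component

# Latin Hypercube design in [0, 1)^d, transformed through the inverse CDFs.
u = qmc.LatinHypercube(d=len(medians), seed=7).random(n)
costs = np.column_stack([lognorm.ppf(u[:, j], s=shapes[j], scale=medians[j])
                         for j in range(len(medians))])
total = costs.sum(axis=1)

deterministic = medians.sum()                     # "deterministic" point estimate
print(f"deterministic total: {deterministic:.0f} USD/kgU")
print("sampled 5th/50th/95th percentiles:", np.round(np.percentile(total, [5, 50, 95])))
```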

  16. Probabilistic methods for rotordynamics analysis

    Science.gov (United States)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
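    As a toy version of the eigenvalue-based instability criterion (plain Monte Carlo rather than the fast probability integration or adaptive importance sampling developed in the paper), the sketch below estimates the probability of instability of a two-degree-of-freedom rotor model with random damping and cross-coupled stiffness; all distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def unstable(M, C, K):
    """Eigenvalue criterion: the system is unstable if any root has positive real part."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    return np.linalg.eigvals(A).real.max() > 0.0

M = np.eye(2)
n_samples, failures = 20_000, 0
for _ in range(n_samples):
    # Random direct damping and cross-coupled stiffness (a common destabilizing
    # mechanism in rotor-bearing systems); numbers are purely illustrative.
    c = abs(rng.normal(0.08, 0.03))
    q = rng.normal(0.10, 0.05)
    C = c * np.eye(2)
    K = np.array([[1.0, q], [-q, 1.0]])
    failures += unstable(M, C, K)

print(f"estimated probability of instability: {failures / n_samples:.3%}")
```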

  17. A 'cost-effective' probabilistic model to select the dominant factors affecting the variation of the component failure rate

    International Nuclear Information System (INIS)

    Kirchsteiger, C.

    1992-11-01

    Within the framework of a Probabilistic Safety Assessment (PSA), the component failure rate λ is a key parameter in the sense that the study of its behavior gives the essential information for estimating the current values as well as the trends in the failure probabilities of interest. Since there is an infinite variety of possible underlying factors which might cause changes in λ (e.g. operating time, maintenance practices, component environment, etc.), an 'importance ranking' process of these factors is considered most desirable to prioritize research efforts. To be 'cost-effective', the modeling effort must be small, i.e. essentially involving no estimation of additional parameters other than λ. In this paper, using a multivariate data analysis technique and various statistical measures, such a 'cost-effective' screening process has been developed. Dominant factors affecting the failure rate of any components of interest can easily be identified and the appropriateness of current research plans (e.g. on the necessity of performing aging studies) can be validated. (author)

  18. Probabilistic Models for Solar Particle Events

    Science.gov (United States)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  19. Operational intervention levels in a nuclear emergency, general concepts and a probabilistic approach

    International Nuclear Information System (INIS)

    Lauritzen, B.; Baeverstam, U.; Naadland Holo, E.; Sinkko, K.

    1997-12-01

    This report deals with Operational Intervention Levels (OILs) in a nuclear or radiation emergency. OILs are defined as the values of environmental measurements, in particular dose rate measurements, above which specific protective actions should be carried out in emergency exposure situations. The derivation and the application of OILs are discussed, and an overview of the presently adopted values is provided, with emphasis on the situation in the Nordic countries. A new, probabilistic approach to derive OILs is presented and the method is illustrated by calculating dose rate OILs in a simplified setting. Contrary to the standard approach, the probabilistic approach allows for optimization of OILs. It is argued that optimized OILs may be much larger than the presently adopted or suggested values. It is recommended that the probabilistic approach be further developed and employed in determining site specific OILs and in optimizing environmental measuring strategies. (au)

  20. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models; in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time, space, and processor bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  1. Probabilistic Structural Analysis Theory Development

    Science.gov (United States)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.

  2. Ultrahigh-Spectral-Efficiency WDM/SDM Transmission Using PDM-1024-QAM Probabilistic Shaping With Adaptive Rate

    DEFF Research Database (Denmark)

    Hu, Hao; Yankov, Metodi Plamenov; Da Ros, Francesco

    2018-01-01

    We demonstrate wavelength-division-multiplexed (WDM) and space-division-multiplexed (SDM) transmission of probabilistically shaped polarization-division-multiplexed (PDM) 1024-state quadrature amplitude modulation (QAM) channels over a 9.7-km single-mode 30-core fiber, achieving aggregated spectr...

  3. 14th International Probabilistic Workshop

    CERN Document Server

    Taerwe, Luc; Proske, Dirk

    2017-01-01

    This book presents the proceedings of the 14th International Probabilistic Workshop that was held in Ghent, Belgium in December 2016. Probabilistic methods are currently of crucial importance for research and developments in the field of engineering, which face challenges presented by new materials and technologies and rapidly changing societal needs and values. Contemporary needs related to, for example, performance-based design, service-life design, life-cycle analysis, product optimization, assessment of existing structures and structural robustness give rise to new developments as well as accurate and practically applicable probabilistic and statistical engineering methods to support these developments. These proceedings are a valuable resource for anyone interested in contemporary developments in the field of probabilistic engineering applications.

  4. On the effectiveness of surface assimilation in probabilistic nowcasts of planetary boundary layer profiles

    Science.gov (United States)

    Rostkier-Edelstein, Dorita; Hacker, Joshua

    2013-04-01

    Surface observations comprise a broad, inexpensive and reliable source of information about the state of the near-surface planetary boundary layer (PBL). Operational data assimilation systems have encountered several difficulties in effectively assimilating them, among other reasons due to their local-scale representativeness, the transient coupling between the surface and the atmosphere aloft and the balance constraints usually used. A long-term goal of this work is to find an efficient system for probabilistic PBL nowcasting that can be employed wherever surface observations are present. Earlier work showed that surface observations can be an important source of information with a single column model (SCM) and an ensemble filter (EF). Here we extend that work to quantify the probabilistic skill of ensemble SCM predictions with a model including added complexity. We adopt a factor separation analysis to quantify the contribution of surface assimilation relative to that of selected model components (parameterized radiation and externally imposed horizontal advection) to the probabilistic skill of the system, and of any beneficial or detrimental interactions between them. To assess the real utility of the flow-dependent covariances estimated with the EF and of the SCM of the PBL we compare the skill of the SCM/EF system to that of a reference one based on climatological covariances and a 30-min persistence model. It consists of a dressing technique, whereby a deterministic 3D mesoscale forecast (e.g. from the WRF model) is adjusted and dressed with uncertainty using a seasonal sample of mesoscale forecasts and surface forecast errors. Results show that assimilation of surface observations can improve deterministic and probabilistic profile predictions more significantly than major model improvements. Flow-dependent covariances estimated with the SCM/EF show a clear advantage over the use of climatological covariances when the flow is characterized by wide variability, when

  5. Convolution product construction of interactions in probabilistic physical models

    International Nuclear Information System (INIS)

    Ratsimbarison, H.M.; Raboanary, R.

    2007-01-01

    This paper aims to give a probabilistic construction of interactions which may be relevant for building physical theories such as interacting quantum field theories. We start with the path integral definition of the partition function in quantum field theory, which recalls the probabilistic nature of this physical theory. From a Gaussian law considered as the free theory, an interacting theory is constructed by a nontrivial convolution product between the free theory and an interacting term which is also a probability law. The resulting theory, again a probability law, exhibits two properties already present in present-day theories of interactions such as gauge theory: the interaction term does not depend on the free term, and two different free theories can be implemented with the same interaction.

  6. Probabilistic aspects of harmonic emission of large offshore wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Christian F. [Energinet.dk, Fredericia (Denmark); Bak, Claus L. [Aalborg Univ. (Denmark). Dept. of Energy Technology; Kocewiak, Lukasz; Hjerrild, Jesper [DONG Energy, Skaerbaek (Denmark); Berthelsen, Kasper K. [Aalborg Univ. (Denmark). Dept. of Mathematical Sciences

    2011-07-01

    In this article, a new probabilistic method of assessment of harmonic emission of large offshore wind farms is presented. Based on measurements from the British wind farm Burbo Banks, probability density functions are estimated for the dominating low order harmonic currents injected by a single turbine. The degree and type of dependence between the harmonic emission and the operating point of a single turbine are established. A model of Burbo Banks, suitable for harmonic load flow studies, is created in DIgSILENT Power Factory along with a DPL-script that deals with the probabilistic issues of the harmonic emission. The simulated harmonic distortion at the PCC is compared to measurements. This reveals some difficulties regarding harmonic load flow studies. The harmonic background distortion in the grid to which the wind farm is connected must be included in the study. Furthermore, a very detailed representation of the frequency dependent short circuit impedance must be used before sufficiently accurate results can be obtained from the model. (orig.)

  7. Systems dependability assessment

    CERN Document Server

    Aubry, Jean-François

    2015-01-01

    Presents recent developments of probabilistic assessment of systems dependability based on stochastic models, including graph theory, finite state automaton and language theory, for both dynamic and hybrid contexts.

  8. Origin and Elimination of Two Global Spurious Attractors in Hopfield-Like Neural Network Performing Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2010-01-01

    Roč. 73, č. 7-9 (2010), s. 1394-1404 ISSN 0925-2312 R&D Projects: GA ČR GA205/09/1079; GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean factor analysis * Hopfield neural Network * unsupervised learning * dimension reduction * data mining Subject RIV: IN - Informatics, Computer Science Impact factor: 1.429, year: 2010

  9. Qubits and quantum Hamiltonian computing performances for operating a digital Boolean 1/2-adder

    Science.gov (United States)

    Dridi, Ghassen; Faizy Namarvar, Omid; Joachim, Christian

    2018-04-01

    Quantum Boolean (1 + 1) digits 1/2-adders are designed with 3 qubits for the quantum computing (Qubits) and 4 quantum states for the quantum Hamiltonian computing (QHC) approaches. Detailed analytical solutions are provided to analyse the time operation of those different 1/2-adder gates. QHC is more robust to noise than Qubits and requires about the same amount of energy for running its 1/2-adder logical operations. QHC is faster in time than Qubits but its logical output measurement takes longer.

  10. On Attributes, Roles, and Dependencies in Description Logics and the Ackermann Case of the Decision Problem

    DEFF Research Database (Denmark)

    Toman, David; Weddel, Grant Edwin

    2001-01-01

    We present a decision procedure for the logical implication problem of a Boolean complete DL dialect that includes attributes, roles, inverse roles and a new concept constructor that is capable of expressing a variety of equality- and order-generating dependencies. The procedure underlies a mapping o...

  11. Some probabilistic aspects of fracture

    International Nuclear Information System (INIS)

    Thomas, J.M.

    1982-01-01

    Some probabilistic aspects of fracture in structural and mechanical components are examined. The principles of fracture mechanics, material quality and inspection uncertainty are formulated into a conceptual and analytical framework for prediction of failure probability. The role of probabilistic fracture mechanics in a more global context of risk and optimization of decisions is illustrated. An example, where Monte Carlo simulation was used to implement a probabilistic fracture mechanics analysis, is discussed. (orig.)

  12. Strain-rate dependent plasticity in thermo-mechanical transient analysis

    International Nuclear Information System (INIS)

    Rashid, Y.R.; Sharabi, M.N.

    1980-01-01

    The thermo-mechanical transient behavior of fuel element cladding and other reactor components is generally governed by the strain-rate properties of the material. Relevant constitutive modeling requires extensive material data in the form of strain-rate response as a function of true stress, temperature, time and environmental conditions, which can then be fitted within a theoretical framework of an inelastic constitutive model. In this paper, we present a constitutive formulation that deals continuously with the entire strain-rate range and has the desirable advantage of utilizing existing material data. The derivation makes use of a strain-rate-sensitive stress-strain curve and a strain-rate-dependent yield surface. By postulating a strain-rate-dependent von Mises yield function and a strain-rate-dependent kinematic hardening rule, we are able to derive incremental stress-strain relations that describe the strain-rate behavior in the entire deformation range spanning high strain-rate plasticity and creep. The model is sufficiently general to apply to any material and loading history for which data are available. (orig.)

  13. THE METHOD OF CONSTRUCTING A BOOLEAN FORMULA OF A POLYGON IN THE DISJUNCTIVE NORMAL FORM

    Directory of Open Access Journals (Sweden)

    A. A. Butov

    2014-01-01

    Full Text Available The paper focuses on finalizing the method of finding a polygon Boolean formula in disjunctive normal form, described in the previous article [1]. An improved method eliminates the drawback associated with the existence of a class of problems for which the solution is only approximate. The proposed method always allows an exact solution to be found. The method can be used, in particular, in systems for the computer-aided design of integrated circuit topology.

  14. Stability analysis for discrete-time stochastic memristive neural networks with both leakage and probabilistic delays.

    Science.gov (United States)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E

    2018-06-01

    This paper is concerned with the global exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays and probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. The role of linguistic experience in the processing of probabilistic information in production.

    Science.gov (United States)

    Gustafson, Erin; Goldrick, Matthew

    2018-01-01

    Speakers track the probability that a word will occur in a particular context and utilize this information during phonetic processing. For example, content words that have high probability within a discourse tend to be realized with reduced acoustic/articulatory properties. Such probabilistic information may influence L1 and L2 speech processing in distinct ways (reflecting differences in linguistic experience across groups and the overall difficulty of L2 speech processing). To examine this issue, L1 and L2 speakers performed a referential communication task, describing sequences of simple actions. The two groups of speakers showed similar effects of discourse-dependent probabilistic information on production, suggesting that L2 speakers can successfully track discourse-dependent probabilities and use such information to modulate phonetic processing.

  16. Predictors of social anxiety in an opioid dependent sample and a control sample

    OpenAIRE

    Shand, Fiona L.; Degenhardt, Louisa; Nelson, Elliot C.; Mattick, Richard P.

    2010-01-01

    Compared to other mental health problems, social anxiety is under-acknowledged amongst opioid dependent populations. This study aimed to assess levels of social anxiety and identify its predictors in an opioid dependent sample and a matched control group. Opioid dependent participants (n = 1385) and controls (n = 417) completed the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS) and a diagnostic interview. Regression analyses were used to test a range of predictors of s...

  17. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms; dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviations (expectation values) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were proven to be possible at constant time complexity.

  18. A probabilistic bridge safety evaluation against floods.

    Science.gov (United States)

    Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho

    2016-01-01

    To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a systematic and nonlinear problem, MPP-based reliability analyses are not suitable. A sampling approach such as a Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experiences and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil property and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation and maintain satisfactory variation.
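    Without the Bayesian least-squares SVM response surface or the HEC-RAS hydraulic simulation, a bare-bones Monte Carlo version of the series-system reliability calculation might look like the sketch below; the five limit-state functions and all distributions are placeholders, not the bridge studied in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 200_000

# Illustrative random variables (units and distributions are placeholders).
h   = rng.gumbel(3.0, 0.5, n)       # water surface elevation [m]
v   = rng.lognormal(0.5, 0.25, n)   # water velocity [m/s]
ds  = rng.lognormal(0.5, 0.35, n)   # local scour depth [m]
phi = rng.normal(35.0, 2.5, n)      # soil friction angle [deg]
w   = rng.gumbel(0.6, 0.15, n)      # wind load [kN/m^2]

# Five placeholder limit-state functions g_i (failure when g_i < 0); a real study
# would derive these from hydraulic, structural and geotechnical capacity models.
g = np.column_stack([
    5.0 - h,                        # overtopping
    4.5 - 0.5 * v**2,               # hydrodynamic demand on piers
    3.5 - ds,                       # scour reaching foundation depth
    phi - 25.0 - 2.0 * ds,          # bearing capacity with scoured soil
    1.2 - w,                        # wind-induced demand
])

system_failure = (g < 0).any(axis=1)        # series system: any limit state violated
pf = system_failure.mean()
print(f"system failure probability ~ {pf:.3e}, reliability index ~ {-norm.ppf(pf):.2f}")
```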

  19. Pharmacogenetic analysis of opioid dependence treatment dose and dropout rate.

    Science.gov (United States)

    Crist, Richard C; Li, James; Doyle, Glenn A; Gilbert, Alex; Dechairo, Bryan M; Berrettini, Wade H

    2018-01-01

    Currently, no pharmacogenetic tests for selecting an opioid-dependence pharmacotherapy have been approved by the US Food and Drug Administration. Determine the effects of variants in 11 genes on dropout rate and dose in patients receiving methadone or buprenorphine/naloxone (ClinicalTrials.gov Identifier: NCT00315341). Variants in six pharmacokinetic genes (CYP1A2, CYP2B6, CYP2C19, CYP2C9, CYP2D6, CYP3A4) and five pharmacodynamic genes (HTR2A, OPRM1, ADRA2A, COMT, SLC6A4) were genotyped in samples from a 24-week, randomized, open-label trial of methadone and buprenorphine/naloxone for the treatment of opioid dependence (n = 764; 68.7% male). Genotypes were then used to determine the metabolism phenotype for each pharmacokinetic gene. Phenotypes or genotypes for each gene were analyzed for association with dropout rate and mean dose. Genotype for 5-HTTLPR in the SLC6A4 gene was nominally associated with dropout rate when the methadone and buprenorphine/naloxone groups were combined. When the most significant variants associated with dropout rate were analyzed using pairwise analyses, SLC6A4 (5-HTTLPR) and COMT (Val158Met; rs4860) had nominally significant associations with dropout rate in methadone patients. None of the genes analyzed in the study was associated with mean dose of methadone or buprenorphine/naloxone. This study suggests that functional polymorphisms related to synaptic dopamine or serotonin levels may predict dropout rates during methadone treatment. Patients with the S/S genotype at 5-HTTLPR in SLC6A4 or the Val/Val genotype at Val158Met in COMT may require additional treatment to improve their chances of completing addiction treatment. Replication in other methadone patient populations will be necessary to ensure the validity of these findings.

  20. Probabilistic methods used in NUSS

    International Nuclear Information System (INIS)

    Fischer, J.; Giuliani, P.

    1985-01-01

    Probabilistic considerations are used implicitly or explicitly in all technical areas. In the NUSS codes and guides the two areas of design and siting are those where more use is made of these concepts. A brief review of the relevant documents in these two areas is made in this paper. It covers the documents where either probabilistic considerations are implied or where probabilistic approaches are recommended in the evaluation of situations and of events. In the siting guides the review mainly covers the area of seismic, hydrological and external man-made events analysis, as well as some aspects of meteorological extreme events analysis. Probabilistic methods are recommended in the design guides but they are not made a requirement. There are several reasons for this, mainly lack of reliable data and the absence of quantitative safety limits or goals against which to judge the design analysis. As far as practical, engineering judgement should be backed up by quantitative probabilistic analysis. Examples are given and the concept of design basis as used in NUSS design guides is explained. (author)

  1. Reinforcer magnitude and rate dependency: evaluation of resistance-to-change mechanisms.

    Science.gov (United States)

    Pinkston, Jonathan W; Ginsburg, Brett C; Lamb, Richard J

    2014-10-01

    Under many circumstances, reinforcer magnitude appears to modulate the rate-dependent effects of drugs such that when schedules arrange for relatively larger reinforcer magnitudes rate dependency is attenuated compared with behavior maintained by smaller magnitudes. The current literature on resistance to change suggests that increased reinforcer density strengthens operant behavior, and such strengthening effects appear to extend to the temporal control of behavior. As rate dependency may be understood as a loss of temporal control, the effects of reinforcer magnitude on rate dependency may be due to increased resistance to disruption of temporally controlled behavior. In the present experiments, pigeons earned different magnitudes of grain during signaled components of a multiple FI schedule. Three drugs, clonidine, haloperidol, and morphine, were examined. All three decreased overall rates of key pecking; however, only the effects of clonidine were attenuated as reinforcer magnitude increased. An analysis of within-interval performance found rate-dependent effects for clonidine and morphine; however, these effects were not modulated by reinforcer magnitude. In addition, we included prefeeding and extinction conditions, standard tests used to measure resistance to change. In general, rate-decreasing effects of prefeeding and extinction were attenuated by increasing reinforcer magnitudes. Rate-dependent analyses of prefeeding showed rate-dependency following those tests, but in no case were these effects modulated by reinforcer magnitude. The results suggest that a resistance-to-change interpretation of the effects of reinforcer magnitude on rate dependency is not viable.

  2. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
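    To make the setting concrete, the sketch below recovers a sparse one-dimensional Hermite PC expansion from fewer samples than basis functions; it substitutes scikit-learn's LASSO for the ℓ1-minimization formulation referred to above and uses none of the coherence-optimal sampling machinery, so it is only a caricature of the paper's setup.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

P, n = 30, 20                             # basis size and sample count (n < P: compressive regime)
c_true = np.zeros(P)
c_true[[0, 2, 7]] = [1.0, 0.6, -0.35]     # sparse "true" PC coefficients

# Sample the input under its natural (standard normal) distribution and build the
# measurement matrix of He_j(x)/sqrt(j!), orthonormal with respect to that measure.
x = rng.standard_normal(n)
norms = np.array([math.sqrt(math.factorial(j)) for j in range(P)])
A = hermevander(x, P - 1) / norms
y = A @ c_true + 0.01 * rng.standard_normal(n)

# l1-regularized recovery (LASSO as a stand-in for the paper's l1-minimization).
fit = Lasso(alpha=1e-2, fit_intercept=False, max_iter=100_000).fit(A, y)
print("recovered coefficients above 0.05:", np.flatnonzero(np.abs(fit.coef_) > 0.05))
```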

  3. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  4. Evaluation of Probabilistic Disease Forecasts.

    Science.gov (United States)

    Hughes, Gareth; Burnett, Fiona J

    2017-10-01

    The statistical evaluation of probabilistic disease forecasts often involves calculation of metrics defined conditionally on disease status, such as sensitivity and specificity. However, for the purpose of disease management decision making, metrics defined conditionally on the result of the forecast (predictive values) are also important, although less frequently reported. In this context, the application of scoring rules in the evaluation of probabilistic disease forecasts is discussed. An index of separation with application in the evaluation of probabilistic disease forecasts, described in the clinical literature, is also considered and its relation to scoring rules illustrated. Scoring rules provide a principled basis for the evaluation of probabilistic forecasts used in plant disease management. In particular, the decomposition of scoring rules into interpretable components is an advantageous feature of their application in the evaluation of disease forecasts.
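    Since the abstract refers to scoring rules and their decomposition into interpretable components, a short sketch of the Brier score and its Murphy decomposition (reliability, resolution, uncertainty) on invented forecast data follows.

```python
import numpy as np

# Invented probabilistic disease forecasts (risk of an outbreak) and observed outcomes.
p = np.array([0.9, 0.8, 0.8, 0.7, 0.5, 0.5, 0.3, 0.2, 0.2, 0.1])
o = np.array([1,   1,   0,   1,   1,   0,   0,   0,   1,   0  ])

brier = np.mean((p - o) ** 2)

# Murphy decomposition over the distinct forecast values (bins):
# Brier = reliability - resolution + uncertainty.
obar = o.mean()
rel = res = 0.0
for pk in np.unique(p):
    mask = p == pk
    ok = o[mask].mean()
    rel += mask.mean() * (pk - ok) ** 2
    res += mask.mean() * (ok - obar) ** 2
unc = obar * (1 - obar)

print(f"Brier = {brier:.3f}  (reliability {rel:.3f} - resolution {res:.3f} + uncertainty {unc:.3f})")
print(f"check: {rel - res + unc:.3f}")
```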

  5. Probabilistic safety analysis using microcomputer

    International Nuclear Information System (INIS)

    Futuro Filho, F.L.F.; Mendes, J.E.S.; Santos, M.J.P. dos

    1990-01-01

    The main steps in the execution of a Probabilistic Safety Assessment (PSA) are presented in this report, such as the study of the system description, construction of event trees and fault trees, and the calculation of the overall unavailability of the systems. The use of microcomputers in performing some of these tasks is also presented, highlighting the main characteristics of software able to perform the job adequately. A sample case of fault tree construction and calculation is presented, using the PSAPACK software, distributed by the IAEA (International Atomic Energy Agency) for training purposes. (author)

  6. Probabilistic Modeling of the Renal Stone Formation Module

    Science.gov (United States)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously

  7. Strain rate dependency of laser sintered polyamide 12

    Directory of Open Access Journals (Sweden)

    Cook J.E.T.

    2015-01-01

    Full Text Available Parts processed by Additive Manufacturing can now be found across a wide range of applications, such as those in the aerospace and automotive industry in which the mechanical response must be optimised. Many of these applications are subjected to high rate or impact loading, yet it is believed that there is no prior research on the strain rate dependence in these materials. This research investigates the effect of strain rate and laser energy density on laser sintered polyamide 12. In the study presented here, parts produced using four different laser sintered energy densities were exposed to uniaxial compression tests at strain rates ranging from 10⁻³ to 10⁺³ s⁻¹ at room temperature, and the dependence on these parameters is presented.

  8. Probabilistic assessments of fuel performance

    International Nuclear Information System (INIS)

    Kelppe, S.; Ranta-Puska, K.

    1998-01-01

    The probabilistic Monte Carlo Method, coupled with quasi-random sampling, is applied for the fuel performance analyses. By using known distributions of fabrication parameters and real power histories with their randomly selected combinations, and by making a large number of ENIGMA code calculations, one expects to find out the state of the whole reactor fuel. Good statistics requires thousands of runs. A sample case representing VVER-440 reactor fuel indicates relatively low fuel temperatures and mainly athermal fission gas release, if any. The rod internal pressure remains typically below 2.5 MPa, which leaves a large margin to the system pressure of 12 MPa. Gap conductance, an essential parameter in the accident evaluations, shows no decrease from its start-of-life value. (orig.)

  9. The pseudo-Boolean optimization approach to form the N-version software structure

    Science.gov (United States)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for its solution. In this view, exploiting heuristic strategies is more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. These algorithms take into account the specific features discovered in the objective function. Practical experiments have shown the advantage of using these algorithm modifications because they reduce the search space.
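
    The abstract does not give the details of MVP, but the general idea of searching a pseudo-Boolean objective by adapting selection probabilities can be sketched with a generic probability-vector (PBIL-style) search. The objective below is a toy stand-in for an N-version cost/reliability trade-off; none of it is the authors' algorithm.

        # Generic probability-vector search over Boolean strings (a PBIL-style sketch,
        # not the authors' MVP).  The objective is a toy N-version trade-off.
        import numpy as np

        def objective(x):
            reliability = 1.0 - np.prod(1.0 - 0.9 * x)   # at least one version works
            cost = 0.05 * x.sum()                        # penalty per selected version
            return reliability - cost

        rng = np.random.default_rng(1)
        n_bits, pop, rate, iters = 12, 50, 0.1, 200
        p = np.full(n_bits, 0.5)                  # bit-selection probabilities

        for _ in range(iters):
            X = (rng.random((pop, n_bits)) < p).astype(float)
            scores = np.array([objective(x) for x in X])
            best = X[scores.argmax()]
            p = (1.0 - rate) * p + rate * best    # shift probabilities toward the best string
            p = p.clip(0.02, 0.98)                # keep exploring

        print("best found:", best.astype(int), "score:", scores.max().round(3))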

  10. Modality dependency of familiarity ratings of Japanese words.

    Science.gov (United States)

    Amano, S; Kondo, T; Kakehi, K

    1995-07-01

    Familiarity ratings for a large number of aurally and visually presented Japanese words were measured for 11 subjects, in order to investigate the modality dependency of familiarity. The correlation coefficient between auditory and visual ratings was .808, which is lower than that observed for English words, suggesting that a substantial portion of the mental lexicon is modality dependent. It was shown that the modality dependency is greater for low-familiarity words than it is for medium- or high-familiarity words. This difference between the low- and the medium- or high-familiarity words has a relationship to orthography. That is, the dependency is larger in words consisting only of kanji, which may have multiple pronunciations and usually represent meaning, than it is in words consisting only of hiragana or katakana, which have a single pronunciation and usually do not represent meaning. These results indicate that the idiosyncratic characteristics of Japanese orthography contribute to the modality dependency.

  11. Probabilistic Analysis of the Quality Calculus

    DEFF Research Database (Denmark)

    Nielson, Hanne Riis; Nielson, Flemming

    2013-01-01

    We consider a fragment of the Quality Calculus, previously introduced for defensive programming of software components such that it becomes natural to plan for default behaviour in case the ideal behaviour fails due to unreliable communication. This paper develops a probabilistically based trust...... analysis supporting the Quality Calculus. It uses information about the probabilities that expected input will be absent in order to determine the trustworthiness of the data used for controlling the distributed system; the main challenge is to take accord of the stochastic dependency between some...

  12. Method and system for dynamic probabilistic risk assessment

    Science.gov (United States)

    Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)

    2013-01-01

    The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.

  13. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.

  14. Probabilistic broadcasting of mixed states

    International Nuclear Information System (INIS)

    Li Lvjun; Li Lvzhou; Wu Lihua; Zou Xiangfu; Qiu Daowen

    2009-01-01

    It is well known that the non-broadcasting theorem proved by Barnum et al is a fundamental principle of quantum communication. As we are aware, optimal broadcasting (OB) is the only method to broadcast noncommuting mixed states approximately. In this paper, motivated by the probabilistic cloning of quantum states proposed by Duan and Guo, we propose a new way for broadcasting noncommuting mixed states: probabilistic broadcasting (PB), and we present a sufficient condition for PB of mixed states. To a certain extent, we generalize the probabilistic cloning theorem from pure states to mixed states, and in particular, we generalize the non-broadcasting theorem, since the case that commuting mixed states can be exactly broadcast can be thought of as a special instance of PB where the success ratio is 1. Moreover, we discuss probabilistic local broadcasting (PLB) of separable bipartite states.

  15. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet...... Publication: www.jcss.ethz.ch; 2001] and of the COST action E24 ‘Reliability of Timber Structures' [COST Action E 24, Reliability of timber structures. Several meetings and Publications, Internet Publication: http://www.km.fgg.uni-lj.si/coste24/coste24.htm; 2005]. The present proposal is based on discussions...... and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties...

  16. Improved detection of congestive heart failure via probabilistic symbolic pattern recognition and heart rate variability metrics.

    Science.gov (United States)

    Mahajan, Ruhi; Viangteeravat, Teeradache; Akbilgic, Oguz

    2017-12-01

    A timely diagnosis of congestive heart failure (CHF) is crucial to evade a life-threatening event. This paper presents a novel probabilistic symbol pattern recognition (PSPR) approach to detect CHF in subjects from their cardiac interbeat (R-R) intervals. PSPR discretizes each continuous R-R interval time series by mapping them onto an eight-symbol alphabet and then models the pattern transition behavior in the symbolic representation of the series. The PSPR-based analysis of the discretized series from 107 subjects (69 normal and 38 CHF subjects) yielded discernible features to distinguish normal subjects and subjects with CHF. In addition to PSPR features, we also extracted features using the time-domain heart rate variability measures such as average and standard deviation of R-R intervals. An ensemble of bagged decision trees was used to classify two groups resulting in a five-fold cross-validation accuracy, specificity, and sensitivity of 98.1%, 100%, and 94.7%, respectively. However, a 20% holdout validation yielded an accuracy, specificity, and sensitivity of 99.5%, 100%, and 98.57%, respectively. Results from this study suggest that features obtained with the combination of PSPR and long-term heart rate variability measures can be used in developing automated CHF diagnosis tools. Copyright © 2017 Elsevier B.V. All rights reserved.
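
    The core of the symbolic step can be sketched as follows: discretize the R-R series onto an eight-symbol alphabet and estimate first-order transition probabilities to use as features. The equal-quantile binning used here is an assumption; the paper's exact mapping may differ, and the series below is synthetic.

        import numpy as np

        def symbolize(rr, n_symbols=8):
            # Map each interval to 0..7 by quantile, so symbols are roughly equiprobable.
            edges = np.quantile(rr, np.linspace(0, 1, n_symbols + 1)[1:-1])
            return np.digitize(rr, edges)

        def transition_matrix(symbols, n_symbols=8):
            T = np.zeros((n_symbols, n_symbols))
            for a, b in zip(symbols[:-1], symbols[1:]):
                T[a, b] += 1
            row_sums = T.sum(axis=1, keepdims=True)
            return np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)

        rr = 0.8 + 0.05 * np.random.default_rng(2).standard_normal(5000)  # fake R-R series (s)
        P = transition_matrix(symbolize(rr))
        # Entries of P (e.g. flattened) can then serve as features for a classifier.
        print(P.round(2))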

  17. Probabilistic Determination of Green Infrastructure Pollutant Removal Rates from the International Stormwater BMP Database

    Science.gov (United States)

    Gilliom, R.; Hogue, T. S.; McCray, J. E.

    2017-12-01

    There is a need for improved parameterization of stormwater best management practices (BMP) performance estimates to improve modeling of urban hydrology, planning and design of green infrastructure projects, and water quality crediting for stormwater management. Percent removal is commonly used to estimate BMP pollutant removal efficiency, but there is general agreement that this approach has significant uncertainties and is easily affected by site-specific factors. Additionally, some fraction of monitored BMPs have negative percent removal, so it is important to understand the probability that a BMP will provide the desired water quality function versus exacerbating water quality problems. The widely used k-C* equation has been shown to provide a more adaptable and accurate method to model BMP contaminant attenuation, and previous work has begun to evaluate the strengths and weaknesses of the k-C* method. However, no systematic method exists for obtaining first-order removal rate constants needed to use the k-C* equation for stormwater BMPs; thus there is minimal application of the method. The current research analyzes existing water quality data in the International Stormwater BMP Database to provide screening-level parameterization of the k-C* equation for selected BMP types and analysis of factors that skew the distribution of efficiency estimates from the database. Results illustrate that while certain BMPs are more likely to provide desired contaminant removal than others, site- and design-specific factors strongly influence performance. For example, bioretention systems show both the highest and lowest removal rates of dissolved copper, total phosphorous, and total nitrogen. Exploration and discussion of this and other findings will inform the application of the probabilistic pollutant removal rate constants. Though data limitations exist, this research will facilitate improved accuracy of BMP modeling and ultimately aid decision-making for stormwater quality
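
    For reference, the first-order k-C* relation referred to above is commonly written in the treatment-wetland and stormwater literature (the exact formulation used in this work may differ) as

        C_out = C* + (C_in - C*) * exp(-k / q)

    where C_in and C_out are the inlet and outlet concentrations, C* is an irreducible background concentration, k is the first-order areal removal rate constant, and q is the hydraulic loading rate (flow per unit BMP area). Fitting k and C* to monitored inflow/outflow pairs is what yields the removal rate constants discussed here.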

  18. Supernova rates from the SUDARE VST-Omegacam search II. Rates in a galaxy sample

    Science.gov (United States)

    Botticella, M. T.; Cappellaro, E.; Greggio, L.; Pignata, G.; Della Valle, M.; Grado, A.; Limatola, L.; Baruffolo, A.; Benetti, S.; Bufano, F.; Capaccioli, M.; Cascone, E.; Covone, G.; De Cicco, D.; Falocco, S.; Haeussler, B.; Harutyunyan, V.; Jarvis, M.; Marchetti, L.; Napolitano, N. R.; Paolillo, M.; Pastorello, A.; Radovich, M.; Schipani, P.; Tomasella, L.; Turatto, M.; Vaccari, M.

    2017-02-01

    Aims: This is the second paper of a series in which we present measurements of the supernova (SN) rates from the SUDARE survey. The aim of this survey is to constrain the core collapse (CC) and Type Ia SN progenitors by analysing the dependence of their explosion rate on the properties of the parent stellar population averaging over a population of galaxies with different ages in a cosmic volume and in a galaxy sample. In this paper, we study the trend of the SN rates with the intrinsic colours, the star formation activity and the masses of the parent galaxies. To constrain the SN progenitors we compare the observed rates with model predictions assuming four progenitor models for SNe Ia with different distribution functions of the time intervals between the formation of the progenitor and the explosion, and a mass range of 8-40 M⊙ for CC SN progenitors. Methods: We considered a galaxy sample of approximately 130 000 galaxies and a SN sample of approximately 50 events. The wealth of photometric information for our galaxy sample allows us to apply the spectral energy distribution (SED) fitting technique to estimate the intrinsic rest frame colours, the stellar mass and star formation rate (SFR) for each galaxy in the sample. The galaxies have been separated into star-forming and quiescent galaxies, exploiting both the rest frame U-V vs. V-J colour-colour diagram and the best fit values of the specific star formation rate (sSFR) from the SED fitting. Results: We found that the SN Ia rate per unit mass is higher by a factor of six in the star-forming galaxies with respect to the passive galaxies, identified as such both on the U-V vs. V-J colour-colour diagram and for their sSFR. The SN Ia rate per unit mass is also higher in the less massive galaxies that are also younger. These results suggest a distribution of the delay times (DTD) less populated at long delay times than at short delays. The CC SN rate per unit mass is proportional to both the sSFR and the galaxy

  19. Structural reliability codes for probabilistic design

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1997-01-01

    probabilistic code format has not only strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can be different by several orders of size for two...... different, but by and large equally justifiable probabilistic code formats. Thus, the consequence is that a code format based on decision theoretical concepts and formulated as an extension of a probabilistic code format must specify formal values to be used as costs of failure. A principle of prudence...... is suggested for guiding the choice of the reference probabilistic code format for constant reliability. In the author's opinion there is an urgent need for establishing a standard probabilistic reliability code. This paper presents some considerations that may be debatable, but nevertheless point...

  20. Point process analyses of variations in smoking rate by setting, mood, gender, and dependence

    Science.gov (United States)

    Shiffman, Saul; Rathbun, Stephen L.

    2010-01-01

    The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683

  1. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    Science.gov (United States)

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
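
    The claim that softmax units compute exact posteriors when biases and weights carry log-probabilities can be checked numerically in a few lines; the numbers below are arbitrary.

        # Minimal numerical check: a softmax over (log prior + log likelihood)
        # reproduces the Bayesian posterior over hypotheses.
        import numpy as np

        prior = np.array([0.7, 0.2, 0.1])            # P(h) for three hypotheses
        likelihood = np.array([0.05, 0.60, 0.30])    # P(data | h)

        # Bayes' rule directly
        posterior = prior * likelihood
        posterior /= posterior.sum()

        # "Neural" computation: bias = log prior, input = log likelihood, softmax output
        net_input = np.log(prior) + np.log(likelihood)
        softmax = np.exp(net_input) / np.exp(net_input).sum()

        print(posterior)
        print(np.allclose(posterior, softmax))       # True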

  2. Probabilistic analysis of flaw distribution on structure under cyclic load

    International Nuclear Information System (INIS)

    Kwak, Sang Log; Choi, Young Hwan; Kim, Hho Jung

    2003-01-01

    Flaw geometries, applied stress, and material properties are major input variables for fracture mechanics analysis. A probabilistic approach can be applied to account for the uncertainties in these input variables, but probabilistic analysis requires many assumptions due to the lack of initial flaw distribution data. In this study, correlations are examined between initial flaw distributions and in-service flaw distributions on structures under cyclic load. For the analysis, LEFM theories and Monte Carlo simulation are applied. Results show that in-service flaw distributions are determined by the initial flaw distributions rather than by the fatigue crack growth rate, so the initial flaw distribution can be derived from in-service flaw distributions.
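
    The type of calculation described, propagating an assumed initial flaw-depth distribution through Paris-law fatigue growth by Monte Carlo, can be sketched as follows. All distributions and constants are illustrative, not taken from the study.

        # Monte Carlo propagation of an initial flaw-depth distribution through
        # Paris-law growth, da/dN = C*(dK)^m with dK = Y*dS*sqrt(pi*a).
        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000
        a0 = rng.lognormal(mean=np.log(1.0e-3), sigma=0.5, size=n)   # initial depth (m)
        C  = rng.lognormal(mean=np.log(5.0e-12), sigma=0.3, size=n)  # Paris coefficient
        m  = rng.normal(3.0, 0.15, size=n)                           # Paris exponent
        Y, dS, cycles, dN = 1.12, 100.0, 200_000, 1_000              # geometry, stress range (MPa), cycles, step

        a = a0.copy()
        for _ in range(cycles // dN):
            dK = Y * dS * np.sqrt(np.pi * a)          # stress intensity range (MPa*sqrt(m))
            a += C * dK**m * dN                       # explicit growth over dN cycles

        print("median in-service depth (mm):", np.median(a) * 1e3)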

  3. Probabilistic modeling of fatigue crack growth in Ti-6Al-4V

    International Nuclear Information System (INIS)

    Soboyejo, W.O.; Shen, W.; Soboyejo, A.B.O.

    2001-01-01

    This paper presents the results of a combined experimental and analytical study of the probabilistic nature of fatigue crack growth in Ti-6Al-4V. A simple experimental fracture mechanics framework is presented for the determination of statistical fatigue crack growth parameters from two fatigue tests. The experimental studies show that the variabilities in long fatigue crack growth rate data and the Paris coefficient are well described by the log-normal distributions. The variabilities in the Paris exponent are also shown to be well characterized by a normal distribution. The measured statistical distributions are incorporated into a probabilistic fracture mechanics framework for the estimation of material reliability. The implications of the results are discussed for the probabilistic analysis of fatigue crack growth in engineering components and structures. (orig.)

  4. Confluence Reduction for Probabilistic Systems (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2010-01-01

    This paper presents a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We prove that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To support the

  5. Super-transient scaling in time-delay autonomous Boolean network motifs

    Energy Technology Data Exchange (ETDEWEB)

    D'Huys, Otti, E-mail: otti.dhuys@phy.duke.edu; Haynes, Nicholas D. [Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Lohmann, Johannes [Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Gauthier, Daniel J. [Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Department of Physics, The Ohio State University, Columbus, Ohio 43210 (United States)

    2016-09-15

    Autonomous Boolean networks are commonly used to model the dynamics of gene regulatory networks and allow for the prediction of stable dynamical attractors. However, most models do not account for time delays along the network links and noise, which are crucial features of real biological systems. Concentrating on two paradigmatic motifs, the toggle switch and the repressilator, we develop an experimental testbed that explicitly includes both inter-node time delays and noise using digital logic elements on field-programmable gate arrays. We observe transients that last millions to billions of characteristic time scales and scale exponentially with the amount of time delays between nodes, a phenomenon known as super-transient scaling. We develop a hybrid model that includes time delays along network links and allows for stochastic variation in the delays. Using this model, we explain the observed super-transient scaling of both motifs and recreate the experimentally measured transient distributions.

  6. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced statistical sampling methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as Boolean expressions) or other systems, and several of their intrinsic parameters and/or input variables are usually treated simultaneously as random variables. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
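
    A simplified illustration of the doubling idea: for each variable, the n old strata are split in two, the old point occupies one child stratum, and a new point is drawn in the empty sibling; the pairing of new coordinates across dimensions is here a fresh random permutation. This is a sketch of the general principle, not the paper's exact scheme.

        import numpy as np

        def lhs(n, d, rng):
            # Basic Latin Hypercube sample of n points in [0, 1]^d.
            strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
            return (strata + rng.random((n, d))) / n

        def double_lhs(old, rng):
            n, d = old.shape
            new_cols = []
            for j in range(d):
                occupied = np.floor(old[:, j] * 2 * n).astype(int)      # child strata already taken
                empty = np.setdiff1d(np.arange(2 * n), occupied)        # the n free strata
                vals = (empty + rng.random(n)) / (2 * n)                # one new point per free stratum
                new_cols.append(rng.permutation(vals))                  # re-pair across dimensions
            return np.vstack([old, np.column_stack(new_cols)])          # 2n-point design

        rng = np.random.default_rng(4)
        X = lhs(8, 3, rng)          # run the model on these 8 points first ...
        X2 = double_lhs(X, rng)     # ... then only the 8 new rows need new model runs
        print(X2.shape)             # (16, 3)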

  7. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is however proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
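
    To make the "promising" criterion concrete, the sketch below estimates conditional power at an interim look by simulation, for a one-sided z-test with known variance, under the common assumption that the true effect equals the observed interim effect. The sample sizes, threshold and critical value are illustrative and not taken from the article.

        import numpy as np

        rng = np.random.default_rng(5)
        sigma, n1, n2 = 1.0, 50, 50        # interim sample size, planned second stage
        z_crit = 1.96                      # one-sided critical value for alpha = 0.025

        x1 = rng.normal(0.2, sigma, n1)    # interim data (true effect unknown to the analyst)
        delta_hat = x1.mean()              # observed interim effect

        sims, reject = 20_000, 0
        for _ in range(sims):
            x2 = rng.normal(delta_hat, sigma, n2)      # second stage under the interim trend
            x = np.concatenate([x1, x2])
            reject += x.mean() / (sigma / np.sqrt(n1 + n2)) > z_crit

        cond_power = reject / sims
        print(f"conditional power: {cond_power:.2f}  -> promising: {cond_power > 0.5}")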

  8. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available Recent advances in mobile systems and sensor networks demand more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded system design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. It is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics—which are usually never considered—to filter only the relevant signal parts, by employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, when compared to classical techniques.
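
    A minimal sketch of level-crossing sampling: a sample (time, level) is recorded only when the signal crosses one of a set of uniformly spaced levels, so the effective sampling rate follows the signal's local activity. The signal and level set below are arbitrary test values, not the paper's configuration.

        import numpy as np

        def level_crossing_sample(t, x, levels):
            samples = []
            for i in range(1, len(x)):
                lo, hi = sorted((x[i - 1], x[i]))
                for L in levels[(levels > lo) & (levels <= hi)]:
                    # linear interpolation of the crossing instant
                    tc = t[i - 1] + (t[i] - t[i - 1]) * (L - x[i - 1]) / (x[i] - x[i - 1])
                    samples.append((tc, L))
            return samples

        t = np.linspace(0.0, 1.0, 10_000)
        x = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)      # decaying test signal
        levels = np.linspace(-1.0, 1.0, 17)                 # uniformly spaced quantization levels
        s = level_crossing_sample(t, x, levels)
        print(len(s), "samples instead of", len(t))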

  9. Learning on probabilistic manifolds in massive fusion databases: Application to confinement regime identification

    International Nuclear Information System (INIS)

    Verdoolaege, Geert; Van Oost, Guido

    2012-01-01

    Highlights: ► We present an integrated framework for pattern recognition in fusion data. ► We model measurement uncertainty through an appropriate probability distribution. ► We use the geodesic distance on probabilistic manifolds as a similarity measure. ► We apply the framework to confinement mode classification. ► The classification accuracy benefits from uncertainty information and its geometry. - Abstract: We present an integrated framework for (real-time) pattern recognition in fusion data. The main premise is the inherent probabilistic nature of measurements of plasma quantities. We propose the geodesic distance on probabilistic manifolds as a similarity measure between data points. Substructure induced by data dependencies may further reduce the dimensionality and redundancy of the data set. We present an application to confinement mode classification, showing the distinct advantage obtained by considering the measurement uncertainty and its geometry.
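
    As a concrete instance of a geodesic distance on a probabilistic manifold, the Fisher-Rao distance between two univariate Gaussians has a closed form (the Fisher metric is hyperbolic in this case). This is a generic illustration of the similarity measure, not the paper's full framework.

        import numpy as np

        def fisher_rao_gaussian(mu1, s1, mu2, s2):
            # Geodesic (Fisher-Rao) distance between N(mu1, s1^2) and N(mu2, s2^2).
            num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
            return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * s1 * s2))

        print(fisher_rao_gaussian(0.0, 1.0, 0.0, 1.0))   # 0.0  (identical distributions)
        print(fisher_rao_gaussian(0.0, 1.0, 1.0, 1.0))   # mean shift
        print(fisher_rao_gaussian(0.0, 1.0, 0.0, 2.0))   # shape (uncertainty) differences also count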

  10. Determination of the temperature dependence of tungsten erosion

    International Nuclear Information System (INIS)

    Maier, H.; Greuner, H.; Toussaint, U. von; Balden, M.; Böswirth, B.; Elgeti, S.

    2015-01-01

    We present the results of erosion measurements on actively cooled tungsten samples at quasi-constant surface temperature conditions performed in the high heat flux facility GLADIS. The samples were exposed to a H beam at a central power density of 10 MW/m² up to a fluence of 10²⁶ m⁻². We observe a weak temperature dependence of the erosion yield. The data are compared with similar data obtained from loading with a H beam with He admixture. Both datasets are analysed in a probabilistic approach. We obtain activation energies of 0.04 eV and 0.06 eV for the cases with and without He, respectively.

  11. A probabilistic sampling method (PSM for estimating geographic distance to health services when only the region of residence is known

    Directory of Open Access Journals (Sweden)

    Peek-Asa Corinne

    2011-01-01

    Full Text Available Abstract Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance

  12. A probabilistic sampling method (PSM) for estimating geographic distance to health services when only the region of residence is known

    Science.gov (United States)

    2011-01-01

    Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance estimates. The algorithm
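
    The core of the PSM as described in the two records above can be sketched as follows: draw census blocks within a participant's ZIP Code with probability proportional to the number of same-age residents, compute the distance from each sampled block centroid to the nearest facility, and summarize the resulting distribution. The block data are hypothetical, and straight-line distance stands in for the network distance used in the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical block centroids (km), same-age populations, and facility sites
        blocks = rng.uniform(0, 20, size=(120, 2))
        age_group_pop = rng.integers(0, 80, size=120)
        facilities = np.array([[2.0, 3.0], [15.0, 12.0]])

        weights = age_group_pop / age_group_pop.sum()
        draw = rng.choice(len(blocks), size=5_000, replace=True, p=weights)

        # Distance from each sampled block to its nearest facility
        d = np.linalg.norm(blocks[draw, None, :] - facilities[None, :, :], axis=2).min(axis=1)

        median = np.median(d)
        lo, hi = np.percentile(d, [2.5, 97.5])
        print(f"most likely distance: {median:.1f} km  (95% interval {lo:.1f}-{hi:.1f} km)")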

  13. Applying probabilistic well-performance parameters to assessments of shale-gas resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy

    2010-01-01

    In assessing continuous oil and gas resources, such as shale gas, it is important to describe not only the ultimately producible volumes, but also the expected well performance. This description is critical to any cost analysis or production scheduling. A probabilistic approach facilitates (1) the inclusion of variability in well performance within a continuous accumulation, and (2) the use of data from developed accumulations as analogs for the assessment of undeveloped accumulations. In assessing continuous oil and gas resources of the United States, the U.S. Geological Survey analyzed production data from many shale-gas accumulations. Analyses of four of these accumulations (the Barnett, Woodford, Fayetteville, and Haynesville shales) are presented here as examples of the variability of well performance. For example, the distribution of initial monthly production rates for Barnett vertical wells shows a noticeable change with time, first increasing because of improved completion practices, then decreasing from a combination of decreased reservoir pressure (in infill wells) and drilling in less productive areas. Within a partially developed accumulation, historical production data from that accumulation can be used to estimate production characteristics of undrilled areas. An understanding of the probabilistic relations between variables, such as between initial production and decline rates, can improve estimates of ultimate production. Time trends or spatial trends in production data can be clarified by plots and maps. The data can also be divided into subsets depending on well-drilling or well-completion techniques, such as vertical in relation to horizontal wells. For hypothetical or lightly developed accumulations, one can either make comparisons to a specific well-developed accumulation or to the entire range of available developed accumulations. Comparison of the distributions of initial monthly production rates of the four shale-gas accumulations that were

  14. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    Directory of Open Access Journals (Sweden)

    Megan A Cummins

    2014-03-01

    Full Text Available Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of

  15. A bound for the convergence rate of parallel tempering for sampling restricted Boltzmann machines

    DEFF Research Database (Denmark)

    Fischer, Asja; Igel, Christian

    2015-01-01

    on sampling. Parallel tempering (PT), an MCMC method that maintains several replicas of the original chain at higher temperatures, has been successfully applied for RBM training. We present the first analysis of the convergence rate of PT for sampling from binary RBMs. The resulting bound on the rate...... of convergence of the PT Markov chain shows an exponential dependency on the size of one layer and the absolute values of the RBM parameters. It is minimized by a uniform spacing of the inverse temperatures, which is often used in practice. Similarly as in the derivation of bounds on the approximation error...... for contrastive divergence learning, our bound on the mixing time implies an upper bound on the error of the gradient approximation when the method is used for RBM training....
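
    For orientation, the replica-swap step that defines parallel tempering (stated generically here, not as the bound derived in the paper) accepts an exchange of the states x_i and x_j of two replicas at inverse temperatures beta_i and beta_j with probability

        p_swap = min{ 1, exp[ (beta_i - beta_j) * (E(x_i) - E(x_j)) ] }

    where E is the RBM energy function; the uniform spacing of the inverse temperatures mentioned above refers to the choice of the beta values.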

  16. Probabilistic finite elements for fracture mechanics

    Science.gov (United States)

    Besterfield, Glen

    1988-01-01

    The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as the expectation, covariance, and correlation of the stress intensity factors, are calculated for random load, random material, and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.

  17. Validation of the probabilistic approach for the analysis of PWR transients

    International Nuclear Information System (INIS)

    Amesz, J.; Francocci, G.F.; Clarotti, C.

    1978-01-01

    This paper reviews the pilot study at present being carried out on the validation of probabilistic methodology with real data coming from the operational records of the PWR power station at Obrigheim (KWO, Germany) operating since 1969. The aim of this analysis is to validate the a priori predictions of reactor transients performed by a probabilistic methodology with the a posteriori analysis of transients that actually occurred at a power station. Two levels of validation have been distinguished: (a) validation of the rate of occurrence of initiating events; (b) validation of the transient-parameter amplitude (i.e., overpressure) caused by the above mentioned initiating events. The paper describes the a priori calculations performed using a fault-tree analysis by means of a probabilistic code (SALP 3) and event-trees coupled with a PWR system deterministic computer code (LOOP 7). Finally, the principal results of these analyses are presented and critically reviewed.

  18. Application of simulation techniques in the probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    De Ruyter van Steveninck, J.L.

    1995-03-01

    Monte Carlo simulation is applied to a fracture mechanics model in order to assess the applicability of this simulation technique in probabilistic fracture mechanics. By means of the fracture mechanics model, the brittle fracture of a steel container or pipe with defects can be predicted. The Monte Carlo simulation also allows the uncertainty regarding failure to be determined. Based on the variations in fracture toughness and defect dimensions, the distribution of the failure probability is determined. Attention is also paid to the impact of dependency between uncertain variables. Furthermore, the influence of the applied distributions of the uncertain variables and of non-destructive inspection on the failure probability is analyzed. The Monte Carlo simulation results agree quite well with the results of other methods from probabilistic fracture mechanics. If an analytic expression can be found for the failure probability, it is possible to determine the variation of the failure probability in addition to an estimate of the failure probability itself. It also appears that the dependency between the uncertain variables has a large impact on the failure probability. It is also concluded from the simulation that the failure probability strongly depends on the crack depth, and therefore on the distribution of the crack depth. 15 figs., 7 tabs., 12 refs.
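
    The kind of Monte Carlo failure-probability estimate discussed can be sketched as follows: sample fracture toughness and defect depth from assumed distributions and count the cases where the applied stress intensity exceeds the toughness. All distributions and values are illustrative, not the study's inputs.

        # Failure if K_I = Y * sigma * sqrt(pi * a) >= K_Ic.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 1_000_000

        K_Ic  = rng.normal(120.0, 15.0, n)        # fracture toughness (MPa*sqrt(m))
        a     = rng.exponential(5.0e-3, n)        # defect depth (m)
        sigma = 400.0                             # applied stress (MPa)
        Y     = 1.12                              # geometry factor

        K_I = Y * sigma * np.sqrt(np.pi * a)
        p_fail = np.mean(K_I >= K_Ic)
        print(f"estimated failure probability: {p_fail:.2e}")   # roughly 1e-2 with these inputs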

  19. A tube seepage meter for in situ measurement of seepage rate and groundwater sampling

    Science.gov (United States)

    Solder, John; Gilmore, Troy E.; Genereux, David P.; Solomon, D. Kip

    2016-01-01

    We designed and evaluated a “tube seepage meter” for point measurements of vertical seepage rates (q), collecting groundwater samples, and estimating vertical hydraulic conductivity (K) in streambeds. Laboratory testing in artificial streambeds show that seepage rates from the tube seepage meter agreed well with expected values. Results of field testing of the tube seepage meter in a sandy-bottom stream with a mean seepage rate of about 0.5 m/day agreed well with Darcian estimates (vertical hydraulic conductivity times head gradient) when averaged over multiple measurements. The uncertainties in q and K were evaluated with a Monte Carlo method and are typically 20% and 60%, respectively, for field data, and depend on the magnitude of the hydraulic gradient and the uncertainty in head measurements. The primary advantages of the tube seepage meter are its small footprint, concurrent and colocated assessments of q and K, and that it can also be configured as a self-purging groundwater-sampling device.

  20. Predictors of social anxiety in an opioid dependent sample and a control sample.

    Science.gov (United States)

    Shand, Fiona L; Degenhardt, Louisa; Nelson, Elliot C; Mattick, Richard P

    2010-01-01

    Compared to other mental health problems, social anxiety is under-acknowledged amongst opioid dependent populations. This study aimed to assess levels of social anxiety and identify its predictors in an opioid dependent sample and a matched control group. Opioid dependent participants (n=1385) and controls (n=417) completed the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS) and a diagnostic interview. Regression analyses were used to test a range of predictors of social anxiety. Opioid dependent cases had higher mean scores on both scales compared to controls. Predictors of social anxiety centred on emotional rejection in childhood, either by parents or peers. For opioid dependent cases, but not controls, lifetime non-opioid substance dependence (cannabis, sedatives, and tobacco) was associated with higher levels of social anxiety. However, much of the variance in social anxiety remains unexplained for this population.

  1. Reinforcement magnitude modulation of rate dependent effects in pigeons and rats.

    Science.gov (United States)

    Ginsburg, Brett C; Pinkston, Jonathan W; Lamb, R J

    2011-08-01

    Response rate can influence the behavioral effects of many drugs. Reinforcement magnitude may also influence drug effects. Further, reinforcement magnitude can influence rate-dependent effects. For example, in an earlier report, we showed that rate-dependent effects of two antidepressants depended on reinforcement magnitude. The ability of reinforcement magnitude to interact with rate-dependency has not been well characterized. It is not known whether our previous results are specific to antidepressants or generalize to other drug classes. Here, we further examine rate-magnitude interactions by studying effects of two stimulants (d-amphetamine [0.32-5.6 mg/kg] and cocaine [0.32-10 mg/kg]) and two sedatives (chlordiazepoxide [1.78-32 mg/kg] and pentobarbital [1.0-17.8 mg/kg]) in pigeons responding under a 3-component multiple fixed-interval (FI) 300-s schedule maintained by 2-, 4-, or 8-s of food access. We also examine the effects of d-amphetamine [0.32-3.2 mg/kg] and pentobarbital [1.8-10 mg/kg] in rats responding under a similar multiple FI300-s schedule maintained by 2- or 10- food pellet (45 mg) delivery. In pigeons, cocaine and, to a lesser extent, chlordiazepoxide exerted rate-dependent effects that were diminished by increasing durations of food access. The relationship was less apparent for pentobarbital, and not present for d-amphetamine. In rats, rate-dependent effects of pentobarbital and d-amphetamine were not modulated by reinforcement magnitude. In conclusion, some drugs appear to exert rate-dependent effects which are diminished when reinforcement magnitude is relatively high. Subsequent analysis of the rate-dependency data suggests that the effects of reinforcement magnitude may be due to a diminution of drug-induced increases in low-rate behavior that occurs early in the fixed-interval. (c) 2011 APA, all rights reserved.

  2. Rats bred for high alcohol drinking are more sensitive to delayed and probabilistic outcomes.

    Science.gov (United States)

    Wilhelm, C J; Mitchell, S H

    2008-10-01

    Alcoholics and heavy drinkers score higher on measures of impulsivity than nonalcoholics and light drinkers. This may be because of factors that predate drug exposure (e.g. genetics). This study examined the role of genetics by comparing impulsivity measures in ethanol-naive rats selectively bred based on their high [high alcohol drinking (HAD)] or low [low alcohol drinking (LAD)] consumption of ethanol. Replicates 1 and 2 of the HAD and LAD rats, developed by the University of Indiana Alcohol Research Center, completed two different discounting tasks. Delay discounting examines sensitivity to rewards that are delayed in time and is commonly used to assess 'choice' impulsivity. Probability discounting examines sensitivity to the uncertain delivery of rewards and has been used to assess risk taking and risk assessment. High alcohol drinking rats discounted delayed and probabilistic rewards more steeply than LAD rats. Discount rates associated with probabilistic and delayed rewards were weakly correlated, while bias was strongly correlated with discount rate in both delay and probability discounting. The results suggest that selective breeding for high alcohol consumption selects for animals that are more sensitive to delayed and probabilistic outcomes. Sensitivity to delayed or probabilistic outcomes may be predictive of future drinking in genetically predisposed individuals.
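
    For reference, delay and probability discounting in such studies are typically quantified with hyperbolic forms (the generic Mazur/Rachlin models; the exact fitting procedure used in this study may differ):

        V = A / (1 + k * D)                           (delayed reward of amount A at delay D)
        V = A / (1 + h * theta),  theta = (1 - p) / p (probabilistic reward with odds against theta)

    Larger k and larger h correspond to steeper discounting of delayed and probabilistic rewards, respectively, which is the sense in which the HAD rats were more sensitive to these outcomes.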

  3. Probabilistic Mu-Calculus

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Mardare, Radu Iulian; Xue, Bingtian

    2016-01-01

    We introduce a version of the probabilistic µ-calculus (PMC) built on top of a probabilistic modal logic that allows encoding n-ary inequational conditions on transition probabilities. PMC extends previously studied calculi and we prove that, despite its expressiveness, it enjoys a series of good...... metaproperties. Firstly, we prove the decidability of satisfiability checking by establishing the small model property. An algorithm for deciding the satisfiability problem is developed. As a second major result, we provide a complete axiomatization for the alternation-free fragment of PMC. The completeness proof...

  4. Use of probabilistic methods for analysis of cost and duration uncertainties in a decision analysis framework

    International Nuclear Information System (INIS)

    Boak, D.M.; Painton, L.

    1995-01-01

    Probabilistic forecasting techniques have been used in many risk assessment and performance assessment applications on radioactive waste disposal projects such as Yucca Mountain and the Waste Isolation Pilot Plant (WIPP). Probabilistic techniques such as Monte Carlo and Latin Hypercube sampling methods are routinely used to treat uncertainties in physical parameters important in simulating radionuclide transport in a coupled geohydrologic system and assessing the ability of that system to comply with regulatory release limits. However, the use of probabilistic techniques in the treatment of uncertainties in the cost and duration of programmatic alternatives on risk and performance assessment projects is less common. Where significant uncertainties exist and where programmatic decisions must be made despite existing uncertainties, probabilistic techniques may yield important insights into decision options, especially when used in a decision analysis framework and when properly balanced with deterministic analyses. For relatively simple evaluations, these types of probabilistic evaluations can be made using personal computer-based software
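
    The kind of programmatic cost/duration evaluation described can be sketched with simple Monte Carlo draws from triangular distributions for task costs and durations, propagated to totals. The tasks and numbers below are illustrative, not drawn from any project cited above.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 100_000

        tasks = {                      # (low, mode, high) for cost (M$) and duration (months)
            "characterization": ((4, 6, 10), (10, 14, 24)),
            "design":           ((8, 12, 20), (12, 18, 30)),
            "construction":     ((20, 30, 55), (24, 36, 60)),
        }

        total_cost = np.zeros(n)
        total_months = np.zeros(n)
        for (c_lo, c_m, c_hi), (d_lo, d_m, d_hi) in tasks.values():
            total_cost += rng.triangular(c_lo, c_m, c_hi, n)
            total_months += rng.triangular(d_lo, d_m, d_hi, n)   # tasks assumed sequential

        print("cost P50 / P90 (M$):     ", np.percentile(total_cost, [50, 90]).round(1))
        print("duration P50 / P90 (mo): ", np.percentile(total_months, [50, 90]).round(1))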

  5. Generalized Boolean logic Driven Markov Processes: A powerful modeling framework for Model-Based Safety Analysis of dynamic repairable and reconfigurable systems

    International Nuclear Information System (INIS)

    Piriou, Pierre-Yves; Faure, Jean-Marc; Lesage, Jean-Jacques

    2017-01-01

    This paper presents a modeling framework that permits to describe in an integrated manner the structure of the critical system to analyze, by using an enriched fault tree, the dysfunctional behavior of its components, by means of Markov processes, and the reconfiguration strategies that have been planned to ensure safety and availability, with Moore machines. This framework has been developed from BDMP (Boolean logic Driven Markov Processes), a previous framework for dynamic repairable systems. First, the contribution is motivated by pinpointing the limitations of BDMP to model complex reconfiguration strategies and the failures of the control of these strategies. The syntax and semantics of GBDMP (Generalized Boolean logic Driven Markov Processes) are then formally defined; in particular, an algorithm to analyze the dynamic behavior of a GBDMP model is developed. The modeling capabilities of this framework are illustrated on three representative examples. Last, qualitative and quantitative analysis of GDBMP models highlight the benefits of the approach.

  6. Quantum probabilistic logic programming

    Science.gov (United States)

    Balu, Radhakrishnan

    2015-05-01

    We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature for capturing relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM, with classical probability measures defined on the Herbrand base, and extending it to the quantum context. In the classical case, H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples combining statistical ensembles and predicates of first-order logic to reason about situations involving uncertainty.

  7. Improved Point-source Detection in Crowded Fields Using Probabilistic Cataloging

    Science.gov (United States)

    Portillo, Stephen K. N.; Lee, Benjamin C. G.; Daylan, Tansu; Finkbeiner, Douglas P.

    2017-10-01

    Cataloging is challenging in crowded fields because sources are extremely covariant with their neighbors and blending makes even the number of sources ambiguous. We present the first optical probabilistic catalog, cataloging a crowded (~0.1 sources per pixel brighter than 22nd mag in F606W) Sloan Digital Sky Survey r-band image from M2. Probabilistic cataloging returns an ensemble of catalogs inferred from the image and thus can capture source-source covariance and deblending ambiguities. By comparing to a traditional catalog of the same image and a Hubble Space Telescope catalog of the same region, we show that our catalog ensemble better recovers sources from the image. It goes more than a magnitude deeper than the traditional catalog while having a lower false-discovery rate brighter than 20th mag. We also present an algorithm for reducing this catalog ensemble to a condensed catalog that is similar to a traditional catalog, except that it explicitly marginalizes over source-source covariances and nuisance parameters. We show that this condensed catalog has a similar completeness and false-discovery rate to the catalog ensemble. Future telescopes will be more sensitive, and thus more of their images will be crowded. Probabilistic cataloging performs better than existing software in crowded fields and so should be considered when creating photometric pipelines in the Large Synoptic Survey Telescope era.

  8. Formalizing Probabilistic Safety Claims

    Science.gov (United States)

    Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.

    2011-01-01

    A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.
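
    The probabilistic composition referred to above can be illustrated with elementary arithmetic: given several hazardous conditions with known occurrence probabilities and known conditional probabilities of violating the safety property, one can bound or (under an independence assumption) compute the overall violation probability. All numbers below are invented, and the paper itself develops this composition formally in a theorem prover rather than numerically.

      # Hedged sketch: composing hazardous conditions into a probability that a
      # safety property is violated. Numbers are invented for illustration.
      p_hazard  = [1e-4, 5e-5, 2e-3]    # P(hazard_i occurs)
      p_viol_if = [1e-2, 1e-1, 1e-3]    # P(property violated | hazard_i)

      union_bound = sum(p * q for p, q in zip(p_hazard, p_viol_if))

      p_ok = 1.0
      for p, q in zip(p_hazard, p_viol_if):   # exact value if hazards are independent
          p_ok *= 1.0 - p * q
      exact = 1.0 - p_ok

      print(f"union bound on P(violation): {union_bound:.3e}")
      print(f"exact (independent hazards): {exact:.3e}")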

  9. Pulse and integral optically stimulated luminescence (OSL). Similarities and dissimilarities to thermoluminescence (TL) dose dependence and dose-rate effects

    International Nuclear Information System (INIS)

    Chen, R.; Leung, P.L.

    2000-01-01

    Optically stimulated luminescence (OSL) and thermoluminescence (TL) are two possible methods to monitor the absorbed radiation in solid samples, and therefore are utilized for dosimetry. For this application, two properties are desirable, namely, linear dose dependence of the measured quantity and dose-rate independence. For TL, different kinds of superlinear dose dependence have been reported in the literature for different materials, and in some cases, dose-rate dependence has also been found. These have been explained as the result of competition. In OSL, some recent works have reported superlinear dose dependence in annealed samples. In the present work, we explain the possible occurrence of these phenomena in OSL by solving numerically the relevant rate equations governing the process during irradiation, relaxation and read-out (heating or light stimulation). The results show that for short-pulse OSL, quadratic dose dependence can be expected when only one trapping state and one kind of recombination center are involved and when the excitation starts with empty traps and centers. With the short-pulse OSL, the calculation also reveals a possible dose-rate effect. Under the same circumstances, the area under the OSL curve depends linearly on the dose. The dependence of the whole area under the OSL curve on the dose is shown to be superlinear when a disconnected trapping state or a radiationless center takes part in the process. Also, a dose-rate effect can be expected in these cases, although no experimental effect of this sort has been reported so far. In pulse OSL, the analogy is made between the measured intensity and the initial-rise range of non-first-order TL, whereas for the total-area OSL, there is a nearly full analogy with the dose behavior of the TL maximum. (Author)
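
    The rate-equation approach described above can be sketched for the simplest case of one trapping state and one recombination center (the so-called OTOR model) during optical read-out under constant stimulation. All parameter values below are invented for illustration; they are not the values used in the paper.

      # Hedged sketch: OSL read-out stage of a one-trap/one-center model,
      # integrated numerically. Parameter values are illustrative only.
      import numpy as np
      from scipy.integrate import solve_ivp, trapezoid

      N  = 1e10     # trap concentration (cm^-3)
      An = 1e-8     # retrapping coefficient (cm^3 s^-1)
      Am = 1e-7     # recombination coefficient (cm^3 s^-1)
      f  = 0.5      # optical excitation rate of trapped electrons (s^-1)

      def rhs(t, y):
          n, nc, m = y                     # trapped e-, conduction-band e-, holes in centers
          dn  = -f * n + An * (N - n) * nc
          dnc =  f * n - An * (N - n) * nc - Am * m * nc
          dm  = -Am * m * nc
          return [dn, dnc, dm]

      n0 = 1e8                             # traps filled by the prior irradiation (assumed)
      t = np.linspace(0.0, 100.0, 500)
      sol = solve_ivp(rhs, (0.0, 100.0), [n0, 0.0, n0], method="LSODA", t_eval=t, rtol=1e-8)

      osl = Am * sol.y[2] * sol.y[1]       # OSL intensity ~ instantaneous recombination rate
      print("area under the OSL curve:", trapezoid(osl, t))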

  10. Variate generation for probabilistic fracture mechanics and fitness-for-service studies

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Atomic Energy of Canada Limited is conducting studies in Probabilistic Fracture Mechanics. These studies are being conducted as part of a fitness-for-service programme in support of CANDU reactors. The Monte Carlo analyses, which form part of the Probabilistic Fracture Mechanics studies, require that variates can be sampled from probability density functions. Accurate pseudo-random numbers are necessary for accurate variate generation. This report details the principles of variate generation, and describes the production and testing of pseudo-random numbers. A new algorithm has been produced for the correct performance of the lattice test for the independence of pseudo-random numbers. Two new pseudo-random number generators have been produced. These generators have excellent randomness properties and can be made fully machine-independent. Versions, in FORTRAN, for VAX and CDC computers are given. Accurate and efficient algorithms for the generation of variates from the specialized probability density functions of Probabilistic Fracture Mechanics are given. 38 refs
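
    The two building blocks described in the report, a pseudo-random number generator and a transformation of uniform deviates into variates from a target probability density, can be sketched as follows. The multiplicative congruential constants (the classic Park-Miller choice) and the Weibull target are illustrative assumptions, not the generators or densities of the report.

      # Hedged sketch: a multiplicative congruential generator feeding
      # inverse-transform variate generation for a Weibull target density.
      import math

      class LCG:
          """Multiplicative congruential generator x <- a*x mod m (Park-Miller constants)."""
          def __init__(self, seed=12345, a=16807, m=2**31 - 1):
              self.x, self.a, self.m = seed, a, m
          def uniform(self):
              self.x = (self.a * self.x) % self.m
              return self.x / self.m                  # pseudo-random U(0, 1)

      def weibull_variate(u, shape=2.0, scale=1.0):
          # Inverse CDF of the Weibull distribution: F^-1(u) = scale * (-ln(1-u))^(1/shape)
          return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

      rng = LCG()
      sample = [weibull_variate(rng.uniform()) for _ in range(10000)]
      print("sample mean:", sum(sample) / len(sample))  # ~ scale*Gamma(1+1/shape) = 0.886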

  11. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

  12. Probabilistic Infinite Secret Sharing

    OpenAIRE

    Csirmaz, László

    2013-01-01

    The study of probabilistic secret sharing schemes using arbitrary probability spaces and possibly infinite number of participants lets us investigate abstract properties of such schemes. It highlights important properties, explains why certain definitions work better than others, connects this topic to other branches of mathematics, and might yield new design paradigms. A probabilistic secret sharing scheme is a joint probability distribution of the shares and the secret together with a colle...

  13. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
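
    For contrast with the bias-corrected inductive procedure developed in the paper, the sketch below shows the much cruder textbook route to a random quasi-order: draw a random binary relation and close it reflexively and transitively. It illustrates only the object being sampled; closure-based sampling of this kind is exactly the sort of biased sampling the paper improves upon.

      # Hedged sketch: produce *a* quasi-order (reflexive, transitive relation) by
      # transitively closing a random relation. This is NOT the paper's uniform,
      # bias-corrected sampler; closure-based sampling is biased.
      import random

      def random_quasi_order(n_items, density=0.2, seed=None):
          rng = random.Random(seed)
          rel = [[i == j or rng.random() < density for j in range(n_items)]
                 for i in range(n_items)]            # reflexive random relation
          for k in range(n_items):                   # Warshall transitive closure
              for i in range(n_items):
                  if rel[i][k]:
                      for j in range(n_items):
                          if rel[k][j]:
                              rel[i][j] = True
          return rel

      q = random_quasi_order(6, seed=1)
      print(sum(map(sum, q)), "pairs in a quasi-order on 6 items")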

  14. Toward a Principled Sampling Theory for Quasi-Orders

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601

  15. Graphical models for inference under outcome-dependent sampling

    DEFF Research Database (Denmark)

    Didelez, V; Kreiner, S; Keiding, N

    2010-01-01

    We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including a node for the sampling indicator, assumptions about sampling processes can be made explicit. We demonstrate how to read off such graphs whether consistent estimation of the association between exposure and outcome is possible. Moreover, we give sufficient graphical conditions for testing and estimating…

  16. Prescribed burning impact on forest soil properties--a Fuzzy Boolean Nets approach.

    Science.gov (United States)

    Castro, Ana C Meira; Paulo Carvalho, Joao; Ribeiro, S

    2011-02-01

    The Portuguese northern forests are often and severely affected by wildfires during the summer season. These occurrences significantly and negatively impact all ecosystems, namely soil, fauna and flora. In order to reduce the occurrence of natural wildfires, measures to control the availability of fuel mass are regularly implemented; these preventive actions mainly concern prescribed burnings and vegetation pruning. This work reports on the impact of a prescribed burning on several forest soil properties, namely pH, soil moisture, organic matter content and iron content, by monitoring the soil's self-recovery capability over a one-year span. The experiments were carried out on soil cover over a natural site of Andaluzitic schist in Gramelas, Caminha, Portugal, which had been kept free from prescribed burnings for a period of four years. Soil samples were collected from five plots at three different layers (0-3, 3-6 and 6-18) one day before the prescribed fire and at regular intervals after it. This paper presents an approach in which Fuzzy Boolean Nets (FBN) and fuzzy reasoning are used to extract qualitative knowledge regarding the effect of prescribed burning on soil properties. FBN were chosen due to the scarcity of available quantitative data. The results showed that the soil properties were affected by the prescribed burning practice and were unable to recover their initial values after one year. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance-function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension, to biasing or importance sampling calculations, of a theory on the application of Monte Carlo to the calculation of functional dependences introduced by Frolov and Chentsov; it is a generalization which avoids the nonconvergence to the optimal values that occurs in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de]
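
    The idea of mapping out the variance as a function of an importance-function parameter can be illustrated on a one-dimensional toy integral; the integrand, the exponential importance family and the parameter grid below are invented for illustration and are not the estimators analyzed in the report.

      # Hedged sketch: importance sampling of I = integral_0^inf exp(-x^2) dx with
      # an exponential importance density p(x; theta) = theta*exp(-theta*x),
      # scanning theta and keeping the estimate with the smallest empirical variance.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 20000
      f = lambda x: np.exp(-x**2)                   # true value: sqrt(pi)/2 = 0.8862

      results = []
      for theta in (0.5, 1.0, 1.5, 2.0, 3.0):
          x = rng.exponential(1.0 / theta, n)        # samples from p(x; theta)
          w = f(x) / (theta * np.exp(-theta * x))    # importance weights f/p
          results.append((w.var(ddof=1) / n, w.mean(), theta))

      var_min, estimate, best_theta = min(results)   # minimum-variance parameter
      print(f"theta={best_theta}: estimate={estimate:.4f}, std. err.={var_min**0.5:.2e}")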

  18. Growth-rate-dependent dynamics of a bacterial genetic oscillator

    Science.gov (United States)

    Osella, Matteo; Lagomarsino, Marco Cosentino

    2013-01-01

    Gene networks exhibiting oscillatory dynamics are widespread in biology. The minimal regulatory designs giving rise to oscillations have been implemented synthetically and studied by mathematical modeling. However, most of the available analyses generally neglect the coupling of regulatory circuits with the cellular “chassis” in which the circuits are embedded. For example, the intracellular macromolecular composition of fast-growing bacteria changes with growth rate. As a consequence, important parameters of gene expression, such as ribosome concentration or cell volume, are growth-rate dependent, ultimately coupling the dynamics of genetic circuits with cell physiology. This work addresses the effects of growth rate on the dynamics of a paradigmatic example of genetic oscillator, the repressilator. Making use of empirical growth-rate dependencies of parameters in bacteria, we show that the repressilator dynamics can switch between oscillations and convergence to a fixed point depending on the cellular state of growth, and thus on the nutrients it is fed. The physical support of the circuit (type of plasmid or gene positions on the chromosome) also plays an important role in determining the oscillation stability and the growth-rate dependence of period and amplitude. This analysis has potential application in the field of synthetic biology, and suggests that the coupling between endogenous genetic oscillators and cell physiology can have substantial consequences for their functionality.
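
    The repressilator mentioned above is commonly written as six coupled ODEs for three mRNAs and three proteins. The sketch below uses the standard dimensionless form and simply rescales protein turnover by an assumed growth factor; this is only a caricature of the growth-rate coupling analyzed in the paper, and all parameter values are illustrative.

      # Hedged sketch: dimensionless repressilator ODEs, with protein turnover
      # rescaled by an assumed growth factor. Parameters are illustrative.
      from scipy.integrate import solve_ivp

      alpha, alpha0, beta, hill = 200.0, 0.2, 5.0, 2.0
      growth_factor = 1.0              # try 0.5 or 2.0 to mimic slow or fast growth

      def repressilator(t, y):
          m, p = y[:3], y[3:]                      # mRNAs and proteins
          rep = [p[2], p[0], p[1]]                 # protein j represses gene i
          dm = [-m[i] + alpha / (1.0 + rep[i]**hill) + alpha0 for i in range(3)]
          dp = [-beta * growth_factor * (p[i] - m[i]) for i in range(3)]
          return dm + dp

      sol = solve_ivp(repressilator, (0.0, 200.0), [1.0, 2.0, 3.0, 0.0, 0.0, 0.0],
                      max_step=0.1)
      p1 = sol.y[3]                                # protein 1 trajectory
      print("protein 1 range over the run:", round(p1.min(), 2), "to", round(p1.max(), 2))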

  19. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation.

    Science.gov (United States)

    Pearce, Marcus T

    2018-05-11

    Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception (expectation, emotion, memory, similarity, segmentation, and meter) can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
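
    The underlying mechanism, learning statistical regularities from a corpus and then quantifying how expected each new event is, can be shown with a toy first-order Markov model over pitches. IDyOM itself is a variable-order, multiple-viewpoint model, so the sketch below (with an invented corpus) is only a schematic of the general idea, not of IDyOM.

      # Hedged sketch: a toy first-order Markov model of pitch sequences reporting
      # the information content -log2 P(next | previous) of each event.
      import math
      from collections import defaultdict

      corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
                [60, 64, 67, 64, 60],
                [67, 65, 64, 62, 60]]                 # invented MIDI pitch sequences

      counts = defaultdict(lambda: defaultdict(int))
      for melody in corpus:                           # statistical learning
          for prev, nxt in zip(melody, melody[1:]):
              counts[prev][nxt] += 1

      def info_content(prev, nxt, alpha=1.0, alphabet=range(55, 73)):
          # add-alpha smoothing keeps unseen continuations at a finite probability
          total = sum(counts[prev].values()) + alpha * len(alphabet)
          return -math.log2((counts[prev][nxt] + alpha) / total)

      test = [60, 62, 64, 66, 67]                     # probabilistic prediction
      for prev, nxt in zip(test, test[1:]):
          print(f"{prev}->{nxt}: {info_content(prev, nxt):.2f} bits")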

  20. An analog ensemble for short-term probabilistic solar power forecast

    International Nuclear Information System (INIS)

    Alessandrini, S.; Delle Monache, L.; Sperati, S.; Cervone, G.

    2015-01-01

    Highlights:
    • A novel method for solar power probabilistic forecasting is proposed.
    • The forecast accuracy does not depend on the nominal power.
    • The impact of climatology on forecast accuracy is evaluated.

    Abstract: The energy produced by photovoltaic farms has a variable nature depending on astronomical and meteorological factors. The former are the solar elevation and the solar azimuth, which are easily predictable without any uncertainty. The amount of liquid water met by the solar radiation within the troposphere is the main meteorological factor influencing solar power production, as a fraction of the short-wave solar radiation is reflected by the water particles and cannot reach the earth's surface. The total cloud cover is a meteorological variable often used to indicate the presence of liquid water in the troposphere; it has limited predictability, which is also reflected in the global horizontal irradiance and, as a consequence, in solar photovoltaic power prediction. This lack of predictability makes the integration of solar energy into the grid challenging. A cost-effective utilization of solar energy over a grid strongly depends on the accuracy and reliability of the power forecasts available to the Transmission System Operators (TSOs). Furthermore, several countries have in place legislation requiring solar power producers to pay penalties proportional to the errors of day-ahead energy forecasts, which makes the accuracy of such predictions a determining factor for producers to reduce their economic losses. Probabilistic predictions can provide accurate deterministic forecasts along with a quantification of their uncertainty, as well as a reliable estimate of the probability of exceeding a certain production threshold. In this paper we propose the application of an analog ensemble (AnEn) method to generate probabilistic solar power forecasts (SPF). The AnEn is based on a historical set of deterministic numerical weather prediction (NWP) model
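
    The analog ensemble idea can be sketched directly: for a new deterministic NWP forecast, find the k most similar past forecasts according to a weighted distance over a few predictor variables, and use the observations that verified those past forecasts as the probabilistic ensemble. The predictors, weights and synthetic data below are invented placeholders, not the configuration used in the paper.

      # Hedged sketch: analog ensemble (AnEn) prediction of solar power. Historical
      # forecasts, observations, predictors and weights are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(2)
      n_hist = 5000
      # predictor columns: cloud cover (0-1), global horizontal irradiance (W/m^2), temperature (C)
      hist_fcst = np.column_stack([rng.random(n_hist),
                                   rng.uniform(0, 1000, n_hist),
                                   rng.uniform(-5, 35, n_hist)])
      hist_obs_power = (1.0 - 0.8 * hist_fcst[:, 0]) * hist_fcst[:, 1] / 1000.0  # toy "truth" (MW)

      def analog_ensemble(new_fcst, k=20, weights=(1.0, 1.0, 0.3)):
          scale = hist_fcst.std(axis=0)                       # normalize each predictor
          d = np.sqrt(((((hist_fcst - new_fcst) / scale) ** 2) * weights).sum(axis=1))
          analogs = np.argsort(d)[:k]                         # k closest past forecasts
          return hist_obs_power[analogs]                      # their verifying observations

      ens = analog_ensemble(np.array([0.3, 650.0, 22.0]))
      print("ensemble mean power:", ens.mean())
      print("P10 / P90          :", np.percentile(ens, [10, 90]))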