DEFF Research Database (Denmark)
Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo
2015-01-01
is the dependency of the parameter estimates from the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...
DEFF Research Database (Denmark)
Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik
2010-01-01
The construction of detailed geological models for heterogeneous settings such as clay till is important to describe transport processes, particularly with regard to potential contamination pathways. In low-permeability clay matrices transport is controlled by diffusion, but fractures and sand-lenses facilitate local advective flow. In glacial settings these geological features occur at diverse extent, geometry, degree of deformation, and spatial distribution. The high level of heterogeneity requires extensive data collection and detailed geological mapping. However, when characterising … of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities…
Xie, Xin-Ping; Xie, Yu-Feng; Wang, Hong-Qiang
2017-08-23
Large-scale accumulation of omics data poses a pressing challenge of integrative analysis of multiple data sets in bioinformatics. An open question in such integrative analysis is how to pinpoint consistent but subtle gene activity patterns across studies. Study heterogeneity needs to be addressed carefully for this goal. This paper proposes a regulation probability model-based meta-analysis, jGRP, for identifying differentially expressed genes (DEGs). The method integrates multiple transcriptomics data sets in a gene regulatory space instead of a gene expression space, which makes it easy to capture and manage data heterogeneity across studies from different laboratories or platforms. Specifically, we transform gene expression profiles into a unified gene regulation profile across studies by mathematically defining two gene regulation events between two conditions and estimating their occurrence probabilities in a sample. Finally, a novel differential expression statistic is established based on the gene regulation profiles, realizing accurate and flexible identification of DEGs in gene regulation space. We evaluated the proposed method on simulation data and real-world cancer datasets and showed the effectiveness and efficiency of jGRP in identifying DEGs in the context of meta-analysis. Data heterogeneity largely influences the performance of meta-analysis for DEG identification. Different existing meta-analysis methods were shown to exhibit very different degrees of sensitivity to study heterogeneity. The proposed method, jGRP, can serve as a standalone tool due to its unified framework and controllable way of dealing with study heterogeneity.
Model uncertainty and probability
International Nuclear Information System (INIS)
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses represent different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example.
Model uncertainty: Probabilities for models?
International Nuclear Information System (INIS)
Winkler, R.L.
1994-01-01
Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work underway to address these questions looks very promising.
Probability and stochastic modeling
Rotar, Vladimir I
2012-01-01
Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation; Generating Functions; Branching Processes; Random Walk Revisited; More on Random Walk; Markov Chains; Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F…
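The Markov-chain topics listed above (stationary distributions, ergodicity) can be illustrated with a short sketch. The transition matrix below is an arbitrary example, not taken from the book:

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. solve pi P = pi with the components of pi summing to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Ergodicity in practice: every row of P^n converges to pi.
Pn = np.linalg.matrix_power(P, 50)
```

For an ergodic chain the first-step analysis and passage-time computations in the book build on exactly this matrix algebra.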
Probability matrix decomposition models
Maris, E.; DeBoeck, P.; Mechelen, I. van
1996-01-01
In this paper, we consider a class of models for two-way matrices with binary entries of 0 and 1. First, we consider Boolean matrix decomposition, conceptualize it as a latent response model (LRM) and, by making use of this conceptualization, generalize it to a larger class of matrix decomposition
Comparative analysis through probability distributions of a data set
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. A number of statistical methods are available to help select the best-fitting model. Some graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and to compare that distance to threshold values. Calculating the goodness-of-fit statistics also makes it possible to rank the fitted distributions according to how well they fit the data, which is very helpful for comparing fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large data set is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
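The "distance between the data and the tested distribution" idea behind the Kolmogorov-Smirnov test can be sketched with the standard library alone; the data set and the fitted normal model below are illustrative:

```python
import math
import random

random.seed(1)
data = sorted(random.gauss(10.0, 2.0) for _ in range(500))

# Fit a normal distribution by matching the sample mean and standard deviation.
n = len(data)
mu = sum(data) / n
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Kolmogorov-Smirnov statistic: the maximum distance between the empirical
# CDF (a step function) and the fitted model CDF, checked on both sides of
# each step.
D = max(max(abs((i + 1) / n - norm_cdf(x, mu, sigma)),
            abs(i / n - norm_cdf(x, mu, sigma)))
        for i, x in enumerate(data))
```

Comparing D to a threshold (for example, about 1.36/sqrt(n) at the 5% level for fully specified models) is exactly the "distance versus threshold" comparison described above.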
Sets, Probability and Statistics: The Mathematics of Life Insurance.
Clifford, Paul C.; And Others
The practical use of such concepts as sets, probability and statistics is considered by many to be vital and necessary to everyday life. This student manual is intended to familiarize students with these concepts and to provide practice using real-life examples. It also attempts to illustrate how the insurance industry uses such mathematic…
Facilitating normative judgments of conditional probability: frequency or nested sets?
Yamagishi, Kimihiko
2003-01-01
Recent probability judgment research contrasts two opposing views. Some theorists have emphasized the role of frequency representations in facilitating probabilistic correctness; opponents have noted that visualizing the probabilistic structure of the task sufficiently facilitates normative reasoning. In the current experiment, the following conditional probability task, an isomorph of the "Problem of Three Prisoners" was tested. "A factory manufactures artificial gemstones. Each gemstone has a 1/3 chance of being blurred, a 1/3 chance of being cracked, and a 1/3 chance of being clear. An inspection machine removes all cracked gemstones, and retains all clear gemstones. However, the machine removes 1/2 of the blurred gemstones. What is the chance that a gemstone is blurred after the inspection?" A 2 x 2 design was administered. The first variable was the use of frequency instruction. The second manipulation was the use of a roulette-wheel diagram that illustrated a "nested-sets" relationship between the prior and the posterior probabilities. Results from two experiments showed that frequency alone had modest effects, while the nested-sets instruction achieved a superior facilitation of normative reasoning. The third experiment compared the roulette-wheel diagram to tree diagrams that also showed the nested-sets relationship. The roulette-wheel diagram outperformed the tree diagrams in facilitation of probabilistic reasoning. Implications for understanding the nature of intuitive probability judgments are discussed.
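The normative answer to the gemstone task follows from Bayes' rule; an exact computation shows it is 1/3, equal to the prior, which is what makes the problem counterintuitive:

```python
from fractions import Fraction

# Prior probabilities of the three gemstone states, from the task statement.
prior = {"blurred": Fraction(1, 3), "cracked": Fraction(1, 3), "clear": Fraction(1, 3)}
# Probability that the inspection machine RETAINS each kind of gemstone.
retain = {"blurred": Fraction(1, 2), "cracked": Fraction(0), "clear": Fraction(1)}

# Bayes' rule: P(blurred | retained) = P(retained | blurred) P(blurred) / P(retained).
p_retained = sum(prior[s] * retain[s] for s in prior)          # 1/2
posterior_blurred = prior["blurred"] * retain["blurred"] / p_retained
```

The nested-sets diagrams in the experiments make this structure visible: the retained set (1/2 of all gemstones) contains the surviving blurred gemstones (1/6 of all gemstones) as a subset, and 1/6 divided by 1/2 is 1/3.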
Stochastic Modeling of Climatic Probabilities.
1979-11-01
students who contributed in a major way to the success of the project are Sarah Autrey, Jeff Emerson, Karl Grammel, Tom Licknor and Debbie Waite. … The sophistication and cost of weapons systems and the recognition that the environment degrades or offers opportunities has led to the requirement for … First, make a histogram of the data, and then "smooth" the histogram to obtain a frequency distribution (probability density function). The…
The transition probabilities of the reciprocity model
Snijders, T.A.B.
1999-01-01
The reciprocity model is a continuous-time Markov chain model used for modeling longitudinal network data. A new explicit expression is derived for its transition probability matrix. This expression can be checked relatively easily. Some properties of the transition probabilities are given, as well…
Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim
2013-01-01
The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…
Comparing linear probability model coefficients across groups
DEFF Research Database (Denmark)
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem of comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and the distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
Correlations and Non-Linear Probability Models
DEFF Research Database (Denmark)
Breen, Richard; Holm, Anders; Karlson, Kristian Bernt
2014-01-01
Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between … Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
Modelling the probability of building fires
Directory of Open Access Journals (Sweden)
Vojtěch Barták
2014-12-01
Full Text Available Systematic spatial risk analysis plays a crucial role in preventing emergencies. In the Czech Republic, risk mapping is currently based on the risk accumulation principle, area vulnerability, and preparedness levels of Integrated Rescue System components. Expert estimates are used to determine risk levels for individual hazard types, while statistical modelling based on data from actual incidents and their possible causes is not used. Our model study, conducted in cooperation with the Fire Rescue Service of the Czech Republic within the Liberec and Hradec Králové regions, presents an analytical procedure leading to the creation of building fire probability maps based on recent incidents in the studied areas and on building parameters. In order to estimate the probability of building fires, a prediction model based on logistic regression was used. The probability of fire calculated by means of model parameters and attributes of specific buildings can subsequently be visualized in probability maps.
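A minimal sketch of the prediction step: once a logistic regression has been fitted, the fire probability of a specific building comes from its attributes through the sigmoid link. The coefficients and building attributes below are purely illustrative, not the study's estimates:

```python
import math

# Hypothetical fitted logistic-regression coefficients for building-fire
# probability (intercept, floor area in 100 m^2, building age in decades,
# solid-fuel heating indicator). All values are illustrative.
beta = {"intercept": -4.0, "area": 0.3, "age": 0.15, "solid_fuel": 0.8}

def fire_probability(area, age, solid_fuel):
    # Linear predictor followed by the logistic (sigmoid) link.
    z = (beta["intercept"] + beta["area"] * area
         + beta["age"] * age + beta["solid_fuel"] * solid_fuel)
    return 1.0 / (1.0 + math.exp(-z))

# Per-building probabilities such as these are what get shaded in a
# building fire probability map.
p_small_new = fire_probability(area=1.0, age=1.0, solid_fuel=0)
p_large_old = fire_probability(area=8.0, age=6.0, solid_fuel=1)
```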
Sampling, Probability Models and Statistical Reasoning: Statistical Inference
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...
Uncertainty the soul of modeling, probability & statistics
Briggs, William
2016-01-01
This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...
Comparing coefficients of nested nonlinear probability models
DEFF Research Database (Denmark)
Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders
2011-01-01
In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB-method is a general decomposition…
Applied probability models with optimization applications
Ross, Sheldon M
1992-01-01
Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.
Toward General Analysis of Recursive Probability Models
Pless, Daniel; Luger, George
2013-01-01
There is increasing interest within the research community in the design and use of recursive probability models. Although there still remains concern about computational complexity costs and the fact that computing exact solutions can be intractable for many nonrecursive models and impossible in the general case for recursive problems, several research groups are actively developing computational techniques for recursive stochastic languages. We have developed an extension to the traditional...
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.
The paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS…
The Probability Model of Expectation Disconfirmation Process
Directory of Open Access Journals (Sweden)
Hui-Hsin HUANG
2015-06-01
Full Text Available This paper proposes a probability model to explore the dynamic process of customer satisfaction. Based on expectation disconfirmation theory, satisfaction is constructed from the customer's expectation before purchase and the perceived performance after purchase. An experimental method is designed to measure expectation disconfirmation effects, and the collected data are used to estimate overall satisfaction and calibrate the model. The results show a good fit between the model and the real data. This model has applications in business marketing for managing relationship satisfaction.
A quantum probability model of causal reasoning
Directory of Open Access Journals (Sweden)
Jennifer S Trueblood
2012-05-01
Full Text Available People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects, thus proving to be a viable new candidate for modeling human judgment.
Statistical physics of pairwise probability models
DEFF Research Database (Denmark)
Roudi, Yasser; Aurell, Erik; Hertz, John
2009-01-01
(no Danish abstract available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring…
Archaeological predictive model set.
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t…
Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
2005-01-01
cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling
Economic communication model set
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research investigating economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Each model is based on the same general concept but has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed from statistical data. During the simulation experiments, the communication process was observed dynamically and system macroparameters were estimated. This research confirmed that combining an agent-based model with a mathematical model can produce a synergetic effect.
Statistical physics of pairwise probability models
Directory of Open Access Journals (Sweden)
Yasser Roudi
2009-11-01
Full Text Available Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
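For a system small enough to enumerate, the pairwise model and the statistics that suffice to fit it (means and pairwise correlations) can be computed exactly. The fields h and couplings J below are arbitrary illustrative values:

```python
import itertools
import numpy as np

# A tiny pairwise ("Ising-like") model over 3 binary units s_i in {-1, +1}:
# P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).
h = np.array([0.2, -0.1, 0.0])
J = {(0, 1): 0.5, (0, 2): -0.3, (1, 2): 0.1}

states = list(itertools.product([-1, 1], repeat=3))
log_w = [sum(h[i] * s[i] for i in range(3))
         + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
         for s in states]
weights = np.exp(log_w)
probs = weights / weights.sum()     # normalised by the partition function

# The sufficient statistics for fitting a pairwise model: unit means and
# pairwise correlations.
means = sum(p * np.array(s) for p, s in zip(probs, states))
corr_01 = sum(p * s[0] * s[1] for p, s in zip(probs, states))
```

For real neural data the state space is far too large to enumerate, which is why the approximate inference methods studied in the paper (and their dependence on the time bin) matter.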
Compositional models for credal sets
Czech Academy of Sciences Publication Activity Database
Vejnarová, Jiřina
2017-01-01
Roč. 90, č. 1 (2017), s. 359-373 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : Imprecise probabilities * Credal sets * Multidimensional models * Conditional independence Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vejnarova-0483288.pdf
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
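The mechanistic Poisson TCP model mentioned above can be sketched in a few lines: TCP is the probability that zero clonogenic cells survive the delivered dose. The parameter values below are illustrative, not fitted to the NSCLC data:

```python
import math

def poisson_tcp(dose_gy, n0=1e7, alpha=0.3):
    """Mechanistic Poisson TCP: the probability that no clonogen survives.

    n0 (initial number of clonogenic cells) and alpha (radiosensitivity
    per Gy in a simple exponential cell-survival model) are illustrative.
    """
    surviving = n0 * math.exp(-alpha * dose_gy)  # expected number of survivors
    return math.exp(-surviving)                  # Poisson P(zero survivors)

# TCP rises sigmoidally with dose: near zero at low dose, near one at high.
low = poisson_tcp(40.0)
high = poisson_tcp(70.0)
```

The datamining methods in the paper compete with exactly this kind of mechanistic curve by learning the dose-response relationship from patient data instead.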
King, James M.; And Others
The materials described here represent the conversion of a highly popular student workbook "Sets, Probability and Statistics: The Mathematics of Life Insurance" into a computer program. The program is designed to familiarize students with the concepts of sets, probability, and statistics, and to provide practice using real life examples. It also…
Geometric modeling in probability and statistics
Calin, Ovidiu
2014-01-01
This book covers topics of Information Geometry, a field which deals with the differential geometric study of the manifold of probability density functions. This is a field that is increasingly attracting the interest of researchers from many different areas of science, including mathematics, statistics, geometry, computer science, signal processing, physics and neuroscience. It is the authors’ hope that the present book will be a valuable reference for researchers and graduate students in one of the aforementioned fields. This textbook is a unified presentation of differential geometry and probability theory, and constitutes a text for a course directed at graduate or advanced undergraduate students interested in applications of differential geometry in probability and statistics. The book contains over 100 proposed exercises meant to help students deepen their understanding, and it is accompanied by software that is able to provide numerical computations of several information geometric objects. The reader…
Ruin Probabilities in the Mixed Claim Frequency Risk Models
Directory of Open Access Journals (Sweden)
Zhao Xiaoqin
2014-01-01
Full Text Available We consider two mixed claim frequency risk models. Some important probabilistic properties are obtained by probability-theory methods. Some important results about ruin probabilities are obtained by the martingale approach.
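A martingale-free way to get a feel for ruin probabilities is direct Monte Carlo simulation of a compound Poisson risk model (a simpler setting than the paper's mixed claim frequency models; all parameters are illustrative):

```python
import random

random.seed(42)

def ruin_probability(u0, premium_rate, claim_rate, mean_claim,
                     horizon=100.0, n_paths=5000):
    # Finite-horizon ruin probability: premiums accrue continuously at
    # rate premium_rate, claims arrive as a Poisson process with
    # exponentially distributed sizes; ruin means the surplus goes below 0.
    ruined = 0
    for _ in range(n_paths):
        surplus, t = u0, 0.0
        while True:
            w = random.expovariate(claim_rate)       # time to next claim
            t += w
            if t > horizon:
                break
            surplus += premium_rate * w              # premiums collected
            surplus -= random.expovariate(1.0 / mean_claim)  # claim paid
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# Illustrative parameters with a 20% safety loading (not from the paper).
psi = ruin_probability(u0=10.0, premium_rate=1.2, claim_rate=1.0, mean_claim=1.0)
```

For exponential claims the classical infinite-horizon result gives roughly 0.16 at these parameters, so the simulated finite-horizon estimate should land nearby; martingale arguments deliver such results without simulation.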
Directory of Open Access Journals (Sweden)
Matthew Nahorniak
Full Text Available In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model-based tools may require, to ensure unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method-specific tools available to properly account for sampling design, too often in the analysis of ecological data the sample design is ignored and the consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we…
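The IPB idea can be sketched directly: resample the data with weights proportional to the inverse inclusion probabilities, then apply model-based tools to the resample. The strata, sizes and inclusion probabilities below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stratified sample with unequal inclusion probabilities: stratum A was
# heavily oversampled relative to stratum B (illustrative values).
values = np.concatenate([rng.normal(10, 1, 400),   # stratum A sample
                         rng.normal(20, 1, 100)])  # stratum B sample
incl_prob = np.concatenate([np.full(400, 0.8),     # pi_i for stratum A
                            np.full(100, 0.1)])    # pi_i for stratum B

# Inverse probability bootstrap: resample with weights proportional to
# 1/pi_i, which yields an (approximately) equal-probability resample.
w = 1.0 / incl_prob
w /= w.sum()
resample = rng.choice(values, size=5000, replace=True, p=w)

naive_mean = values.mean()   # biased toward the oversampled stratum
ipb_mean = resample.mean()   # close to the population-weighted mean
```

Here the implied population mean is about 16.7 (stratum A represents 500 units, stratum B 1000), so the naive sample mean of about 12 is badly biased while the IPB resample recovers the right target.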
Directory of Open Access Journals (Sweden)
Wenchao Cui
2013-01-01
Full Text Available This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes’ rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
Directory of Open Access Journals (Sweden)
T. I. Aliev
2013-03-01
Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived using multiexponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
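A hypoexponential variable is a sum of independent exponential phases with distinct rates; its coefficient of variation always lies in (0; 1), which is the property exploited above. A sketch with illustrative rates:

```python
import math
import random

random.seed(7)

# Illustrative phase rates of a two-phase hypoexponential distribution.
rates = [1.0, 2.5]

# First two moments follow from the moments of the exponential phases.
mean_theory = sum(1.0 / r for r in rates)
var_theory = sum(1.0 / r ** 2 for r in rates)
cv_theory = math.sqrt(var_theory) / mean_theory   # always in (0, 1)

# Generating random variates: draw one exponential per phase and sum them.
samples = [sum(random.expovariate(r) for r in rates) for _ in range(20000)]
mean_emp = sum(samples) / len(samples)
cv_emp = math.sqrt(sum((x - mean_emp) ** 2 for x in samples)
                   / len(samples)) / mean_emp
```

To hit a target mean and coefficient of variation, the two rates are chosen so that the theoretical moments above match the target, which is the moment-matching approximation the paper derives.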
Calculating the Probability of Returning a Loan with Binary Probability Models
Directory of Open Access Journals (Sweden)
Julian Vasilev
2014-12-01
Full Text Available The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability. In this article, some influencing factors are identified using statistical and econometric models. The main approach is concerned with applying probit and logit models in loan management institutions. A new aspect of credit risk analysis is given. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is shown that the month of signing a contract, the year of signing a contract, and the gender and age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the given sum, decreases with the proximity of the customer, increases for people born at the beginning of the year and decreases for people born at the end of the year.
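The logit form underlying such a model is P(repay) = 1 / (1 + exp(−z)), with z a linear combination of the predictors. The sketch below uses purely hypothetical coefficient values chosen only to match the signs of the effects described above; they are not the paper's estimates.

```python
import math

def repay_probability(loan_sum, remoteness_km, birth_month, coefs):
    """Binary logit sketch: P(repay) = 1 / (1 + exp(-z)), where z is a
    linear predictor. Coefficients are illustrative assumptions."""
    b0, b_sum, b_remote, b_month = coefs
    z = b0 + b_sum * loan_sum + b_remote * remoteness_km + b_month * birth_month
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: probability rises with sum and remoteness,
# falls for later birth months (signs taken from the abstract's findings).
coefs = (-1.0, 0.0002, 0.001, -0.05)
p_small = repay_probability(loan_sum=1000, remoteness_km=50, birth_month=6, coefs=coefs)
p_large = repay_probability(loan_sum=20000, remoteness_km=50, birth_month=6, coefs=coefs)
```

With a positive coefficient on the loan sum, the larger contract yields the higher predicted repayment probability.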
EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II
Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012
2013-01-01
This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which theoretical advancements may echo in different domains. In summary, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms, and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 contribute to this goal.
Stochastic population dynamic models as probability networks
M.E. Borsuk and D.C. Lee
2009-01-01
The dynamics of a population and its response to environmental change depend on the balance of birth, death and age-at-maturity, and there have been many attempts to mathematically model populations based on these characteristics. Historically, most of these models were deterministic, meaning that the results were strictly determined by the equations of the model and...
Sampling, Probability Models and Statistical Reasoning -RE ...
Indian Academy of Sciences (India)
few units to the specified standards and criteria and based on these few .... Let us define the function L(θ) by L(θ) = P(X = k | θ), where k is the observed number of defectives (which is known now). Note that L is a nonnegative function defined on the set of ... Note that the smaller the standard deviation, the better the estimate.
PROBABILITY DISTRIBUTION OVER THE SET OF CLASSES IN ARABIC DIALECT CLASSIFICATION TASK
Directory of Open Access Journals (Sweden)
O. V. Durandin
2017-01-01
Full Text Available Subject of Research. We propose an approach for solving a machine learning classification problem that uses information about the probability distribution on the training data class label set. The algorithm is illustrated on a complex natural language processing task: classification of Arabic dialects. Method. Each object in the training set is associated with a probability distribution over the class label set instead of a particular class label. The proposed approach solves the classification problem taking into account this probability distribution over the class label set to improve the quality of the built classifier. Main Results. The suggested approach is illustrated on the example of automatic Arabic dialect classification. Mined from the Twitter social network, the analyzed data contain word-marks and belong to the following six Arabic dialects: Saudi, Levantine, Algerian, Egyptian, Iraqi, Jordanian, and to Modern Standard Arabic (MSA). The results demonstrate an increase in the quality of the built classifier achieved by taking into account probability distributions over the set of classes. Experiments show that even relatively naive accounting of the probability distributions improves the precision of the classifier from 44% to 67%. Practical Relevance. Our approach and the corresponding algorithm could be effectively used in situations where a manual annotation process performed by experts requires significant financial and time resources, but it is possible to create a system of heuristic rules. The implementation of the proposed algorithm enables a significant decrease in data preparation expenses without substantial losses in classification precision.
Development and evaluation of probability density functions for a set of human exposure factors
International Nuclear Information System (INIS)
Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.
1999-01-01
The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999 will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors
Modeling Uncertain Context Information via Event Probabilities
van Bunningen, A.H.; Feng, L.; Apers, Peter M.G.; de Keijzer, Ander; de Keijzer, A.; van Keulen, M.; van Keulen, Maurice
2006-01-01
To be able to support context-awareness in an Ambient Intelligent (AmI) environment, one needs a way to model context. Previous research shows that a good way to model context is using Description Logics (DL). Since context data is often coming from sensors and therefore exhibits uncertain
Modelling spruce bark beetle infestation probability
Paulius Zolubas; Jose Negron; A. Steven Munson
2009-01-01
Spruce bark beetle (Ips typographus L.) risk model, based on pure Norway spruce (Picea abies Karst.) stand characteristics in experimental and control plots was developed using classification and regression tree statistical technique under endemic pest population density. The most significant variable in spruce bark beetle...
Some simple applications of probability models to birth intervals
International Nuclear Information System (INIS)
Shrestha, G.
1987-07-01
An attempt has been made in this paper to apply some simple probability models to birth intervals under the assumption of constant fecundability and varying fecundability among women. The parameters of the probability models are estimated by using the method of moments and the method of maximum likelihood. (author). 9 refs, 2 tabs
A Probability-Based Hybrid User Model for Recommendation System
Directory of Open Access Journals (Sweden)
Jia Hao
2016-01-01
Full Text Available With the rapid development of information communication technology, the available information or knowledge is increasing exponentially, causing the well-known information overload phenomenon. This problem is more serious in product design corporations because over half of the valuable design time is consumed in knowledge acquisition, which greatly extends the design cycle and weakens competitiveness. Therefore, recommender systems become very important in the domain of product design. This research presents a probability-based hybrid user model, which is a combination of collaborative filtering and content-based filtering. This hybrid model utilizes user ratings and item topics or classes, which are available in the domain of product design, to predict the knowledge requirement. The comprehensive analysis of the experimental results shows that the proposed method gains better performance in most of the parameter settings. This work contributes a probability-based method to the community for implementing recommender systems when only user ratings and item topics are available.
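One simple way to combine collaborative-filtering and content-based probability estimates is a convex mixture; the sketch below assumes this combination rule and fixed illustrative weights, which may differ from the paper's actual hybrid scheme.

```python
def hybrid_score(p_cf, p_cb, alpha=0.7):
    """Convex combination of a collaborative-filtering probability estimate
    (p_cf) and a content-based estimate (p_cb). The mixing weight alpha is
    an assumption for illustration, not the paper's learned weighting."""
    return alpha * p_cf + (1.0 - alpha) * p_cb

# Rank candidate items for one user given two per-item probability estimates
# (p_cf, p_cb); item names and values are hypothetical.
items = {"itemA": (0.9, 0.2), "itemB": (0.4, 0.8), "itemC": (0.6, 0.6)}
ranked = sorted(items, key=lambda i: hybrid_score(*items[i]), reverse=True)
```

With alpha = 0.7 the collaborative signal dominates, so itemA (strong ratings evidence) outranks itemC and itemB.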
Parametric modeling of probability of bank loan default in Kenya ...
African Journals Online (AJOL)
This makes the study on probability of a customer defaulting very useful while analyzing the credit risk policies. In this paper, we use a raw data set that contains demographic information about the borrowers. The data sets have been used to identify which risk factors associated with the borrowers contribute towards default.
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision making economists and engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision making tasks is the incomplete data sets required by the chosen decision making algorithm. This paper presents a relatively simple algorithm for how some missing input information items (III) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (such as fuzzy probabilities), are usually available, meaning that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles the topology-related heuristics and the additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.
Probability model for analyzing fire management alternatives: theory and structure
Frederick W. Bratten
1982-01-01
A theoretical probability model has been developed for analyzing program alternatives in fire management. It includes submodels or modules for predicting probabilities of fire behavior, fire occurrence, fire suppression, effects of fire on land resources, and financial effects of fire. Generalized "fire management situations" are used to represent actual fire...
Convergence of Transition Probability Matrix in CLVMarkov Models
Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.
2018-04-01
A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of an MCM is its behavior far into the future, which is derived from a property of the n-step transition probability matrix: the convergence of the n-step transition matrix as n moves to infinity. Mathematically, establishing this convergence means finding the limit of the transition matrix raised to the power n as n moves to infinity. The convergence form of the transition probability matrix is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states in the future. The method usually used to find the convergence of a transition probability matrix is the limiting-distribution process. In this paper, the convergence of the transition probability matrix is obtained using a simple concept from linear algebra, namely diagonalization of the matrix. This method has a higher level of complexity because it requires performing the diagonalization, but it has the advantage of yielding a general form for the power n of the transition probability matrix, which is useful for examining the transition matrix before stationarity. Example cases are taken from a CLV model using an MCM, called the CLV-Markov model. Several transition probability matrices from this model are examined to find their convergence forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained by the commonly used limiting-distribution method.
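The diagonalization route described above can be sketched for a toy 2-state chain (the matrix below is illustrative, not one of the CLV-Markov matrices): writing P = V D V⁻¹ gives Pⁿ = V Dⁿ V⁻¹, and as n grows every eigenvalue with |λ| < 1 vanishes, leaving only the λ = 1 component, whose rows are the stationary distribution.

```python
import numpy as np

# Toy transition matrix (rows sum to 1); eigenvalues are 1.0 and 0.4.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Diagonalize: P = V D V^{-1}, hence P^n = V D^n V^{-1}.
eigvals, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def P_power(n):
    """n-step transition matrix in closed form via diagonalization."""
    return (V @ np.diag(eigvals ** n) @ Vinv).real

P_inf = P_power(200)   # numerically at stationarity: 0.4**200 is negligible
pi = P_inf[0]          # every row has converged to the stationary vector
```

For this P the stationary vector is (5/6, 1/6), and `P_power(n)` agrees with repeated multiplication (`np.linalg.matrix_power`) for any n, which is the point of the closed form: the pre-stationary matrices come for free.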
Review of Literature for Model Assisted Probability of Detection
Energy Technology Data Exchange (ETDEWEB)
Meyer, Ryan M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Crawford, Susan L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lareau, John P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Anderson, Michael T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2014-09-30
This is a draft technical letter report for NRC client documenting a literature review of model assisted probability of detection (MAPOD) for potential application to nuclear power plant components for improvement of field NDE performance estimations.
Deriving the probability of a linear opinion pooling method being superior to a set of alternatives
International Nuclear Information System (INIS)
Bolger, Donnacha; Houlding, Brett
2017-01-01
Linear opinion pools are a common method for combining a set of distinct opinions into a single succinct opinion, often to be used in a decision making task. In this paper we consider a method, termed the Plug-in approach, for determining the weights to be assigned in this linear pool, in a manner that can be deemed as rational in some sense, while incorporating multiple forms of learning over time into its process. The environment that we consider is one in which every source in the pool is herself a decision maker (DM), in contrast to the more common setting in which expert judgments are amalgamated for use by a single DM. We discuss a simulation study that was conducted to show the merits of our technique, and demonstrate how theoretical probabilistic arguments can be used to exactly quantify the probability of this technique being superior (in terms of a probability density metric) to a set of alternatives. Illustrations are given of simulated proportions converging to these true probabilities in a range of commonly used distributional cases. - Highlights: • A novel context for combination of expert opinion is provided. • A dynamic reliability assessment method is stated, justified by properties and a data study. • The theoretical grounding underlying the data-driven justification is explored. • We conclude with areas for expansion and further relevant research.
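The linear opinion pool itself is just a weighted average of the sources' probability assessments. The sketch below uses fixed illustrative weights; the paper's Plug-in approach, which learns the weights over time, is not reproduced here.

```python
def linear_pool(point_estimates, weights):
    """Linear opinion pool: combine the sources' probability estimates for
    an event into one pooled estimate via a weighted average. Weights must
    sum to 1; here they are fixed assumptions, not Plug-in weights."""
    assert abs(sum(weights) - 1.0) < 1e-12, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, point_estimates))

# Three decision makers report probabilities for the same event.
opinions = [0.2, 0.5, 0.9]
weights = [0.5, 0.3, 0.2]
pooled = linear_pool(opinions, weights)
```

Because the pool is convex, the pooled probability always lies between the smallest and largest individual opinions.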
Models for probability and statistical inference theory and applications
Stapleton, James H
2007-01-01
This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...
Scaling Qualitative Probability
Burgin, Mark
2017-01-01
There are different approaches to qualitative probability, which includes subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...
Probable relationship between partitions of the set of codons and the origin of the genetic code.
Salinas, Dino G; Gallardo, Mauricio O; Osorio, Manuel I
2014-03-01
Here we study the distribution of randomly generated partitions of the set of amino acid-coding codons. Some results are an application of a previous work on the Stirling numbers of the second kind and triplet codes, both to the case of triplet codes having four stop codons, as in the mammalian mitochondrial genetic code, and to hypothetical doublet codes. Extending previous results, in this work it is found that the most probable number of blocks of synonymous codons in a genetic code is similar to the number of amino acids when there are four stop codons, as could also have been the case for a primordial doublet code. The integer partitions associated with patterns of synonymous codons are also studied, and it is shown, for the canonical code, that the standard deviation inside an integer partition is one of the most probable. We think that, in some early epoch, the genetic code might have had a maximum of disorder or entropy, independent of the assignment between codons and amino acids, reaching a state similar to the "code freeze" proposed by Francis Crick. In later stages, deterministic rules may have reassigned codons to amino acids, forming the natural codes, such as the canonical code, but keeping the numerical features describing the set partitions and the integer partitions, like "fossil numbers"; both kinds of partitions concern the set of amino acid-coding codons. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
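The combinatorial object behind this analysis, the Stirling number of the second kind S(n, k) (partitions of an n-set into k nonempty blocks), can be computed from its standard recurrence. The sketch below finds the k that maximizes S(n, k) for n = 60 coding codons (64 triplets minus 4 stop codons, as in the abstract's mitochondrial-code case); this is a back-of-the-envelope illustration, not the paper's full calculation.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind via the recurrence
    S(n, k) = k*S(n-1, k) + S(n-1, k-1), with S(n, n) = 1, S(n, 0) = 0
    for n > 0, and S(n, k) = 0 for k > n."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Under a uniform distribution over set partitions, the most probable number
# of blocks of synonymous codons is the k maximizing S(n, k).
n = 60  # amino acid-coding codons when there are four stop codons
k_star = max(range(1, n + 1), key=lambda k: stirling2(n, k))
```

The maximizing k comes out close to 20, the number of amino acids, which is the paper's observation.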
Modeling the probability of giving birth at health institutions among ...
African Journals Online (AJOL)
Background: Although ante natal care and institutional delivery is effective means for reducing maternal morbidity and mortality, the probability of giving birth at health institutions among ante natal care attendants has not been modeled in Ethiopia. Therefore, the objective of this study was to model predictors of giving birth at ...
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
Illustrating Probability through Roulette: A Spreadsheet Simulation Model
Directory of Open Access Journals (Sweden)
Kala Chand Seal
2005-11-01
Full Text Available Teaching probability can be challenging because the mathematical formulas often are too abstract and complex for the students to fully grasp the underlying meaning and effect of the concepts. Games can provide a way to address this issue. For example, the game of roulette can be an exciting application for teaching probability concepts. In this paper, we implement a model of roulette in a spreadsheet that can simulate outcomes of various betting strategies. The simulations can be analyzed to gain better insights into the corresponding probability structures. We use the model to simulate a particular betting strategy known as the bet-doubling, or Martingale, strategy. This strategy is quite popular and is often erroneously perceived as a winning strategy even though the probability analysis shows that such a perception is incorrect. The simulation allows us to present the true implications of such a strategy for a player with a limited betting budget and relate the results to the underlying theoretical probability structure. The overall validation of the model, its use for teaching, including its application to analyze other types of betting strategies are discussed.
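The bet-doubling dynamics described above can be simulated directly in code as well as in a spreadsheet. The sketch below assumes European roulette (win probability 18/37 on an even-money bet) and a player who stops at the first win or when the bankroll can no longer cover the doubled bet; all parameter values are illustrative.

```python
import random

def martingale_session(budget, base_bet=1, p_win=18 / 37, rng=None):
    """One bet-doubling (Martingale) session: bet on an even-money outcome,
    double after each loss, stop at the first win or at ruin (cannot cover
    the next doubled bet). Returns the session's net profit."""
    bankroll, bet = budget, base_bet
    while bet <= bankroll:
        if rng.random() < p_win:
            return bankroll + bet - budget  # one win recovers all losses + 1
        bankroll -= bet
        bet *= 2
    return bankroll - budget                # ruin: doubled bet unaffordable

rng = random.Random(42)
profits = [martingale_session(budget=100, rng=rng) for _ in range(100000)]
mean_profit = sum(profits) / len(profits)
```

Every winning session nets exactly +1 unit, but the rare ruin (six straight losses on a 100-unit budget) costs 63 units, and the average profit per session is negative, confirming that bet-doubling cannot overcome the house edge.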
TPmsm: Estimation of the Transition Probabilities in 3-State Models
Directory of Open Access Journals (Sweden)
Artur Araújo
2014-12-01
Full Text Available One major goal in clinical applications of multi-state models is the estimation of transition probabilities. The usual nonparametric estimator of the transition matrix for non-homogeneous Markov processes is the Aalen-Johansen estimator (Aalen and Johansen 1978). However, two problems may arise from using this estimator: first, its standard error may be large in heavily censored scenarios; second, the estimator may be inconsistent if the process is non-Markovian. The development of the R package TPmsm has been motivated by several recent contributions that account for these estimation problems. Estimation and statistical inference for transition probabilities can be performed using TPmsm. The TPmsm package provides seven different approaches to three-state illness-death modeling. In two of these approaches the transition probabilities are estimated conditionally on current or past covariate measures. Two real data examples are included for illustration of software usage.
Reach/frequency for printed media: Personal probabilities or models
DEFF Research Database (Denmark)
Mortensen, Peter Stendahl
2000-01-01
The author evaluates two different ways of estimating the reach and frequency of plans for printed media. The first assigns reading probabilities to groups of respondents and calculates reach and frequency by simulation. The second estimates the parameters of a model for reach/frequency. It is concluded ...
Probability density estimation in stochastic environmental models using reverse representations
Van den Berg, E.; Heemink, A.W.; Lin, H.X.; Schoenmakers, J.G.M.
2003-01-01
The estimation of probability densities of variables described by systems of stochastic differential equations has long been done using forward time estimators, which rely on the generation of realizations of the model, forward in time. Recently, an estimator based on the combination of forward and
Flight Overbooking Problem–Use of a Probability Model
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Flight Overbooking Problem – Use of a Probability Model. T Krishnan. Classroom Volume 7 Issue 3 March 2002 pp 56-60. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/007/03/0056-0060 ...
Modeling highway travel time distribution with conditional probability models
Energy Technology Data Exchange (ETDEWEB)
Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
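The convolution building block of this method can be sketched for the special case of independent links: the route travel-time PMF is then the convolution of the link PMFs. The study itself uses link-to-link conditional distributions to capture speed correlation; independence and the toy PMFs below are simplifying assumptions to keep the sketch short.

```python
import numpy as np

# Toy per-link travel-time PMFs over integer time bins (e.g. minutes).
link1 = np.array([0.0, 0.1, 0.3, 0.4, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])
link2 = np.array([0.0, 0.0, 0.2, 0.5, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])

# Assuming independent links, the route travel-time PMF is the convolution
# of the link PMFs, defined over bins 0 .. len(link1)+len(link2)-2.
route = np.convolve(link1, link2)
mean_route = float((np.arange(route.size) * route).sum())
```

The convolved PMF still sums to 1, and its mean is the sum of the link means (2.7 + 3.2 = 5.9 here), a quick sanity check on the construction.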
High-resolution urban flood modelling - a joint probability approach
Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen
2017-04-01
(Divoky et al., 2005). Nevertheless, such events occur and in Ireland alone there are several cases of serious damage due to flooding resulting from a combination of high sea water levels and river flows driven by the same meteorological conditions (e.g. Olbert et al. 2015). A November 2009 fluvial-coastal flooding of Cork City bringing €100m loss was one such incident. This event was used by Olbert et al. (2015) to determine processes controlling urban flooding and is further explored in this study to elaborate on coastal and fluvial flood mechanisms and their roles in controlling water levels. The objective of this research is to develop a methodology to assess combined effect of multiple source flooding on flood probability and severity in urban areas and to establish a set of conditions that dictate urban flooding due to extreme climatic events. These conditions broadly combine physical flood drivers (such as coastal and fluvial processes), their mechanisms and thresholds defining flood severity. The two main physical processes controlling urban flooding: high sea water levels (coastal flooding) and high river flows (fluvial flooding), and their threshold values for which flood is likely to occur, are considered in this study. Contribution of coastal and fluvial drivers to flooding and their impacts are assessed in a two-step process. The first step involves frequency analysis and extreme value statistical modelling of storm surges, tides and river flows and ultimately the application of joint probability method to estimate joint exceedence return periods for combination of surges, tide and river flows. In the second step, a numerical model of Cork Harbour MSN_Flood comprising a cascade of four nested high-resolution models is used to perform simulation of flood inundation under numerous hypothetical coastal and fluvial flood scenarios. The risk of flooding is quantified based on a range of physical aspects such as the extent and depth of inundation (Apel et al
A verification system survival probability assessment model test methods
International Nuclear Information System (INIS)
Jia Rui; Wu Qiang; Fu Jiwei; Cao Leituan; Zhang Junnan
2014-01-01
Owing to limitations of funding and test conditions, large complex systems are often tested with very few sub-samples. Under single-sample conditions, making an accurate evaluation of performance is important for the reinforcement of complex systems. The technical maturity of an assessment model can be significantly improved if the model can be experimentally validated. This paper presents a test method for a verification-system survival probability assessment model; the method uses sample test results from the test system to verify the correctness of the assessment model and its prior information. (authors)
Wang, Xing M.
2009-01-01
After a brief introduction to Probability Bracket Notation (PBN), indicator operator and conditional density operator (CDO), we investigate probability spaces associated with various quantum systems: system with one observable (discrete or continuous), system with two commutative observables (independent or dependent) and a system of indistinguishable non-interacting many-particles. In each case, we derive unified expressions of conditional expectation (CE), conditional probability (CP), and ...
A propagation model of computer virus with nonlinear vaccination probability
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks
Energy Technology Data Exchange (ETDEWEB)
Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S
2011-04-16
In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which each state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to provide a way for the expert to express the degree of ignorance they feel about the estimates being provided.
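The first method can be illustrated with a small sketch. Assuming the expert supplies probabilities that the variable has reached at least each ordered state (so each state implies all the preceding ones), the per-state probabilities follow by simple differencing; the function name and the three example states are hypothetical, not from the paper.

```python
def state_probs_from_cumulative(cum):
    """Convert elicited 'at least state k' probabilities (non-increasing,
    starting at 1.0 for the lowest state) into per-state probabilities."""
    if any(a < b for a, b in zip(cum, cum[1:])):
        raise ValueError("cumulative probabilities must be non-increasing")
    padded = list(cum) + [0.0]
    return [padded[i] - padded[i + 1] for i in range(len(cum))]

# Hypothetical elicitation: P(>= nominal) = 1.0, P(>= degraded) = 0.3,
# P(>= failed) = 0.05 -> per-state masses for nominal/degraded/failed.
probs = state_probs_from_cumulative([1.0, 0.3, 0.05])
```

The expert thus answers only monotone "at least this bad" questions, which are easier to reason about than a full joint assignment.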
Statistical Validation of Normal Tissue Complication Probability Models
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)
2012-09-01
Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
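A permutation test of model performance like the one recommended above can be sketched with a rank-based AUC and label shuffling. This is a generic illustration, not the authors' LASSO pipeline, and the data below are synthetic.

```python
import numpy as np

def auc(y, score):
    """Area under the ROC curve via the Mann-Whitney pairwise identity."""
    y, score = np.asarray(y), np.asarray(score)
    pos, neg = score[y == 1], score[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def permutation_pvalue(y, score, n_perm=2000, seed=1):
    """Fraction of label permutations whose AUC reaches the observed AUC:
    a p-value for the null that scores carry no information about labels."""
    rng = np.random.default_rng(seed)
    observed = auc(y, score)
    hits = sum(auc(rng.permutation(y), score) >= observed
               for _ in range(n_perm))
    return hits / n_perm

# Synthetic NTCP-like data: scores separate the two outcomes well.
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
score = np.array([0.1, 0.3, 0.35, 0.4, 0.45, 0.6, 0.8, 0.9])
p = permutation_pvalue(y, score, n_perm=1000)
```

A small p indicates the model's discrimination is unlikely under chance, which is the statistical-significance check the abstract recommends alongside repeated double cross-validation.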
Mortality Probability Model III and Simplified Acute Physiology Score II
Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams
2009-01-01
Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
Assigning probability distributions to input parameters of performance assessment models
International Nuclear Information System (INIS)
Mishra, Srikanta
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available
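The method of moments mentioned among the fitting techniques can be illustrated for a gamma distribution, whose moment estimators have closed forms (shape k = mean²/variance, scale θ = variance/mean). The sample here is synthetic, not performance assessment data.

```python
import numpy as np

def gamma_moments_fit(data):
    """Method-of-moments fit of a gamma distribution:
    shape k = mean^2 / var, scale theta = var / mean."""
    m, v = np.mean(data), np.var(data)
    return m * m / v, v / m

# Recover known parameters from a large synthetic sample.
rng = np.random.default_rng(42)
sample = rng.gamma(shape=2.5, scale=3.0, size=50_000)
k_hat, theta_hat = gamma_moments_fit(sample)
```

Maximum likelihood or nonlinear least squares would typically be preferred with ample data; moment matching is the quick first pass.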
Modeling spatial variation in avian survival and residency probabilities
Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth
2010-01-01
The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
Calvert, Carol Elaine
2014-01-01
This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased, and hence estimates of the probability of rare events based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a single best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed.
Human Inferences about Sequences: A Minimal Transition Probability Model.
Directory of Open Access Journals (Sweden)
Florent Meyniel
2016-12-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
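A minimal version of such a transition probability learner can be sketched with exponentially decaying (leaky) transition counts, the decay rate playing the role of the model's single free parameter. This is an illustrative reimplementation of the general idea, not the authors' exact model.

```python
import numpy as np

def leaky_transition_estimates(seq, decay=0.9):
    """Running estimates of p(next=1 | previous symbol) for a binary
    sequence, from exponentially decaying transition counts. `decay`
    forgets remote observations, producing the sequential effects
    described in the abstract."""
    counts = np.ones((2, 2))           # Laplace prior over transitions
    estimates = []
    for prev, cur in zip(seq, seq[1:]):
        counts *= decay                # leak every count
        counts[prev, cur] += 1.0       # observe this transition
        estimates.append(counts[:, 1] / counts.sum(axis=1))
    return np.array(estimates)

# After a long alternating stream, the model expects an alternation:
est = leaky_transition_estimates([0, 1] * 50)
```

Surprise at each step can then be read off as the negative log of the probability the model assigned to the observed symbol.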
NASA Lewis Launch Collision Probability Model Developed and Analyzed
Bollenbacher, Gary; Guptill, James D
1999-01-01
There are nearly 10,000 tracked objects orbiting the earth. These objects encompass manned objects, active and decommissioned satellites, spent rocket bodies, and debris. They range from a few centimeters across to the size of the MIR space station. Any time a new satellite is launched, the launch vehicle with its payload attached passes through an area of space in which these objects orbit. Although the population density of these objects is low, there is always a small but finite probability of collision between the launch vehicle and one or more of these space objects. Even though the probability of collision is very low, for some payloads even this small risk is unacceptable. To mitigate the small risk of collision associated with launching at an arbitrary time within the daily launch window, NASA performs a prelaunch mission assurance Collision Avoidance Analysis (or COLA). For the COLA of the Cassini spacecraft, the NASA Lewis Research Center conducted an in-house development and analysis of a model for launch collision probability. The model allows a minimum clearance criterion to be used with the COLA analysis to ensure an acceptably low probability of collision. If, for any given liftoff time, the nominal launch vehicle trajectory would pass a space object with less than the minimum required clearance, launch would not be attempted at that time. The model assumes that the nominal positions of the orbiting objects and of the launch vehicle can be predicted as a function of time, and therefore that any tracked object that comes within close proximity of the launch vehicle can be identified. For any such pair, these nominal positions can be used to calculate a nominal miss distance. The actual miss distances may differ substantially from the nominal miss distance, due in part to the statistical uncertainty of the knowledge of the objects' positions. The model further assumes that these position uncertainties can be described with position covariance matrices.
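The core calculation, the probability that the actual miss distance (distributed around the nominal miss with the combined position uncertainty) falls below a required clearance, can be sketched in one dimension. A real COLA evaluates full 3-D position covariance matrices, so this scalar Gaussian version is only illustrative.

```python
import math

def collision_probability(nominal_miss, combined_sigma, clearance):
    """1-D sketch: probability that the true separation, modeled as
    Gaussian around the nominal miss distance with the combined position
    uncertainty of vehicle and object, falls below the clearance."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo = (-clearance - nominal_miss) / combined_sigma
    hi = (clearance - nominal_miss) / combined_sigma
    return phi(hi) - phi(lo)
```

If this probability exceeded the acceptable threshold for a candidate liftoff time, that time would be excluded from the launch window, which is exactly the screening role the minimum clearance criterion plays in the text.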
Directory of Open Access Journals (Sweden)
Isabel C. Pérez Hoyos
2016-04-01
Groundwater Dependent Ecosystems (GDEs) are increasingly threatened by humans' rising demand for water resources. Consequently, it is imperative to identify the location of GDEs in order to protect them. This paper develops a methodology to estimate the probability of an ecosystem being groundwater dependent. Probabilities are obtained by modeling the relationship between the known locations of GDEs and factors influencing groundwater dependence, namely water table depth and climatic aridity index. Probabilities are derived for the state of Nevada, USA, using modeled water table depth and aridity index values obtained from the Global Aridity database. The final model results from a performance comparison of classification trees (CT) and random forests (RF). Based on a threshold-independent accuracy measure, RF has a better ability to generate probability estimates. Considering a threshold that minimizes the misclassification rate for each model, RF also proves to be more accurate. Regarding training accuracy, performance measures such as accuracy, sensitivity, and specificity are higher for RF. For the test set, higher values of accuracy and kappa for CT highlight the fact that these measures are greatly affected by low prevalence. As shown for RF, the choice of the cutoff probability value has important consequences for model accuracy and the overall proportion of locations where GDEs are found.
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
Modeling normal-tissue complication probabilities: where are we now?
International Nuclear Information System (INIS)
Bentzen, Soeren M.
1995-01-01
Radiation-induced normal-tissue morbidity is the ultimate limitation to the obtainable loco-regional tumor control in radiation oncology, and throughout the 100-year history of this specialty an improved understanding of the radiobiology of normal tissues has been a high priority. To this end, mathematical modeling of normal-tissue complication probabilities (NTCP) has been an important tool. This lecture will present the status of four current research areas: 1) The use of the linear-quadratic (LQ) model for the description of dose fractionation and dose-rate effects. The value of the LQ model, at least in the range of dose per fraction from 1-5 Gy, is relatively well established from clinical studies. Yet, as soon as we turn to prediction, the lack of α/β estimates for many clinical endpoints and the relatively poor precision of the available estimates pose serious limitations. 2) The volume effect for NTCP. Despite considerable modeling efforts, the current models seem to be much too crude when compared with our biological understanding of the clinical volume effect. 3) NTCP models of proliferative organization. Progress in cell kinetics and in the understanding of homeostatic feedback mechanisms in normal tissues has renewed the interest in tissue structure models of the Wheldon-Michalowski-Kirk type. This may in turn improve our ability to predict latent times of normal-tissue reactions and re-irradiation tolerance after long time intervals. 4) Patient-to-patient variability in the response to radiotherapy. The evidence is growing that a number of both extrinsic and intrinsic factors affect the NTCP of individual patients. This will shortly be reviewed. As our biological knowledge is rapidly growing in all of these areas, the current generation of models may have to be considerably refined.
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation, down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression.
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling for PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs and peak discharges, as well as the reliability and plausibility of the estimates, are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both approaches lead to PMF estimates for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.
On New Cautious Structural Reliability Models in the Framework of Imprecise Probabilities
DEFF Research Database (Denmark)
Utkin, Lev V.; Kozine, Igor
2010-01-01
Uncertainty of parameters in engineering design has been modeled in different frameworks such as interval analysis, fuzzy set and possibility theories, random set theory and imprecise probability theory. The authors of this paper for many years have been developing new imprecise reliability… Developing models enabling to answer these two questions has been the focus of the new research, the results of which are described in the paper. In this paper we describe new models for computing structural reliability based on measurements of values of stress and strength, taking account of the fact that the number of observations may be rather small. The approach to developing the models is based on using the imprecise Bayesian inference models (Walley 1996). These models provide a rich supply of coherent imprecise inferences that are expressed in terms of posterior upper and lower prob…
LAMP-B: a Fortran program set for the lattice cell analysis by collision probability method
International Nuclear Information System (INIS)
Tsuchihashi, Keiichiro
1979-02-01
Nature of physical problem solved: LAMP-B solves an integral transport equation by the collision probability method for a wide variety of lattice cell geometries: spherical, plane and cylindrical lattice cells; square and hexagonal arrays of pin rods; annular clusters and square clusters. LAMP-B produces homogenized constants for multi- and/or few-group diffusion theory programs. Method of solution: LAMP-B performs an exact numerical integration to obtain the collision probabilities. Restrictions on the complexity of the problem: not more than 68 groups in the fast group calculation, and not more than 20 regions in the resonance integral calculation. Typical running time: varies with the number of energy groups and the selection of the geometry. Unusual features of the program: any constituent subprogram, or any combination of them, can be used, so partial use of the program is possible. (author)
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
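The fault tree combination of basic events can be sketched with the standard OR/AND gate formulas for independent events. The wreck events and probabilities below are hypothetical, not values from the study.

```python
def or_gate(probs):
    """P(at least one of several independent basic events occurs)."""
    complement = 1.0
    for p in probs:
        complement *= (1.0 - p)
    return 1.0 - complement

def and_gate(probs):
    """P(all of several independent basic events occur)."""
    joint = 1.0
    for p in probs:
        joint *= p
    return joint

# Hypothetical wreck: discharge requires a hull breach AND a non-empty
# tank, where a breach follows from corrosion OR an anchor strike.
p_breach = or_gate([0.02, 0.005])        # annual basic-event probabilities
p_discharge = and_gate([p_breach, 0.6])  # 60% chance a tank still holds oil
```

Propagating intervals or distributions instead of point values through the same gates is what enables the uncertainty and sensitivity analyses the abstract highlights.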
Probability models of the x, y pole coordinates data
Sen, A.; Niedzielski, T.; Kosek, W.
2008-12-01
The aim of this study is to find the most appropriate probabilistic model for the x, y pole coordinate time series. We have analyzed the IERS eopc04_05 data set covering the period 1962-2008. We have also considered the residual x, y pole coordinate time series, computed as the difference between the original data and the corresponding least-squares model of the Chandler circle and annual ellipse. Using the measures of skewness and kurtosis of the empirical distribution of the data, we find that the x, y pole coordinates and the corresponding residual time series cannot be modeled by a normal (Gaussian) distribution, which has zero skewness and a kurtosis value of 3. We have fitted several non-Gaussian distributions to the datasets, including the Generalized Extreme Value distribution, 4-parameter Beta distribution, Johnson SB and SU distributions, Generalized Pareto distribution and Wakeby distribution. The suitability of these distributions as probabilistic models for the x, y pole coordinates and the corresponding residual time series is discussed.
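The skewness/kurtosis screening described above can be sketched directly from the moment definitions (a normal distribution has skewness 0 and kurtosis 3). The data below are synthetic stand-ins, not the pole coordinate series.

```python
import numpy as np

def skewness(x):
    z = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return float(np.mean(z ** 3))   # 0 for a normal distribution

def kurtosis(x):
    z = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return float(np.mean(z ** 4))   # 3 for a normal distribution

rng = np.random.default_rng(7)
gauss = rng.normal(size=100_000)             # passes the normality screen
heavy = rng.standard_t(df=4, size=100_000)   # heavy tails: excess kurtosis
```

Series whose sample moments depart clearly from (0, 3), as the pole coordinate residuals do, motivate fitting the non-Gaussian families listed in the abstract.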
Options and pitfalls of normal tissues complication probability models
International Nuclear Information System (INIS)
Dorr, Wolfgang
2011-01-01
Technological improvements in the physical administration of radiotherapy have led to increasing conformation of the treatment volume (TV) with the planning target volume (PTV) and of the irradiated volume (IV) with the TV. In this process of improving the physical quality of radiotherapy, the total volumes of organs at risk exposed to significant doses have decreased significantly, resulting in increased inhomogeneities in the dose distributions within these organs. This has resulted in a need to identify and quantify volume effects in different normal tissues. Today, irradiated volume must be considered a 6th 'R' of radiotherapy, in addition to the 5 'Rs' defined by Withers and Steel in the mid-to-late 1980s. The current status of knowledge of these volume effects has recently been summarized for many organs and tissues by the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) initiative [Int. J. Radiat. Oncol. Biol. Phys. 76 (3) Suppl., 2010]. However, the concept of using dose-volume histogram parameters as a basis for dose constraints, even without applying any models for normal tissue complication probabilities (NTCP), is based on assumptions that are not met in routine clinical treatment planning. First, and most important, dose-volume histogram (DVH) parameters are usually derived from a single 'snapshot' CT scan, without considering physiological (urinary bladder, intestine) or radiation-induced (edema, patient weight loss) changes during radiotherapy. Also, individual variations, or different institutional strategies for delineating organs at risk, are rarely considered. Moreover, the reduction of the 3-dimensional dose distribution into a 2-dimensional DVH parameter implies that the localization of the dose within an organ is irrelevant; there are ample examples that this assumption is not justified. Routinely used dose constraints also do not take into account that the residual function of an organ may be
Blocking probability in the hose-model optical VPN with different number of wavelengths
Roslyakov, Alexander V.
2017-04-01
Connection setup with guaranteed quality of service (QoS) in an optical virtual private network (OVPN) is a major goal for network providers. To support this, we propose a QoS-based OVPN connection setup mechanism over a WDM network to the end customer. The proposed WDM network model can be specified in terms of a QoS parameter such as blocking probability, which we estimate based on the hose-model OVPN. In this mechanism, OVPN connections can also be created or deleted according to the availability of wavelengths on the optical path. In this paper we consider the impact of the number of wavelengths on the computation of blocking probability. The goal of the work is to dynamically provide the best OVPN connection under frequent arrivals of connection requests with QoS requirements.
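Blocking probability as a function of the number of wavelengths can be illustrated with the classical Erlang-B formula, assuming Poisson connection arrivals and treating each wavelength as a server. The paper's hose-model computation is more involved, so this is only a baseline sketch.

```python
def erlang_b(traffic, channels):
    """Erlang-B blocking probability for offered `traffic` (in Erlangs)
    on `channels` wavelengths, via the numerically stable recurrence
    B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

# Adding wavelengths sharply reduces blocking for the same offered load:
blocking = {w: erlang_b(5.0, w) for w in (4, 8, 16)}
```

The recurrence avoids the factorials of the closed-form expression, so it stays stable even for links with many wavelengths.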
Directory of Open Access Journals (Sweden)
Gabriel Rodríguez
2016-06-01
Following Xu and Perron (2014), I applied the extended RLS model to the daily stock market returns of Argentina, Brazil, Chile, Mexico and Peru. This model replaces the constant probability of level shifts for the entire sample with varying probabilities that record periods with extremely negative returns. Furthermore, it incorporates a mean reversion mechanism with which the magnitude and the sign of the level shift component vary in accordance with past level shifts that deviate from the long-term mean. Four RLS models are therefore estimated: the basic RLS, the RLS with varying probabilities, the RLS with mean reversion, and a combined RLS model with mean reversion and varying probabilities. The results show that the estimated parameters are highly significant, especially those of the mean reversion model. An analysis of ARFIMA and GARCH models is also performed in the presence of level shifts, which shows that once these shifts are taken into account in the modeling, the long memory characteristics and GARCH effects disappear. I also find that the predictive performance of the RLS models is superior to that of classic long-memory models such as ARFIMA(p,d,q), GARCH and FIGARCH. The evidence indicates that, with rare exceptions, the RLS models (in all their variants) show the best performance or are included in the 10% Model Confidence Set (MCS). On rare occasions the GARCH and ARFIMA models appear to dominate, but these are exceptions. When volatility is measured by squared returns, the great exception is Argentina, where the GARCH and FIGARCH models dominate.
Directory of Open Access Journals (Sweden)
Samy Ismail Elmahdy
2016-01-01
In the current study, Penang Island, one of several mountainous areas in Malaysia that is often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation and spatial probability weighted approach with a model builder was applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted probability spatial model based on their contribution to the landslide hazard. The results showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability for landslide occurrence; the total areas were 21.393 km2 (11.84%) and 58.690 km2 (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time and cost effective and can be used as a reference by geological and geotechnical engineers.
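A weighted probability spatial model of the kind described combines ranked thematic layers by a weighted linear sum; a minimal sketch with made-up layer ranks and weights (the layer names and values are hypothetical, not the study's) might look like this:

```python
def weighted_overlay(layers, weights):
    """Combine ranked thematic layers (same shape, nested lists) into a
    single hazard surface by a weighted linear sum."""
    assert len(layers) == len(weights)
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for layer, w in zip(layers, weights):
        for i in range(rows):
            for j in range(cols):
                out[i][j] += w * layer[i][j]
    return out

# Hypothetical ranked layers (ranks 0-4) on a tiny 2x2 grid:
slope  = [[4, 2], [1, 0]]
aspect = [[3, 3], [0, 1]]
elev   = [[4, 1], [2, 0]]
hazard = weighted_overlay([slope, aspect, elev], [0.5, 0.3, 0.2])
print(hazard)  # higher values = higher landslide-probability class
```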
Prompt- and delayed-fission probabilities in actinides within a doorway state model
International Nuclear Information System (INIS)
Bellia, G.; Del Zoppo, A.; Migneco, E.; Russo, G.
1984-01-01
The formalism of the doorway state model for fission is extended to excitation energy as low as the pairing gap region in the intermediate well of the potential barrier. Expressions for the wave function amplitudes which describe the various possible admixtures of the vibrational modes into the compound-state sets are generalized and can be used to calculate prompt- and delayed-fission probabilities in the energy range between the isomer level and the top of the barrier. The occurrence of different coupling situations for the ²³⁸U 1⁻0 channel is discussed.
Inland waterway transport: modelling the probability of accidents
Roeleven, D.; Stipdonk, H.L.; Kok, Matthijs; de Vries, W.A.
1995-01-01
In 1989 the Dutch government started the project "Safety in Inland Waterway Transport" to establish a minimum safety level and to develop a model to assess the effect and effectiveness of new safety measures. This model, called the Risk Effect Model, calculates the integral impacts of safety…
Directory of Open Access Journals (Sweden)
Mark A Walker
2017-11-01
Ectopic heartbeats can trigger reentrant arrhythmias, leading to ventricular fibrillation and sudden cardiac death. Such events have been attributed to perturbed Ca2+ handling in cardiac myocytes, leading to spontaneous Ca2+ release and delayed afterdepolarizations (DADs). However, the ways in which perturbation of specific molecular mechanisms alters the probability of ectopic beats are not understood. We present a multiscale model of cardiac tissue incorporating a biophysically detailed three-dimensional model of the ventricular myocyte. This model reproduces realistic Ca2+ waves and DADs driven by stochastic Ca2+ release channel (RyR) gating and is used to study mechanisms of DAD variability. In agreement with previous experimental and modeling studies, key factors influencing the distribution of DAD amplitude and timing include cytosolic and sarcoplasmic reticulum Ca2+ concentrations, inwardly rectifying potassium current (IK1) density, and gap junction conductance. The cardiac tissue model is used to investigate how random RyR gating gives rise to probabilistic triggered activity in a one-dimensional myocyte tissue model. A novel spatial-average filtering method for estimating the probability of extreme (i.e. rare, high-amplitude) stochastic events from a limited set of spontaneous Ca2+ release profiles is presented. These events occur when randomly organized clusters of cells exhibit synchronized, high-amplitude Ca2+ release flux. It is shown how reduced IK1 density and gap junction coupling, as observed in heart failure, increase the probability of extreme DADs by multiple orders of magnitude. This method enables prediction of arrhythmia likelihood and its modulation by alterations of other cellular mechanisms.
Joint Probability Models of Radiology Images and Clinical Annotations
Arnold, Corey Wells
2009-01-01
Radiology data, in the form of images and reports, is growing at a high rate due to the introduction of new imaging modalities, new uses of existing modalities, and the growing importance of objective image information in the diagnosis and treatment of patients. This increase has resulted in an enormous set of image data that is richly annotated…
Canonical Probability Distributions for Model Building, Learning, and Inference
National Research Council Canada - National Science Library
Druzdzel, Marek J
2006-01-01
...) improvements of stochastic sampling algorithms based on importance sampling, and (3) practical applications of our general purpose decision modeling environment to diagnosis of complex systems...
Application of Probability Methods to Assess Crash Modeling Uncertainty
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2007-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behavior and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section is the focus of this paper. The results of a probabilistic analysis using finite element simulations are compared with experimental data.
Estimation and asymptotic theory for transition probabilities in markov renewal multi-state models
Spitoni, Cristian|info:eu-repo/dai/nl/304625957; Verduijn, Marion; Putter, Hein
2014-01-01
In this paper we discuss estimation of transition probabilities for semi-Markov multi-state models. Non-parametric and semi-parametric estimators of the transition probabilities for a large class of models (forward going models) are proposed. Large sample theory is derived using the functional delta method.
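In the Markov special case, transition probabilities over an interval follow from the Chapman-Kolmogorov equations: the interval matrix is the product of the per-step matrices. The sketch below illustrates this for a progressive illness-death model with hypothetical per-step probabilities; it is not the semi-Markov estimator proposed in the paper.

```python
def mat_mult(a, b):
    """Plain matrix product of two nested-list matrices."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transition_probabilities(step_matrices):
    """Chapman-Kolmogorov in discrete time: the transition-probability
    matrix over an interval is the product of the per-step matrices."""
    out = step_matrices[0]
    for p_step in step_matrices[1:]:
        out = mat_mult(out, p_step)
    return out

# Progressive illness-death model: states 0=healthy, 1=ill, 2=dead
p_step = [[0.90, 0.08, 0.02],
          [0.00, 0.85, 0.15],
          [0.00, 0.00, 1.00]]
p_2 = transition_probabilities([p_step, p_step])
print(p_2[0])  # P(healthy -> each state) after two steps
```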
Logit models for the probability of winning football games
Directory of Open Access Journals (Sweden)
Alessandro Martins Alves
2011-12-01
Two ordinal logit models are applied to fit the results of matches in the Brazilian football championship. As explanatory variables, measures of the teams' previous performance are employed: along all preceding games, along recent games, and when playing at home and as a visitor. The results of the model fitting are used in simulations performed to forecast the number of points to be earned in the following games and to anticipate the teams' final classification.
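A cumulative (proportional-odds) logit model of this kind maps a linear predictor and ordered cutpoints to loss/draw/win probabilities; the cutpoints and predictor values below are purely hypothetical, for illustration:

```python
import math

def ordinal_logit_probs(eta, cutpoints):
    """Cumulative (proportional-odds) logit model: turn a linear
    predictor and sorted cutpoints into ordered-category probabilities."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [sigmoid(c - eta) for c in cutpoints] + [1.0]
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return probs  # assumed ordering: loss, draw, win

# Hypothetical cutpoints and a home-advantage linear predictor:
p_loss, p_draw, p_win = ordinal_logit_probs(eta=0.4, cutpoints=[-0.6, 0.5])
print(round(p_loss, 3), round(p_draw, 3), round(p_win, 3))
```

Raising the linear predictor (e.g. stronger recent form) shifts probability mass toward the top category.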
DEFF Research Database (Denmark)
Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo
2017-01-01
In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness–death model. The method is based on binomial regression, where the response is the indicator of the occupancy for the given state along time. Randomly weighted score...
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often result from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest, the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities.
Brémaud, Pierre
2017-01-01
The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to the possible applications, in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.
Some aspects of statistical modeling of human-error probability
International Nuclear Information System (INIS)
Prairie, R.R.
1982-01-01
Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error so that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA (event tree) to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of the fault tree to which human error could contribute. He then asks the human factors analyst to perform an HRA on that element.
Introduction to probability with R
Baclawski, Kenneth
2008-01-01
Foreword. Preface. Sets, Events, and Probability: The Algebra of Sets; The Bernoulli Sample Space; The Algebra of Multisets; The Concept of Probability; Properties of Probability Measures; Independent Events; The Bernoulli Process; The R Language. Finite Processes: The Basic Models; Counting Rules; Computing Factorials; The Second Rule of Counting; Computing Probabilities. Discrete Random Variables: The Bernoulli Process: Tossing a Coin; The Bernoulli Process: Random Walk; Independence and Joint Distributions; Expectations; The Inclusion-Exclusion Principle. General Random Variable…
An Empirical Model for Probability of Packet Reception in Vehicular Ad Hoc Networks
Directory of Open Access Journals (Sweden)
2009-03-01
Today's advanced simulators facilitate thorough studies on VANETs but are hampered by the computational effort required to consider all of the important influencing factors. In particular, large-scale simulations involving thousands of communicating vehicles cannot be served in reasonable simulation times with typical network simulation frameworks. A solution to this challenge might be found in hybrid simulations that encapsulate parts of a discrete-event simulation in an analytical model while maintaining the simulation's credibility. In this paper, we introduce a hybrid simulation model that analytically represents the probability of packet reception in an IEEE 802.11p network based on four inputs: the distance between sender and receiver, transmission power, transmission rate, and vehicular traffic density. We also describe the process of building our model, which utilizes a large set of simulation traces and is based on general linear least squares approximation techniques. The model is then validated via the comparison of simulation results with the model output. In addition, we present a transmission power control problem in order to show the model's suitability for solving parameter optimization problems, which are of fundamental importance to VANETs.
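The paper fits its analytical model by general linear least squares over a large set of simulation traces; a one-predictor sketch with invented traces (reception probability vs. distance only, rather than all four inputs) shows the idea:

```python
def linear_least_squares(xs, ys):
    """Ordinary least squares fit y ~ a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical simulation traces: packet reception probability vs.
# sender-receiver distance (m):
dist = [50.0, 100.0, 150.0, 200.0, 250.0]
prob = [0.97, 0.90, 0.80, 0.68, 0.55]
a, b = linear_least_squares(dist, prob)
print(round(a, 3), round(b, 5))  # intercept and (negative) slope
```

A real hybrid model would use a richer basis (and all four inputs), but the fitting machinery is the same least-squares principle.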
Setting limits on supersymmetry using simplified models
Gutschow, C.
2012-01-01
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...
Directory of Open Access Journals (Sweden)
Ibsen Chivatá Cárdenas
2008-05-01
This article presents a rainfall model constructed by applying non-parametric modelling and imprecise probabilities; these tools were used because there was not enough homogeneous information in the study area. The area's hydrological information regarding rainfall was scarce and existing hydrological time series were not uniform. A distributed extended rainfall model was constructed from so-called probability boxes (p-boxes), multinomial probability distributions and confidence intervals (a friendly algorithm was constructed for non-parametric modelling by combining the last two tools). This model confirmed the high level of uncertainty involved in local rainfall modelling. Uncertainty encompassed the whole range (domain) of probability values, thereby showing the severe limitations on information and leading to the conclusion that a detailed estimation of probability would lead to significant error. Nevertheless, relevant information was extracted; it was estimated that the maximum daily rainfall threshold (70 mm) would be surpassed at least once every three years, as was the magnitude of uncertainty affecting hydrological parameter estimation. This paper's conclusions may be of interest to non-parametric modellers and decision-makers, as such modelling and imprecise probability represent an alternative for hydrological variable assessment and may be an obligatory procedure in the future. Their potential lies in treating scarce information, and they represent a robust modelling strategy for non-seasonal stochastic modelling conditions.
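A probability box can be represented by pointwise lower and upper bounds on the CDF, so an event probability comes out as an interval rather than a point value. The sketch below uses invented CDF bounds (not the article's data) to show how the exceedance probability of a 70 mm threshold would be bounded:

```python
def pbox_exceedance(thresh, lower_cdf, upper_cdf):
    """Interval probability that X exceeds `thresh`, given pointwise
    lower/upper CDF bounds (a probability box) as functions.
    P(X > t) is bounded by [1 - upper_cdf(t), 1 - lower_cdf(t)]."""
    return 1.0 - upper_cdf(thresh), 1.0 - lower_cdf(thresh)

# Hypothetical CDF bounds for annual-maximum daily rainfall (mm):
lower = lambda x: max(0.0, min(1.0, (x - 40.0) / 60.0))  # CDF at least this low
upper = lambda x: max(0.0, min(1.0, (x - 10.0) / 60.0))  # CDF at most this high
lo, hi = pbox_exceedance(70.0, lower, upper)
print(lo, hi)  # a wide interval signals large epistemic uncertainty
```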
Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv
2012-01-01
This paper presents the probability-density-based gradient projection (GP) of the null space of the Jacobian for a 25 degree of freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) difference between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for subjects included in and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error of included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
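A common instance of the (inverted) S-shaped probability weighting functions mentioned above is the Tversky-Kahneman form w(p) = p^γ / (p^γ + (1-p)^γ)^(1/γ), which for γ < 1 overweights small probabilities and underweights large ones; the γ values below are illustrative:

```python
def weight_tk(p, gamma):
    """Tversky-Kahneman (inverse-S) probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Distortion of a small and a large probability for gamma = 0.6:
for p in (0.05, 0.5, 0.9):
    print(p, round(weight_tk(p, 0.6), 3))
```

Individual differences then correspond to sampling γ per subject, e.g. from a weakly informative prior, as the study suggests.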
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration of its application on a social media maturity data set. Specifically, we employ Necessary Condition Analysis (NCA) to identify maturity stage boundaries as necessary conditions and Qualitative Comparative Analysis (QCA) to arrive at multiple configurations that can be equally effective in progressing to higher…
International Nuclear Information System (INIS)
Tercariol, Cesar Augusto Sangaletti; Kiipper, Felipe de Moura; Martinez, Alexandre Souto
2007-01-01
Consider that the coordinates of N points are randomly generated along the edges of a d-dimensional hypercube (random point problem). The probability P^(d,N)_{m,n} that an arbitrary point is the mth nearest neighbour of its own nth nearest neighbour (Cox probabilities) plays an important role in spatial statistics. It has also been useful in the description of physical processes in disordered media. Here we propose a simpler derivation of Cox probabilities, in which we stress the role played by the system dimensionality d. In the limit d → ∞, the distances between pairs of points become independent (random link model) and closed analytical forms for the neighbourhood probabilities are obtained, both in the thermodynamic limit and for finite-size systems. Breaking the distance symmetry constraint leads to the random map model, for which the Cox probabilities are obtained for two cases: whether or not a point is its own nearest neighbour.
Berg, M. D.; Kim, H. S.; Friendlich, M. A.; Perez, C. E.; Seidlick, C. M.; LaBel, K. A.
2011-01-01
We present SEU test and analysis of the Microsemi ProASIC3 FPGA. SEU probability models are incorporated for device evaluation. Included is a comparison to the RTAXS FPGA, illustrating the effectiveness of the overall testing methodology.
Zhao, Feng; Zou, Kai; Shang, Hong; Ji, Zheng; Zhao, Huijie; Huang, Wenjiang; Li, Cunjun
2010-10-01
In this paper we present an analytical model for the computation of radiation transfer in discontinuous vegetation canopies. Some initial results for the gap probability and bidirectional gap probability of discontinuous vegetation canopies, which are important parameters determining the radiative environment of the canopies, are given and compared with a 3-D computer simulation model. In the model, negative exponential attenuation of light within individual plant canopies is assumed. The computation of gap probability is then resolved by determining the entry and exit points of the ray through the individual plants via their equations in space. For the bidirectional gap probability, which determines the single-scattering contribution of the canopy, a gap statistical analysis based model was adopted to correct the dependence of gap probabilities in the solar and viewing directions. The model incorporates structural characteristics such as plant size, leaf size, row spacing, foliage density, planting density, and leaf inclination distribution. Available experimental data are inadequate for a complete validation of the model, so it was evaluated against a three-dimensional computer simulation model for 3-D vegetative scenes, which shows good agreement between the two models' results. This model should be useful for the quantification of light interception and the modeling of bidirectional reflectance distributions of discontinuous canopies.
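Under the negative exponential attenuation assumed in the model, the within-crown gap probability follows a Beer-Lambert law, P_gap = exp(-G · LAI / cos θ), with G the leaf projection coefficient and θ the zenith angle. A minimal sketch (all values illustrative, not the paper's parameterization):

```python
import math

def gap_probability(lai, theta_deg, g=0.5):
    """Within-crown gap probability under negative exponential
    (Beer-Lambert) attenuation: exp(-G * LAI / cos(theta))."""
    theta = math.radians(theta_deg)
    return math.exp(-g * lai / math.cos(theta))

# Gap probability drops with leaf area index and with zenith angle:
for lai in (1.0, 3.0):
    print(lai, round(gap_probability(lai, 30.0), 3))
```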
International Nuclear Information System (INIS)
Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi
2002-01-01
In a nuclear power plant, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. To evaluate the effectiveness of AM in PSA, it is necessary to use the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment. However, there has so far been no suitable quantification method for obtaining the decision-making failure probability in PSA, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantifying the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and applied it to a typical pressurized water reactor (PWR) plant. As a result: (1) The method can quantify the decision-making failure probability for PSA by general analysts, who do not necessarily possess professional human factors knowledge, through the choice of a suitable basic failure probability and error factor. (2) In a trial evaluation based on severe accident analysis of a typical PWR plant, the decision-making failure probabilities of six AMs ranged from 0.23 to 0.41 with the screening evaluation method and from 0.10 to 0.19 with the detailed evaluation method; in a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability from the screening evaluation method exceeds that from the detailed evaluation method with 99% probability, and for the AMs in this study it did so with 100% probability. This result showed that the screening evaluation method is more conservative than the detailed evaluation method, and the screening evaluation method satisfied…
Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul Sf; Zhu, Tingshao
2015-01-01
Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and its potential to reach hidden individuals, yet little research has focused on this specific field. The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide probability through profile and linguistic features extracted from Internet-based data. Nine hundred and nine Chinese microblog users completed an Internet survey, and those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into the two machine learning algorithms (SLR and RF) to train models that identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross validation, in which both the training set and the test set were generated under the stratified random sampling rule from the whole sample. Three classic performance metrics (precision, recall, F1 measure) and a specifically defined metric, "screening efficiency", were adopted to evaluate model effectiveness. Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30%. Individuals in China with high suicide…
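The three classic metrics reported above follow directly from the confusion counts; a small sketch with invented counts, chosen to mirror the reported ranges (recall above 70%, precision below 30%):

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical: the model flags 300 users, among whom 75 of the 100
# labeled high-risk individuals appear:
p, r, f1 = classification_metrics(tp=75, fp=225, fn=25)
print(round(p, 3), round(r, 3), round(f1, 3))
```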
Knock probability estimation through an in-cylinder temperature model with exogenous noise
Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.
2018-01-01
This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature with an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function, and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimate. The model has only one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions with a fuel balance and a lambda sensor, and differences below 1% were found.
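Propagating a Gaussian error through a deterministic temperature threshold reduces the knock probability to a normal tail probability; the sketch below assumes an illustrative autoignition threshold and noise level, not the paper's calibrated Arrhenius model:

```python
import math

def knock_probability(t_est, t_crit, sigma):
    """P(knock) = P(T > t_crit) when the in-cylinder temperature
    estimate t_est carries zero-mean Gaussian noise of std sigma."""
    z = (t_crit - t_est) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # upper normal tail, 1 - Phi(z)

# Knock probability rises as the estimated end-gas temperature nears
# the (illustrative) autoignition threshold of 900 K:
for t in (850.0, 880.0, 900.0):
    print(t, round(knock_probability(t, 900.0, 15.0), 3))
```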
Computation of the posterior probability of linkage using 'high effect' genetic model priors.
Logue, M W; Li, Y
2008-01-01
The posterior probability of linkage, or PPL, directly measures the probability that a disease gene is linked to a marker. By placing a Bayesian prior on the elements of the genetic model, it allows for an unknown genetic model without the inflationary effects of maximization. The standard technique uses essentially uniform priors over the elements of the penetrance vector. However, much of the parameter space corresponds to models that seem unlikely to yield substantial evidence for linkage: for example, models with very high phenocopy rates. A new class of priors on the elements of the genetic model is examined both theoretically and in simulations. These priors place 0% probability over models with low sibling relative risk, λs. Focusing the prior probability on high-λs models does tend to increase the mean PPL for linked markers, and to decrease the mean PPL for unlinked markers. However, the power to detect linkage remains virtually unchanged. Moreover, under these priors, the PPL occasionally yields unacceptably high values under no linkage. It appears important to retain prior probability over apparently 'uninformative' genetic models to accurately characterize the amount of evidence for linkage represented by the data. Copyright (c) 2008 S. Karger AG, Basel.
A market model: uncertainty and reachable sets
Directory of Open Access Journals (Sweden)
Raczynski Stanislaw
2015-01-01
Uncertain parameters are always present in models that include the human factor. In marketing, uncertain consumer behavior makes it difficult to predict future events and elaborate good marketing strategies. Sometimes uncertainty is modeled using stochastic variables. Our approach is quite different: the dynamic market with uncertain parameters is treated using differential inclusions, which permits determination of the corresponding reachable sets. This is not a statistical analysis; we are looking for solutions to the differential inclusions. The purpose of the research is to find a way to obtain and visualise the reachable sets, in order to know the limits of the important marketing variables. The modeling method consists of defining the differential inclusion and finding its solution, using the differential inclusion solver developed by the author. As a result we obtain images of the reachable sets, where the main control parameter is the share of investment, being a part of the revenue. As an additional result we can also define the optimal investment strategy. The conclusion is that the differential inclusion solver can be a useful tool in market model analysis.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
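A zero-inflated Beta variate of the kind used here can be sampled with a simple two-part scheme; a minimal sketch with assumed parameters (the zero-mass pi0 and the Beta shape parameters a, b are illustrative, not taken from the study):

```python
import random

def sample_zero_inflated_beta(pi0, a, b, rng):
    """Draw one value: exactly 0 with probability pi0, else Beta(a, b)."""
    if rng.random() < pi0:
        return 0.0
    return rng.betavariate(a, b)

rng = random.Random(42)
draws = [sample_zero_inflated_beta(0.3, 2.0, 5.0, rng) for _ in range(10000)]

zero_frac = sum(d == 0.0 for d in draws) / len(draws)
mean_nonzero = sum(d for d in draws if d > 0) / sum(d > 0 for d in draws)
print(round(zero_frac, 2))     # close to pi0 = 0.3
print(round(mean_nonzero, 2))  # close to a / (a + b) ~ 0.29
```

The inference in the paper goes the other way (fitting pi0, a, b hierarchically from data), but the sampling direction shows the structure of the likelihood.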
Z → bb-bar probability and asymmetry in a model of dynamical electroweak symmetry breaking
International Nuclear Information System (INIS)
Arbuzov, B.A.; Osipov, M.Yu.
1997-01-01
The deviations from the standard model in the probability of Z → bb-bar decay and in the forward-backward asymmetry in the reaction e+e- → bb-bar are studied in the framework of a model of dynamical electroweak symmetry breaking, whose basic point is the existence of a triple anomalous W-boson vertex in a region of momenta restricted by a cutoff. A set of equations for additional terms in the Wbt-bar vertex is obtained, and its solution is applied to the process Z → bb-bar. It is shown that a consistent description of both deviations is possible, which is quite nontrivial because these effects are not simply correlated. The necessary value of the anomalous W interaction coupling, λ = -0.22 ± 0.01, is consistent with existing limitations and leads to definite predictions, e.g., for pair W production in e+e- collisions at LEP 200
Zhang, Yuanhui; Wu, Haipeng; Denton, Brian T; Wilson, James R; Lobo, Jennifer M
2017-10-27
Markov models are commonly used for decision-making studies in many application domains; however, there are no widely adopted methods for performing sensitivity analysis on such models with uncertain transition probability matrices (TPMs). This article describes two simulation-based approaches for conducting probabilistic sensitivity analysis on a given discrete-time, finite-horizon, finite-state Markov model using TPMs that are sampled over a specified uncertainty set according to a relevant probability distribution. The first approach assumes no prior knowledge of the probability distribution, and each row of a TPM is independently sampled from the uniform distribution on the row's uncertainty set. The second approach involves random sampling from the (truncated) multivariate normal distribution of the TPM's maximum likelihood estimators for its rows subject to the condition that each row has nonnegative elements and sums to one. The two sampling methods are easily implemented and have reasonable computation times. A case study illustrates the application of these methods to a medical decision-making problem involving the evaluation of treatment guidelines for glycemic control of patients with type 2 diabetes, where natural variation in a patient's glycated hemoglobin (HbA1c) is modeled as a Markov chain, and the associated TPMs are subject to uncertainty.
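A rough sketch of the two sampling schemes described above, assuming the row uncertainty sets are the full probability simplex (a flat Dirichlet is uniform on the simplex) and using simple rejection plus renormalisation in place of the paper's truncated multivariate normal:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tpm_uniform(n_states, rng):
    """First approach: each row drawn uniformly from the probability
    simplex (flat Dirichlet), i.e. no prior knowledge of the TPM."""
    return rng.dirichlet(np.ones(n_states), size=n_states)

def sample_tpm_around_mle(P_hat, noise_sd, rng, max_tries=1000):
    """Rough stand-in for the second approach: perturb each
    maximum-likelihood row with Gaussian noise, keep draws with
    nonnegative entries, and renormalise so each row sums to one."""
    rows = []
    for p in P_hat:
        for _ in range(max_tries):
            q = rng.normal(p, noise_sd)
            if (q >= 0).all():
                rows.append(q / q.sum())
                break
        else:  # fall back to the MLE row if rejection keeps failing
            rows.append(p)
    return np.array(rows)

P = sample_tpm_uniform(3, rng)
P_hat = np.array([[0.8, 0.15, 0.05],
                  [0.1, 0.7, 0.2],
                  [0.0, 0.1, 0.9]])  # hypothetical MLE rows
P2 = sample_tpm_around_mle(P_hat, 0.03, rng)
assert np.allclose(P.sum(axis=1), 1.0) and np.allclose(P2.sum(axis=1), 1.0)
```

Each sampled TPM would then be run through the Markov model to build the distribution of outcomes for the probabilistic sensitivity analysis.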
Setting Parameters for Biological Models With ANIMO
Directory of Open Access Journals (Sweden)
Stefano Schivo
2014-03-01
ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions between biological entities in the form of a graph, while the parameters determine the speed of occurrence of such interactions. When a mismatch is observed between the behavior of an ANIMO model and experimental data, we want to update the model so that it explains the new data. In general, the topology of a model can be expanded with new (known or hypothetical) nodes, enabling it to match experimental data. However, the unrestrained addition of new parts to a model causes two problems: models can become too complex too fast, to the point of being intractable, and too many parts marked as "hypothetical" or "not known" make a model unrealistic. Even though changing the topology is normally the easier task, these problems push us to try a better parameter fit as a first step, and to resort to modifying the model topology only as a last resort. In this paper we show the support added in ANIMO to ease the task of expanding the knowledge on biological networks, concentrating in particular on parameter settings.
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics, and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
Directory of Open Access Journals (Sweden)
Changhao Fan
2017-01-01
In modeling, only information from the deviation between the output of the support vector regression (SVR) model and the training sample is considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of sample data in a training sample that contains different degrees of noise and potential outliers, and it helps develop a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information as a weight, PDISVR, is proposed. In the PDISVR model, the probability distribution of each sample is considered as a weight and is then introduced into the error coefficient and slack variables of SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with that of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
Franceschetti, Donald R; Gire, Elizabeth
2013-06-01
Quantum probability theory offers a viable alternative to classical probability, although there are some ambiguities inherent in transferring the quantum formalism to a less determined realm. A number of physicists are now looking at the applicability of quantum ideas to the assessment of physics learning, an area particularly suited to quantum probability ideas.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
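The bootstrap imputation idea can be sketched as follows (toy probabilities; the actual study used probabilities from a previously validated multivariable model):

```python
import random

def bootstrap_prevalence(probs, n_boot=200, seed=1):
    """Impute disease status from model-based probabilities: each
    bootstrap replicate draws status_i ~ Bernoulli(p_i) per patient,
    and the per-replicate prevalence estimates are averaged."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        statuses = [1 if rng.random() < p else 0 for p in probs]
        estimates.append(sum(statuses) / len(statuses))
    return sum(estimates) / n_boot

# toy model-derived probabilities for eight hypothetical patients
probs = [0.9, 0.8, 0.05, 0.1, 0.02, 0.7, 0.01, 0.03]
est = bootstrap_prevalence(probs)
print(round(est, 2))  # near the mean of the probabilities
```

Unlike hard categorization (e.g. calling everyone above 0.5 "diseased"), this preserves each patient's uncertainty, which is why the prevalence estimate stays approximately unbiased.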
Bayesian networks with a logistic regression model for the conditional probabilities
Rijmen, F.P.J.
2008-01-01
Logistic regression techniques can be used to restrict the conditional probabilities of a Bayesian network for discrete variables. More specifically, each variable of the network can be modeled through a logistic regression model, in which the parents of the variable define the covariates. When all
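A minimal sketch of such a conditional probability table, with hypothetical coefficients for a binary node whose two binary parents serve as covariates:

```python
import math

def conditional_prob(beta0, betas, parent_values):
    """P(node = 1 | parents) from a logistic regression in which the
    parents' values define the covariates."""
    z = beta0 + sum(b * x for b, x in zip(betas, parent_values))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical coefficients: intercept -1, parent effects 2 and 0.5
beta0, betas = -1.0, [2.0, 0.5]
for pa in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pa, round(conditional_prob(beta0, betas, pa), 3))
```

The restriction is visible here: the full conditional probability table for two parents has four free probabilities, but the logistic model describes them with only three parameters.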
Diagnostic models of the pre-test probability of stable coronary artery disease: A systematic review
Directory of Open Access Journals (Sweden)
Ting He
A comprehensive search of PubMed and Embase was performed in January 2015 to examine the available literature on validated diagnostic models of the pre-test probability of stable coronary artery disease and to describe the characteristics of the models. Studies that were designed to develop and validate diagnostic models of pre-test probability for stable coronary artery disease were included. Data regarding baseline patient characteristics, procedural characteristics, modeling methods, metrics of model performance, risk of bias, and clinical usefulness were extracted. Ten studies involving the development of 12 models and two studies focusing on external validation were identified. Seven models were validated internally, and seven models were validated externally. Discrimination varied between studies that were validated internally (C statistic 0.66-0.81) and externally (0.49-0.87). Only one study presented reclassification indices. The majority of better-performing models included sex, age, symptoms, diabetes, smoking, and hyperlipidemia as variables. Only two diagnostic models evaluated the effects on clinical decision-making processes or patient outcomes. Most diagnostic models of the pre-test probability of stable coronary artery disease have had modest success, and very few present data regarding the effects of these models on clinical decision-making processes or patient outcomes.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Lin, Guang [Department of Mathematics and School of Mechanical Engineering, Purdue University, West Lafayette, Indiana, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]
2017-03-01
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
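A toy illustration of the two-stage idea, with a cheap polynomial surrogate standing in for the polynomial chaos expansion and a one-dimensional function standing in for the transport model (the model, threshold, and margin are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

def expensive_model(x):
    """Stand-in for a CPU-demanding transport simulator (hypothetical)."""
    return np.sin(3 * x) + 0.5 * x

threshold = 1.2  # "failure" when the response exceeds this level

# build a cheap surrogate from a small number of expensive runs
x_design = np.linspace(-2, 2, 40)
surrogate = np.poly1d(np.polyfit(x_design, expensive_model(x_design), deg=11))

# stage 1: Monte Carlo with the surrogate only
x_mc = rng.uniform(-2, 2, 100_000)
y = surrogate(x_mc)

# stage 2: re-evaluate only the samples close to the failure boundary
# with the original model, correcting the surrogate-induced bias
near = np.abs(y - threshold) < 0.05
y[near] = expensive_model(x_mc[near])

p_fail = (y > threshold).mean()
print(int(near.sum()), round(p_fail, 4))  # few expensive calls, corrected estimate
```

The speed-up comes from the second stage touching only the small fraction of samples near the boundary, exactly where the surrogate's sign errors would otherwise bias the failure probability.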
Transition probabilities of health states for workers in Malaysia using a Markov chain model
Samsuddin, Shamshimah; Ismail, Noriszura
2017-04-01
The aim of our study is to estimate the transition probabilities of health states for workers in Malaysia who contribute to the Employment Injury Scheme under the Social Security Organization Malaysia, using a Markov chain model. Our study uses four health states (active, temporary disability, permanent disability and death) based on data collected from longitudinal studies of workers in Malaysia over 5 years. The transition probabilities vary by health state, age and gender. The results show that male employees are more likely to have higher transition probabilities to any health state than female employees. The transition probabilities can be used to predict the future health of workers as a function of current age, gender and health state.
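Transition probabilities of this kind are typically obtained by row-normalising observed transition counts (the maximum-likelihood estimate for a Markov chain); a minimal sketch with hypothetical counts for one age/gender group, not the study's data:

```python
import numpy as np

states = ["active", "temporary", "permanent", "dead"]

# hypothetical year-to-year transition counts
counts = np.array([
    [900, 60, 30, 10],   # from active
    [40, 140, 15, 5],    # from temporary disability
    [0, 0, 95, 5],       # from permanent disability
    [0, 0, 0, 50],       # death is absorbing
], dtype=float)

# maximum-likelihood estimate: normalise each row of counts
P = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P, 3))
```

Repeating this per age band and gender yields the state-, age- and gender-specific probabilities the abstract describes.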
The probabilities of one- and multi-track events for modeling radiation-induced cell kill
Energy Technology Data Exchange (ETDEWEB)
Schneider, Uwe; Vasi, Fabiano; Besserer, Juergen [University of Zuerich, Department of Physics, Science Faculty, Zurich (Switzerland); Radiotherapy Hirslanden, Zurich (Switzerland)
2017-08-15
In view of the clinical importance of hypofractionated radiotherapy, track models which are based on multi-hit events are currently being reinvestigated. These models are often criticized because it is believed that the probability of multi-track hits is negligible. In this work, the probabilities for one- and multi-track events are determined for different biological targets. The obtained probabilities can be used with nano-dosimetric cluster size distributions to obtain the parameters of track models. We quantitatively determined the probabilities for one- and multi-track events for 100, 500 and 1000 keV electrons. It is assumed that the single tracks are statistically independent and follow a Poisson distribution. Three different biological targets were investigated: (1) a DNA strand (2 nm scale); (2) two adjacent chromatin fibers (60 nm); and (3) fiber loops (300 nm). It was shown that the probabilities for one- and multi-track events increase with energy, size of the sensitive target structure, and dose. For a 2 × 2 × 2 nm³ target, one-track events are around 10,000 times more frequent than multi-track events. If the size of the sensitive structure is increased to 100-300 nm, the probabilities for one- and multi-track events are of the same order of magnitude. It was shown that target theories can play a role in describing radiation-induced cell death if the targets are of the size of two adjacent chromatin fibers or fiber loops. The obtained probabilities can be used together with nano-dosimetric cluster size distributions to determine model parameters for target theories. (orig.)
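Under the stated Poisson assumption, the one- and multi-track probabilities follow directly from the track-count distribution; a small sketch (the mean track numbers are illustrative, chosen to reproduce the qualitative regimes in the abstract):

```python
import math

def one_and_multi_track(lam):
    """Poisson track-count model with mean `lam` tracks through a target:
    P(N = 1) = lam * exp(-lam), P(N >= 2) = 1 - exp(-lam) * (1 + lam)."""
    p_one = lam * math.exp(-lam)
    p_multi = 1.0 - math.exp(-lam) * (1.0 + lam)
    return p_one, p_multi

# tiny target: tracks are rare, multi-track events negligible
p1, pm = one_and_multi_track(2e-4)
print(p1 / pm)  # roughly 1e4: one-track events ~10,000 times more frequent

# larger target (more tracks per unit dose): comparable magnitudes
p1, pm = one_and_multi_track(1.0)
print(round(p1, 3), round(pm, 3))
```

For small lam the ratio P(N = 1) / P(N >= 2) is approximately 2 / lam, which is why target size (which scales lam) controls whether multi-track events matter.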
Particle filters for random set models
Ristic, Branko
2013-01-01
“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
Modelling the Probability Density Function of IPTV Traffic Packet Delay Variation
Directory of Open Access Journals (Sweden)
Michal Halas
2012-01-01
This article deals with modelling the probability density function of IPTV traffic packet delay variation, for use in efficient de-jitter buffer estimation. When an IP packet travels across a network, it experiences delay and delay variation. This variation is caused by routing, queueing systems and other influences such as the processing delay of the network nodes. To separate these at least three types of delay variation, we need a way to measure each type separately. This work is aimed at the delay variation caused by queueing systems, which has the main influence on the form of the probability density function.
Exact results for survival probability in the multistate Landau-Zener model
International Nuclear Information System (INIS)
Volkov, M V; Ostrovsky, V N
2004-01-01
An exact formula is derived for survival probability in the multistate Landau-Zener model in the special case where the initially populated state corresponds to the extremal (maximum or minimum) slope of a linear diabatic potential curve. The formula was originally guessed by S Brundobler and V Elzer (1993 J. Phys. A: Math. Gen. 26 1211) based on numerical calculations. It is a simple generalization of the expression for the probability of diabatic passage in the famous two-state Landau-Zener model. Our result is obtained via analysis and summation of the entire perturbation theory series
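For context, the two-state expression that the Brundobler–Elzer formula generalizes is the standard Landau–Zener diabatic passage probability (textbook form, not taken from this paper), with $V$ the coupling between the diabatic states and $\alpha$ the sweep rate of their energy difference:

```latex
P_{\text{diabatic}} = \exp\!\left(-\frac{2\pi\,|V|^{2}}{\hbar\,|\alpha|}\right),
\qquad
\alpha = \frac{d}{dt}\bigl(E_{1}(t) - E_{2}(t)\bigr).
```

Up to conventions, the multistate survival probability for the extremal-slope state replaces the single exponent by a sum of analogous terms over the couplings to all other states.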
A recessive Mendelian model to predict carrier probabilities of DFNB1 for nonsyndromic deafness.
González, Juan R; Wang, Wenyi; Ballana, Ester; Estivill, Xavier
2006-11-01
Mutations in the DFNB1 locus, where two connexin genes are located (GJB2 and GJB6), account for half of congenital cases of nonsyndromic autosomal recessive deafness. Because of the high frequency of DFNB1 gene mutations and the availability of genetic diagnostic tests involving these genes, they are the best candidates for developing a risk prediction model of being hearing impaired. People undergoing genetic counseling are normally interested in knowing the probability of having a hearing-impaired child given their family history. To address this, a Mendelian model that predicts the probability of being a carrier of DFNB1 mutations, using family history of deafness, has been developed. This probability will be useful as additional information when deciding whether or not a genetic test should be performed. The model incorporates the Mendelian mode of inheritance, the age of onset of the disease, and the current age of hearing family members. The carrier probabilities are obtained using Bayes' theorem, in which mutation prevalence is used as the prior distribution. We validated our model using information from 305 families affected with congenital or progressive nonsyndromic deafness, in which genetic analysis of GJB2 and GJB6 had already been performed. The model works well, especially in homozygous carriers, showing high discriminative power. This indicates that our proposed model can be useful in the context of clinical counseling of autosomal recessive disorders. (c) 2006 Wiley-Liss, Inc.
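The Bayes'-theorem step can be sketched for the classic textbook case of an unaffected child of two obligate carriers (full penetrance of congenital recessive deafness is assumed here; the actual model additionally conditions on age of onset and current age):

```python
def posterior(prior, likelihood):
    """Bayes' theorem over a dict of hypotheses: prior and likelihood
    share keys; returns normalised posterior probabilities."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# child of two obligate carriers: genotype prior 1/4 : 1/2 : 1/4
prior = {"aa": 0.25, "Aa": 0.50, "AA": 0.25}
# observed: the person is hearing (recessive trait, fully penetrant here)
likelihood = {"aa": 0.0, "Aa": 1.0, "AA": 1.0}

post = posterior(prior, likelihood)
print(round(post["Aa"], 3))  # 0.667: the classic 2/3 carrier probability
```

The published model applies the same update with population mutation prevalence as the prior and age-dependent phenotype likelihoods across the whole pedigree.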
A generic probability based model to derive regional patterns of crops in time and space
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influence soil erosion, and substantially contribute to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, economic boundary settings, and preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and year depends on (a) the suitability of the land for cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated on the basis of the statistics reported by the joint EU/CAPRI database. The assessment is given at NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, still roughly 40% show an absolute per cent bias above the 15% threshold. This
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Energy Technology Data Exchange (ETDEWEB)
Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
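The idea can be sketched with a hand-rolled EM fit of a two-component 1-D Gaussian mixture compared against a single Gaussian via AIC (illustrative data; the paper's Class A model is an infinite mixture, which this simplifies to the finite case):

```python
import math
import random

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def loglik_gaussian(data):
    """Log-likelihood of a single Gaussian at its MLE (k = 2 parameters)."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return sum(math.log(norm_pdf(x, mu, sigma)) for x in data)

def em_mixture2(data, iters=200):
    """EM for a two-component 1-D Gaussian mixture (k = 5 parameters);
    returns the log-likelihood at the fitted parameters."""
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            a = w * norm_pdf(x, mu1, s1)
            b = (1 - w) * norm_pdf(x, mu2, s2)
            r.append(a / (a + b))
        # M-step: weighted parameter updates
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
    return sum(math.log(w * norm_pdf(x, mu1, s1) + (1 - w) * norm_pdf(x, mu2, s2))
               for x in data)

rng = random.Random(3)
# clearly bimodal data: the mixture should win on AIC despite extra parameters
data = [rng.gauss(-3, 1) for _ in range(150)] + [rng.gauss(3, 1) for _ in range(150)]

aic_single = 2 * 2 - 2 * loglik_gaussian(data)   # AIC = 2k - 2 log L
aic_mix = 2 * 5 - 2 * em_mixture2(data)
print(aic_mix < aic_single)  # True
```

Note the non-nested comparison is legitimate here precisely because AIC compares penalized likelihoods rather than likelihood ratios.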
An extended car-following model considering random safety distance with different probabilities
Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi
2018-02-01
Because of differences in vehicle type or driving skill, driving strategies are not exactly the same: different vehicles may drive at different speeds for the same headway. Since the optimal velocity function is determined by the safety distance in addition to the maximum velocity and headway, an extended car-following model accounting for random safety distances with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. The cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
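A minimal sketch of an optimal velocity function with a randomly selected safety distance (the tanh form is a common choice in OV-type models; the specific values and probabilities are assumptions, not taken from the paper):

```python
import math
import random

V_MAX = 2.0  # hypothetical maximum velocity

def optimal_velocity(headway, safety_distance):
    """Optimal velocity function determined by the safety distance in
    addition to the maximum velocity and headway (common tanh form)."""
    return (V_MAX / 2) * (math.tanh(headway - safety_distance)
                          + math.tanh(safety_distance))

# safety distance drawn from several values with different probabilities,
# mimicking different vehicle types / driving skills (hypothetical numbers)
rng = random.Random(0)
choices, probs = [1.5, 2.0, 2.5], [0.3, 0.5, 0.2]

def random_safety_distance():
    return rng.choices(choices, weights=probs, k=1)[0]

# for the same headway, different vehicles get different optimal speeds
headway = 2.0
speeds = {sd: round(optimal_velocity(headway, sd), 3) for sd in choices}
print(speeds)
```

The spread of optimal speeds at a fixed headway is exactly the mechanism the simulations probe: the wider the spread, the harder it is for the platoon to settle into a uniform flow.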
Nixon, Zachary; Michel, Jacqueline
2015-04-07
To better understand the distribution of remaining lingering subsurface oil residues from the Exxon Valdez oil spill (EVOS) along the shorelines of Prince William Sound (PWS), AK, we revised previous modeling efforts to allow spatially explicit predictions of the distribution of subsurface oil. We used a set of pooled field data and predictor variables stored as Geographic Information Systems (GIS) data to generate calibrated boosted tree models predicting the encounter probability of different categories of subsurface oil. The models demonstrated excellent predictive performance as evaluated by cross-validated performance statistics. While the average encounter probabilities at most shoreline locations are low across western PWS, clusters of shoreline locations with elevated encounter probabilities remain in the northern parts of the PWS, as well as more isolated locations. These results can be applied to estimate the location and amount of remaining oil, evaluate potential ongoing impacts, and guide remediation. This is the first application of quantitative machine-learning based modeling techniques in estimating the likelihood of ongoing, long-term shoreline oil persistence after a major oil spill.
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days' duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice those of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By
DEFF Research Database (Denmark)
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions and dependence.
Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.
2009-01-01
Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
Directory of Open Access Journals (Sweden)
Farnoosh Basaligheh
2015-12-01
One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and can be considered a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function for the spacing of steel sets is essential. In the present paper, the distances between steel sets were collected from an under-construction tunnel, and the collected data are used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. Different distribution functions were investigated, and three common goodness-of-fit tests were used to evaluate each function for each excavation section. Results from all three methods indicate that the Wakeby distribution can be suggested as the proper PDF for the spacing between steel sets. It is also noted that, although the probability distribution function is the same for the two tunnel sections, the parameters of the PDF differ between the sections.
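The goodness-of-fit step can be sketched as follows. The Wakeby distribution is not available in the Python standard library, so this sketch uses normal CDFs and synthetic spacing data purely to illustrate how a Kolmogorov–Smirnov distance separates a well-fitting candidate from a poor one; all numbers are invented.

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of `sample`
    and a candidate model CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

rng = random.Random(0)
# Synthetic steel-set spacings in metres (hypothetical mean and scatter).
spacings = [rng.gauss(1.5, 0.2) for _ in range(300)]

d_good = ks_statistic(spacings, lambda x: normal_cdf(x, 1.5, 0.2))  # true model
d_bad = ks_statistic(spacings, lambda x: normal_cdf(x, 1.9, 0.2))   # shifted 2 sd
```

In practice the same KS distance (alongside, e.g., Anderson–Darling and chi-square tests) would be computed against each fitted candidate distribution, and the Wakeby fit would be the one with the smallest discrepancy.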
Blind Students' Learning of Probability through the Use of a Tactile Model
Vita, Aida Carvalho; Kataoka, Verônica Yumi
2014-01-01
The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). Among the activities was the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys) and pictograms in 3D…
Interpreting and Understanding Logits, Probits, and other Non-Linear Probability Models
DEFF Research Database (Denmark)
Breen, Richard; Karlson, Kristian Bernt; Holm, Anders
2018-01-01
guidelines take little or no account of a body of work that, over the past 30 years, has pointed to problematic aspects of these nonlinear probability models and, particularly, to difficulties in interpreting their parameters. In this chapter, we draw on that literature to explain the problems, show…
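One standard response to the interpretation difficulty described above is to report average marginal effects on the probability scale rather than raw logit coefficients, since the latter are scaled by an unidentified error variance. A minimal sketch, with hypothetical fitted coefficients and covariate values:

```python
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted logit model: P(y=1 | x) = inv_logit(b0 + b1*x).
b0, b1 = -1.0, 0.8
xs = [-2.0, -1.0, 0.0, 0.5, 1.0, 2.0, 3.0]  # observed covariate values

# Average marginal effect (AME) of x: the sample mean of
# dP/dx = b1 * p * (1 - p). Unlike b1 itself, the AME is on the
# probability scale and is comparable across nested models.
probs = [inv_logit(b0 + b1 * x) for x in xs]
ame = sum(b1 * p * (1.0 - p) for p in probs) / len(probs)
```

Because p*(1-p) never exceeds 0.25, the AME is always smaller in magnitude than the raw coefficient, which is one concrete way the logit scale and the probability scale diverge.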
DEFF Research Database (Denmark)
Falk, Anne Katrine Vinther; Gryning, Sven-Erik
1997-01-01
In this model for atmospheric dispersion particles are simulated by the Langevin Equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion after Hermite polynomials. ...
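A minimal sketch of this class of dispersion model, assuming homogeneous turbulence with a Gaussian velocity PDF (not the paper's Hermite-expansion construction): the Langevin equation for the vertical velocity, integrated with the Euler–Maruyama scheme. All parameter values are illustrative.

```python
import math
import random

def simulate_langevin(n_particles=1000, dt=0.01, n_steps=500,
                      tau=5.0, sigma_w=0.5, seed=1):
    """Euler-Maruyama integration of a simple Langevin model for the vertical
    velocity w of dispersing particles:

        dw = -(w / tau) dt + b dW,

    with b chosen so that the stationary std of w equals sigma_w."""
    rng = random.Random(seed)
    b = sigma_w * math.sqrt(2.0 / tau)
    ws = [0.0] * n_particles  # vertical velocities
    zs = [0.0] * n_particles  # vertical positions
    for _ in range(n_steps):
        for i in range(n_particles):
            dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
            ws[i] += -(ws[i] / tau) * dt + b * dW
            zs[i] += ws[i] * dt
    return ws, zs

ws, zs = simulate_langevin()
mean_w = sum(ws) / len(ws)
var_w = sum((w - mean_w) ** 2 for w in ws) / len(ws)
```

After integrating for one velocity time scale tau, the ensemble variance of w should be close to sigma_w**2 * (1 - e**-2), approaching the prescribed stationary value; non-Gaussian PDFs, as in the paper, would replace the constant coefficients with state-dependent ones.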
Modelling soft error probability in firmware: A case study
African Journals Online (AJOL)
This case study involves an analysis of firmware that controls explosions in mining operations. The purpose is to estimate the probability that external disruptive events (such as electro-magnetic interference) could drive the firmware into a state which results in an unintended explosion. Two probabilistic models are built, ...
Exact probability distribution function for multifractal random walk models of stocks
Saakian, D. B.; Martirosyan, A.; Hu, Chin-Kun; Struzik, Z. R.
2011-07-01
We investigate the multifractal random walk (MRW) model, popular in the modelling of stock fluctuations in the financial market. The exact probability distribution function (PDF) is derived by employing methods proposed in the derivation of correlation functions in string theory, including the analytical extension of Selberg integrals. We show that the recent results by Y. V. Fyodorov, P. Le Doussal and A. Rosso obtained with the logarithmic Random Energy Model (REM) are sufficient to derive exact formulas for the PDF of the log returns in the MRW model.
Directory of Open Access Journals (Sweden)
Olivera Blagojevic Popovic
2018-03-01
In the hotel industry, it is a well-known fact that, despite the quality and variety of services provided, there is a low probability that guests will return. This research is focused on identifying the basic factors of the hotel offer that could determine the correlation between guests’ satisfaction and the probability of their return. The objective of the article is to explore the relationship between guests’ satisfaction with the quality of hotel services as a whole (including the tourist offer of the place) and the probability of their return to the same destination. The questionnaire method was applied in the survey, and the data were analysed based on factor analysis. Thereafter, a model for forecasting the probability of guests returning to the destination was established, using the example of Montenegrin tourism. The model represents a defined framework for the guest’s decision-making process. It identifies two main characteristics of guest experiences: satisfaction and rated quality (of the destination’s overall hotel service and tourist offer). The same model evaluates the impact of the above factors on the probability of the guests’ returning to the same destination. The starting hypothesis was the existence of a high degree of correlation between the guests’ satisfaction (with the destination’s hotel services and tourist offer) and the probability of returning to the selected Montenegrin destinations. The research confirmed the above-mentioned hypothesis. The results have revealed that there are significant differences in perceived quality, i.e. satisfaction, between the target groups of Eastern and Western European tourists.
Energy Technology Data Exchange (ETDEWEB)
Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston
2011-01-01
This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate, and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
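The two random ingredients described above can be sketched as follows. The functional forms and parameter values here are invented for illustration (the paper's calibrated functions are not reproduced): a logistic breakdown probability in flow rate, and a Weibull breakdown duration drawn by inverse-transform sampling, the Weibull being a common parametric hazard model.

```python
import math
import random

def breakdown_probability(flow, q_half=1800.0, steepness=0.005):
    """Illustrative logistic form: P(breakdown) rises with flow rate (veh/h/lane),
    passing 0.5 at q_half."""
    return 1.0 / (1.0 + math.exp(-steepness * (flow - q_half)))

def sample_breakdown_duration(rng, shape=1.5, scale=10.0):
    """Breakdown duration (minutes) from a Weibull hazard model,
    sampled by inverse-transform: F^{-1}(u) = scale * (-ln(1-u))^(1/shape)."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(7)
p_low = breakdown_probability(1200.0)    # free-flowing conditions
p_high = breakdown_probability(2100.0)   # near-capacity conditions
durations = [sample_breakdown_duration(rng) for _ in range(1000)]
```

In a mesoscopic simulation these two draws would be made each time a link's flow is updated: a Bernoulli trial with the flow-dependent probability triggers a breakdown, and the sampled duration controls how long the reduced-capacity state persists.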
Goldberg, Samuel
1960-01-01
Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.
A stochastic-bayesian model for the fracture probability of PWR pressure vessels
International Nuclear Information System (INIS)
Francisco, Alexandre S.; Duran, Jorge Alberto R.
2013-01-01
Fracture probability of pressure vessels containing cracks can be obtained by easily understood methodologies that combine a deterministic treatment with statistical methods. However, when more accurate results are required, the methodologies need to be better formulated. This paper presents a new methodology to address this problem. First, a more rigorous methodology is obtained by relating, through Bayes' theorem, the probability distributions that model crack incidence and non-destructive inspection efficiency. The result is an updated crack incidence distribution. Further, the accuracy of the methodology is improved by using a stochastic model for crack growth. The stochastic model incorporates the statistical variability of the crack growth process, combining stochastic theory with experimental data. Stochastic differential equations are derived by randomization of empirical equations. From the solution of these equations, a distribution function related to crack growth is derived. The fracture probability obtained using both probability distribution functions is in agreement with theory and presents realistic values for pressure vessels. (author)
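The Bayesian updating step can be sketched with a discrete toy example (all numbers hypothetical): an inspection that finds nothing updates the crack-incidence distribution through Bayes' theorem, with the miss probability 1 − POD playing the role of the likelihood.

```python
# Discrete crack depths (mm) and a prior incidence distribution (hypothetical).
depths = [1, 2, 3, 4, 5]
prior = [0.40, 0.30, 0.15, 0.10, 0.05]

# Inspection efficiency: probability of detection (POD) grows with crack depth.
pod = [0.20, 0.45, 0.70, 0.85, 0.95]

# Bayes' theorem: given that the inspection found nothing, the updated
# incidence distribution is proportional to prior * P(miss | depth),
# i.e. prior * (1 - POD).
unnorm = [p * (1.0 - d) for p, d in zip(prior, pod)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

prior_mean = sum(d * p for d, p in zip(depths, prior))
post_mean = sum(d * p for d, p in zip(depths, posterior))
```

Because POD increases with depth, a clean inspection shifts probability mass toward small cracks, so the posterior mean depth drops below the prior mean; it is this updated distribution that would then feed the stochastic crack-growth model.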
Austin, Samuel H.; Nelms, David L.
2017-01-01
Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
Models for setting ATM parameter values
DEFF Research Database (Denmark)
Blaabjerg, Søren; Gravey, A.; Romæuf, L.
1996-01-01
presents approximate methods and discusses their applicability. We then discuss the problem of obtaining traffic characteristic values for a connection that has crossed a series of switching nodes. This problem is particularly relevant for the traffic contract components corresponding to ICIs… (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called "Worst Case Traffic" that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User… essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper…
Akbari, Hamed; Fei, Baowei
2012-02-01
Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially, when serial MR imaging is performed to evaluate the kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistical matching of geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of the kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data. The model is initially localized based on the intensity profiles in three directions. The weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis
Energy Technology Data Exchange (ETDEWEB)
Ferson, Scott [Applied Biomathematics, Setauket, NY (United States); Nelsen, Roger B. [Lewis & Clark College, Portland OR (United States); Hajagos, Janos [Applied Biomathematics, Setauket, NY (United States); Berleant, Daniel J. [Iowa State Univ., Ames, IA (United States); Zhang, Jianzhong [Iowa State Univ., Ames, IA (United States); Tucker, W. Troy [Applied Biomathematics, Setauket, NY (United States); Ginzburg, Lev R. [Applied Biomathematics, Setauket, NY (United States); Oberkampf, William L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
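The simplest instance of bounding under incomplete dependence information is the Fréchet–Hoeffding envelope for the probability of a joint event, one of the building blocks such analyses rest on. A minimal sketch (the marginal values are arbitrary):

```python
def frechet_bounds(p, q):
    """Bounds on P(A and B) when only the marginals P(A)=p and P(B)=q are
    known and nothing is assumed about dependence (Frechet-Hoeffding bounds):

        max(0, p + q - 1)  <=  P(A and B)  <=  min(p, q)
    """
    lower = max(0.0, p + q - 1.0)
    upper = min(p, q)
    return lower, upper

p, q = 0.7, 0.6
lo, hi = frechet_bounds(p, q)

# Independence is just one admissible point inside the envelope:
independent = p * q
```

Probability bounds analysis generalizes this idea from single events to whole distribution functions, propagating the envelope (rather than a single dependence assumption such as independence) through a risk model.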
MEASURING MODEL FOR BAD LOANS IN BANKS. THE DEFAULT PROBABILITY MODEL.
Directory of Open Access Journals (Sweden)
SOCOL ADELA
2010-12-01
The banking sectors of the transition countries have progressed remarkably in the last 20 years. In fact, banking in most transition countries has largely shaken off the traumas of the transition era. At the start of the 21st century, banks in these countries look very much like banks elsewhere. That is, they are by no means problem-free, but they are struggling with the same issues as banks in other emerging market countries under financial crisis conditions. The institutional environment differs considerably among the countries. The goal we set in this article is to examine, in terms of methodology, the most important assessment criteria of a measuring model for bad loans.
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
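The polarized behaviour described above is easy to reproduce numerically. In the sketch below (an illustration, not the authors' analysis), data come from N(0,1) while the two candidate models are N(0.5,1) and N(-0.6,1): both are wrong, model 1 is slightly less wrong, and with enough data the posterior backs it with essentially full force.

```python
import math
import random

def posterior_prob_model1(data, mu1, mu2):
    """Posterior probability of model 1 (N(mu1,1)) versus model 2 (N(mu2,1))
    under equal prior odds, for data that really come from N(0,1)."""
    log_bf = sum(-0.5 * (x - mu1) ** 2 + 0.5 * (x - mu2) ** 2 for x in data)
    # Guard against overflow for very large samples.
    if log_bf > 700:
        return 1.0
    if log_bf < -700:
        return 0.0
    return 1.0 / (1.0 + math.exp(-log_bf))

rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(20000)]

# Model 1 (mean 0.5) is slightly less wrong than model 2 (mean -0.6):
p_small = posterior_prob_model1(data[:100], 0.5, -0.6)   # may go either way
p_large = posterior_prob_model1(data, 0.5, -0.6)          # near-certain for model 1
```

With 100 observations the posterior can still favour either model (the overconfidence-before-reliability regime); with 20,000 it is driven to one extreme, mirroring the spuriously high posterior probabilities reported for trees.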
A multi-state model for wind farms considering operational outage probability
DEFF Research Database (Denmark)
Cheng, Lin; Liu, Manjun; Sun, Yuanzhang
2013-01-01
developed by considering the following factors: running time, operating environment, operating conditions, and wind speed fluctuations. A multi-state model for wind farms is also established. Numerical results illustrate that the proposed model can be well applied to power system reliability assessment… power penetration levels. Therefore, a more comprehensive analysis of WECS as well as an appropriate reliability assessment model are essential for maintaining the reliable operation of power systems. In this paper, the impact of wind turbine outage probability on system reliability is firstly…
DEFF Research Database (Denmark)
Luo, Yangjun; Wu, Xiaoxiang; Zhou, Mingdong
2015-01-01
on a probability-interval mixed reliability model, the imprecision of design parameters is modeled as interval uncertainties fluctuating within allowable tolerance bounds. The optimization model is defined as minimizing the total manufacturing cost under mixed reliability index constraints, which are further… transformed into their equivalent formulations by using the performance measure approach. The optimization problem is then solved with sequential approximate programming. Meanwhile, a numerically stable algorithm based on the trust region method is proposed to efficiently update the target performance…
Fishnet model for failure probability tail of nacre-like imbricated lamellar materials
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m²). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick that occupy only a small fraction of the volume. In engineering applications, however, a failure probability of ≤10⁻⁶ is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since more than 10⁸ tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: a ≈10% strength increase at a tail failure probability of 10⁻⁶, and a decrease of the tail probability by 1 to 2 orders of magnitude at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest-link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.
International Nuclear Information System (INIS)
Casas Galiano, G.; Grau Malonda, A.
1994-01-01
An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high.
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
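The Panel's arithmetic can be checked directly. The 10⁻⁶ bound is from the risk assessment; the rest is the standard formulas for the means of the uniform and triangular distributions.

```python
# Expected values of the two candidate models for a "negligible" probability.
max_p = 1e-6

# Uniform(0, 1e-6): the mean is the midpoint.
uniform_mean = max_p / 2.0

# Triangular(min=0, mode=0, max=1e-6): the mean is (min + mode + max) / 3.
triangular_mean = (0.0 + 0.0 + max_p) / 3.0
```

The two means, 5 × 10⁻⁷ and about 3.3 × 10⁻⁷, differ by less than a factor of two, which is the sense in which the bias the Panel identified is modest.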
Directory of Open Access Journals (Sweden)
Onur Satir
2016-09-01
Forest fires are one of the most important factors in environmental risk assessment and the main cause of forest destruction in the Mediterranean region. Forestlands have a number of known benefits, such as decreasing soil erosion and containing wildlife habitats. Forests are also important players in the carbon cycle and in reducing the impacts of climate change. This paper discusses forest fire probability mapping of a Mediterranean forestland using a multiple data assessment technique. An artificial neural network (ANN) method was used to map forest fire probability in the Upper Seyhan Basin (USB) in Turkey. A multi-layer perceptron (MLP) approach based on the back-propagation algorithm was applied to physical, anthropogenic, climate and fire occurrence datasets. The result was validated using relative operating characteristic (ROC) analysis. The coefficient of accuracy of the MLP was 0.83. Landscape features input to the model were assessed statistically, using the Pearson correlation coefficient, to identify the most descriptive factors for forest fire probability mapping. Landscape features such as elevation (R = −0.43), tree cover (R = 0.93) and temperature (R = 0.42) were strongly correlated with forest fire probability in the USB region.
International Nuclear Information System (INIS)
Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t
2012-01-01
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
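The contrast between stepwise selection and the LASSO can be illustrated with a standard textbook result (not taken from this study): under an orthonormal design, the LASSO solution is soft-thresholding of the least-squares estimates, whereas subset-style selection behaves like hard thresholding — keep or drop, never shrink. The coefficient values below are hypothetical.

```python
def soft_threshold(b, lam):
    """LASSO solution for one coefficient under an orthonormal design:
    shrink the least-squares estimate b toward zero by lam, zeroing small ones."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

def hard_threshold(b, lam):
    """Stepwise/best-subset-like behaviour: keep or drop, never shrink."""
    return b if abs(b) > lam else 0.0

ols = [2.5, -1.8, 0.4, 0.05, -0.3]  # hypothetical least-squares estimates
lam = 0.5
lasso = [soft_threshold(b, lam) for b in ols]
stepwise_like = [hard_threshold(b, lam) for b in ols]
```

Both rules zero the three small coefficients, but only the LASSO also shrinks the retained ones, which is one mechanism behind its better out-of-sample predictive behaviour in settings like the NTCP models above.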
Directory of Open Access Journals (Sweden)
Mitkovska-Trendova Katerina
2014-01-01
The main goal of the presented research is to define a methodology for determining the transition probabilities in a Markov Decision Process, on the example of optimizing quality accuracy through its main measure (percent of scrap) in a Performance Measurement System (PMS). This research had two main driving forces. First, there is today's urge to introduce more robust, mathematically founded methods and tools in different enterprise areas, including PMSs. Second, since Markov Decision Processes were chosen as such a tool, certain shortcomings of the approach had to be handled; the calculation of the transition probabilities is precisely one of its weak points. The proposed methodology for calculating the transition probabilities is based on recorded historical data, and the probabilities are calculated for each possible transition from a state after one run to a state after the following run of the influential factor (e.g. machine). The methodology encompasses several steps: collecting different data connected to the percent of scrap and processing them according to the needs of the methodology, determining the limits of the states for every influential factor, classifying the data from real batches according to the determined states, and calculating the transition probabilities from one state to another for every action. The implementation of the Markov Decision Process model with the proposed methodology resulted in an optimal policy that showed significant differences in the percent of scrap compared to the real situation, in which the optimization of the percent of scrap was done heuristically (5.2107% versus 13.5928%).
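The counting step of such a methodology can be sketched as follows; the state labels and the recorded history are invented for illustration.

```python
from collections import defaultdict

def estimate_transition_probabilities(state_sequence):
    """Estimate Markov transition probabilities from a recorded history of
    states (e.g. scrap-percentage classes observed run after run) by counting
    observed transitions and normalizing each row."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, s_next in zip(state_sequence, state_sequence[1:]):
        counts[s][s_next] += 1
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        probs[s] = {s_next: c / total for s_next, c in row.items()}
    return probs

# States: scrap classes "low", "mid", "high" observed over consecutive runs.
history = ["low", "low", "mid", "low", "high", "mid", "mid", "low", "low", "mid"]
P = estimate_transition_probabilities(history)
```

In a full implementation one such matrix would be estimated per action (per maintenance or process decision), and the resulting matrices would parameterize the Markov Decision Process whose optimal policy is then computed.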
The Tdgl Equation for Car-Following Model with Consideration of the Traffic Interruption Probability
Ge, Hong-Xia; Zhang, Yi-Qiang; Kuang, Hua; Lo, Siu-Ming
2012-07-01
A car-following model which involves the effects of traffic interruption probability is further investigated. The stability condition of the model is obtained through the linear stability analysis. The reductive perturbation method is taken to derive the time-dependent Ginzburg-Landau (TDGL) equation to describe the traffic flow near the critical point. Moreover, the coexisting curve and the spinodal line are obtained by the first and second derivatives of the thermodynamic potential, respectively. The analytical results show that considering the interruption effects could further stabilize traffic flow.
Probability of detection model for the non-destructive inspection of steam generator tubes of PWRs
Yusa, N.
2017-06-01
This study proposes a probability of detection (POD) model to discuss the capability of non-destructive testing methods for the detection of stress corrosion cracks appearing in the steam generator tubes of pressurized water reactors. Three-dimensional finite element simulations were conducted to evaluate eddy current signals due to stress corrosion cracks. The simulations consider an absolute type pancake probe and model a stress corrosion crack as a region with a certain electrical conductivity inside to account for eddy currents flowing across a flaw. The probabilistic nature of a non-destructive test is simulated by varying the electrical conductivity of the modelled stress corrosion cracking. A two-dimensional POD model, which provides the POD as a function of the depth and length of a flaw, is presented together with a conventional POD model characterizing a flaw using a single parameter. The effect of the number of the samples on the PODs is also discussed.
A cellular automata model of traffic flow with variable probability of randomization
International Nuclear Information System (INIS)
Zheng Wei-Fan; Zhang Ji-Ye
2015-01-01
Research on the stochastic behavior of traffic flow is important to understand the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of vehicles before him, and his decision-making process is related to the interactional potential. Compared with the traditional cellular automata model, the modeling is more suitable for the driver’s random decision-making process based on the vehicle and traffic situations in front of him in actual traffic. From the improved model, the fundamental diagram (flow–density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. (paper)
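A minimal sketch of this kind of model, assuming a Nagel–Schreckenberg-style ring-road update in which the randomization probability depends on the gap to the vehicle ahead; the functional form standing in for the "interactional potential" is invented here, not taken from the paper.

```python
import random

def step_nasch_variable_p(positions, speeds, road_length, v_max, rng,
                          p_base=0.25, alpha=2.0):
    """One parallel update of a Nagel-Schreckenberg ring road in which the
    randomization probability grows as the gap to the car ahead shrinks."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(speeds[i] + 1, v_max)                    # acceleration
        v = min(v, gap)                                  # braking, no collision
        p = min(1.0, p_base * (1.0 + alpha / (gap + 1))) # gap-dependent randomization
        if v > 0 and rng.random() < p:
            v -= 1                                       # random slowdown
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_length
                     for i in range(n)]
    return new_positions, new_speeds

rng = random.Random(11)
L, N, v_max = 100, 20, 5
positions = sorted(rng.sample(range(L), N))
speeds = [0] * N
for _ in range(200):
    positions, speeds = step_nasch_variable_p(positions, speeds, L, v_max, rng)
```

Sweeping the density N/L and recording the mean flow after a warm-up period reproduces a fundamental diagram; the gap-dependent p is what distinguishes this sketch from the constant-randomization classical model.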
Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen
2010-04-01
The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed. (c) 2009 Elsevier Inc. All rights reserved.
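Numerically, the LCD of a birth-death process absorbed at 0 can be approximated by power iteration on the uniformized chain restricted to the transient states 1..n_max, renormalizing after each step to condition on non-absorption. This is a generic numerical sketch, not one of the paper's analytic approximations.

```python
def lcd_approx(birth, death, n_max=60, iters=4000):
    """Approximate the limiting conditional (quasi-stationary) distribution
    of a birth-death process absorbed at 0, by power iteration on the
    uniformized chain restricted to states 1..n_max."""
    states = list(range(1, n_max + 1))
    c = max(birth(n) + death(n) for n in states) * 1.05  # uniformization rate
    v = [1.0 / n_max] * n_max
    for _ in range(iters):
        w = [0.0] * n_max
        for i, n in enumerate(states):
            b, d = birth(n), death(n)
            w[i] += v[i] * (1.0 - (b + d) / c)   # stay put
            if i + 1 < n_max:
                w[i + 1] += v[i] * b / c         # birth
            if i - 1 >= 0:
                w[i - 1] += v[i] * d / c         # death
            # death mass from state 1 is absorbed at 0 and dropped
        s = sum(w)
        v = [x / s for x in w]                   # condition on survival
    return v
```

For a subcritical linear birth-death process (birth rate λn, death rate μn with λ < μ) the LCD is geometric with ratio λ/μ, which this iteration reproduces to good accuracy on a modest truncation.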
A Novel Probability Model for Suppressing Multipath Ghosts in GPR and TWI Imaging: A Numerical Study
Directory of Open Access Journals (Sweden)
Tan Yun-hua
2015-10-01
A novel concept for suppressing multipath ghosts in Ground Penetrating Radar (GPR) and Through-Wall Imaging (TWI) is presented. Ghosts (i.e., false targets) mainly arise from the use of the Born or single-scattering approximations, which lead to linearized imaging algorithms; these approximations neglect the effect of multiple scattering (or multipath) between the electromagnetic wavefield and the object under investigation. In contrast to existing methods of suppressing multipath ghosts, the proposed method models, for the first time, the reflectivity of the probed objects as a probability function up to a normalization factor and introduces the concept of the random subaperture by randomly picking measurement locations from the entire aperture. Thus, the final radar image is a joint probability distribution that corresponds to radar images derived from multiple random subapertures. Finally, numerical experiments are used to demonstrate the performance of the proposed methodology in GPR and TWI imaging.
Modelling the impact of creep on the probability of failure of a solid oxide fuel cell stack
DEFF Research Database (Denmark)
Greco, Fabio; Frandsen, Henrik Lund; Nakajo, Arata
2014-01-01
In solid oxide fuel cell (SOFC) technology a major challenge lies in balancing thermal stresses from an inevitable thermal field. The cells are known to creep, changing the stress field over time. The main objective of this study was to assess the influence of creep on the failure probability...... of an SOFC stack. A finite element analysis of a single repeating unit of the stack was performed, in which the influence of the mechanical interactions, the temperature-dependent mechanical properties and creep of the SOFC materials are considered. Moreover, stresses from the thermo-mechanical simulation...... of sintering of the cells have been obtained and were implemented into the model of the single repeating unit. The significance of the relaxation of the stresses by creep in the cell components and its influence on the probability of cell survival was investigated. Finally, the influence of cell size
International Nuclear Information System (INIS)
Halliwell, J. J.
2009-01-01
In the quantization of simple cosmological models (minisuperspace models) described by the Wheeler-DeWitt equation, an important step is the construction, from the wave function, of a probability distribution answering various questions of physical interest, such as the probability of the system entering a given region of configuration space at any stage in its entire history. A standard but heuristic procedure is to use the flux of (components of) the wave function in a WKB approximation. This gives sensible semiclassical results but lacks an underlying operator formalism. In this paper, we address the issue of constructing probability distributions linked to the Wheeler-DeWitt equation using the decoherent histories approach to quantum theory. The key step is the construction of class operators characterizing questions of physical interest. Taking advantage of a recent decoherent histories analysis of the arrival time problem in nonrelativistic quantum mechanics, we show that the appropriate class operators in quantum cosmology are readily constructed using a complex potential. The class operator for not entering a region of configuration space is given by the S matrix for scattering off a complex potential localized in that region. We thus derive the class operators for entering one or more regions in configuration space. The class operators commute with the Hamiltonian, have a sensible classical limit, and are closely related to an intersection number operator. The definitions of class operators given here handle the key case in which the underlying classical system has multiple crossings of the boundaries of the regions of interest. We show that oscillatory WKB solutions to the Wheeler-DeWitt equation give approximate decoherence of histories, as do superpositions of WKB solutions, as long as the regions of configuration space are sufficiently large. The corresponding probabilities coincide, in a semiclassical approximation, with standard heuristic procedures
Various models for pion probability distributions from heavy-ion collisions
International Nuclear Information System (INIS)
Mekjian, A.Z.; Mekjian, A.Z.; Schlei, B.R.; Strottman, D.; Schlei, B.R.
1998-01-01
Various models for pion multiplicity distributions produced in relativistic heavy ion collisions are discussed. The models include a relativistic hydrodynamic model, a thermodynamic description, an emitting source pion laser model, and a description that generates a negative binomial distribution. The approach developed can be used to discuss other cases, which will be mentioned. The pion probability distributions for these various cases are compared. Comparisons of the pion laser model with Bose-Einstein condensation in a laser trap and with the thermal model are made. The thermal model and hydrodynamic model are also used to illustrate why the number of pions never diverges and why the Bose-Einstein correction effects are relatively small. The pion emission strength η of a Poisson emitter and a critical density η_c are connected in a thermal model by η/η_c = e^(−m/T) < 1, and this fact reduces any Bose-Einstein correction effects in the number and number fluctuation of pions. Fluctuations can be much larger than Poisson in the pion laser model and for a negative binomial description. The clan representation of the negative binomial distribution due to Van Hove and Giovannini is discussed using the present description. Applications to CERN/NA44 and CERN/NA49 data are discussed in terms of the relativistic hydrodynamic model. copyright 1998 The American Physical Society
Benchmark data set for wheat growth models
DEFF Research Database (Denmark)
Asseng, S; Ewert, F.; Martre, P
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation...
Finite element model updating of concrete structures based on imprecise probability
Biswal, S.; Ramaswamy, A.
2017-09-01
Imprecise probability based methods are developed in this study for parameter estimation in finite element model updating of concrete structures when the measurements are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model into the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
International Nuclear Information System (INIS)
Vinogradov, S.
2012-01-01
Silicon Photomultipliers (SiPM), also called Solid State Photomultipliers (SSPM), are based on Geiger-mode avalanche breakdown limited by a strong negative feedback. An SSPM can detect and resolve single photons due to the high gain and ultra-low excess noise of avalanche multiplication in this mode. Crosstalk and afterpulsing processes associated with the high gain introduce specific excess noise and deteriorate the photon number resolution of the SSPM. The probabilistic features of these processes are widely studied because of their significance for SSPM design, characterization, optimization and application, but the process modeling is mostly based on Monte Carlo simulations and numerical methods. In this study, crosstalk is considered to be a branching Poisson process, and analytical models of the probability distribution and excess noise factor (ENF) of SSPM signals, based on the Borel distribution as an advance on the geometric distribution models, are presented and discussed. The models are found to be in good agreement with the experimental probability distributions for dark counts and few-photon spectra over a wide range of fired-pixel numbers, as well as with the observed super-linear behavior of the crosstalk ENF.
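For reference, the Borel distribution invoked above has probability mass function P(n) = e^(−μn)(μn)^(n−1)/n! for n ≥ 1 and branching parameter 0 < μ < 1, with mean 1/(1−μ). A direct implementation:

```python
import math

def borel_pmf(n, mu):
    """Borel distribution P(n) = exp(-mu*n) * (mu*n)**(n-1) / n!:
    the total-count distribution of a branching Poisson process
    triggered by one primary avalanche (0 < mu < 1, n >= 1)."""
    return math.exp(-mu * n) * (mu * n) ** (n - 1) / math.factorial(n)
```

The mean 1/(1−μ) diverges as μ approaches 1, which is one way to see why strong crosstalk degrades the photon-number resolution of an SSPM.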
Empirical probability model of cold plasma environment in the Jovian magnetosphere
Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete
2015-04-01
We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although many sophisticated radiation models exist that treat energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models into the probability domain, we can predict extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) and Bagenal and Delamere, 2011 (BD11). These models were scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model into the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specified percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore, the required computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions in the latitudinal direction are also included in the model to support the high-latitude orbit of the JUICE spacecraft.
Fast Outage Probability Simulation for FSO Links with a Generalized Pointing Error Model
Ben Issaid, Chaouki
2017-02-07
Over the past few years, free-space optical (FSO) communication has gained significant attention. In fact, FSO can provide cost-effective and unlicensed links with high-bandwidth capacity and low error rate, making it an exciting alternative to traditional wireless radio-frequency communication systems. However, the system performance is affected not only by the presence of atmospheric turbulence, which occurs due to random fluctuations in the air refractive index, but also by the existence of pointing errors. Metrics such as the outage probability, which quantifies the probability that the instantaneous signal-to-noise ratio is smaller than a given threshold, can be used to analyze the performance of this system. In this work, we consider weak and strong turbulence regimes, and we study the outage probability of an FSO communication system under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach based on the exponential twisting technique to offer fast and accurate results.
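As a toy illustration of importance sampling with exponential twisting (for a Gaussian, twisting reduces to a mean shift), the following estimates a small left-tail probability of the kind that appears in outage analysis. The FSO-specific channel and pointing error models are not reproduced here.

```python
import math
import random

def outage_prob_is(threshold_z=-4.0, n=200_000, seed=1):
    """Importance-sampling estimate of P(Z < z0) for standard normal Z,
    sampling from N(z0, 1) and reweighting by the likelihood ratio
    phi(x)/phi_z0(x) = exp(-z0*x + z0**2/2)."""
    rng = random.Random(seed)
    mu = threshold_z
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        if x < threshold_z:
            acc += math.exp(-mu * x + 0.5 * mu * mu)  # likelihood ratio
    return acc / n
```

A naive Monte Carlo estimate of P(Z < −4) ≈ 3.17e-5 would need tens of millions of samples for comparable accuracy; shifting the sampling mean onto the rare region makes most samples informative.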
International Nuclear Information System (INIS)
Koshinchanov, Georgy; Dimitrov, Dobri
2008-01-01
The characteristics of rainfall intensity are important for many purposes, including design of sewage and drainage systems, tuning flood warning procedures, etc. The estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5 or 10 min) with a given return period (e.g. 20 or 100 years). The traditional approach to evaluating such precipitation intensities is to process the pluviometer records and fit probability distributions to samples of intensities valid for certain locations or regions. These estimates then become part of the state regulations used for various economic activities. Two problems occur with this approach: 1. Due to various factors the climate conditions change, and the precipitation intensity estimates need regular updating; 2. Because the extremes of the probability distribution are of particular importance in practice, the distribution fitting methodology requires specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: - The method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; - As above, but with separate modeling of the probability distribution for the middle and high probability quantiles; - A method similar to the first one, but with an intensity threshold of 0.36 mm/min; - Another method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted for Bulgaria by S. Gerasimov; - Next method is considering only
Kazak, Sibel; Pratt, Dave
2017-01-01
This study considers probability models as tools for both making informal statistical inferences and building stronger conceptual connections between data and chance topics in teaching statistics. In this paper, we aim to explore pre-service mathematics teachers' use of probability models for a chance game, where the sum of two dice matters in…
[Estimation of transition probability in diameter grade transition model of forest population].
Qu, Zhilin; Hu, Haiqing
2006-12-01
Based on the theories of statistical analysis and differential equations, two methods are given for estimating the transition probability in the diameter grade transition model of a forest population. The first method is used when two groups of observation data are given and it is not necessary to consider the environmental factors of the forest stand, while the second is used when the environmental factors are known and no observation data are needed. The results of case studies showed that both methods are easy to operate and highly practical, and that they have importance for theoretical guidance and practical application in forest management.
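When individual grade-to-grade transitions between two observation times can be counted, the simplest estimator of the transition matrix row-normalizes the observed counts. This is a basic sketch of the count-based route; the two estimators in the paper are more elaborate.

```python
def estimate_transition_matrix(counts):
    """Row-normalize observed diameter-grade transition counts into a
    Markov transition probability matrix. counts[i][j] is the number of
    trees observed moving from grade i to grade j between two censuses."""
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 0.0 for c in row])
    return P
```

Each row of the result is a probability distribution over destination grades, so the matrix can be iterated to project the stand's diameter distribution forward in time.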
International Nuclear Information System (INIS)
Galiano, G.; Grau, A.
1994-01-01
An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X-ray processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high. (Author)
2016-12-01
and identifying sources of smuggled nuclear material; however, it may also be used to determine a material's origin in analysis of post detonation… RIMS analysis. Within this equation from [10], the desired cross section for ionization is contained. After the curve fitting was complete, the ionization probability model was executed and the results
Energy Technology Data Exchange (ETDEWEB)
Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)
2013-09-09
This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software algorithm currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.
Setting Parameters for Biological Models With ANIMO
Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran
2014-01-01
ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions
Li, Qi-Lang; Wong, S. C.; Min, Jie; Tian, Shuo; Wang, Bing-Hong
2016-08-01
This study examines a cellular automata traffic flow model that considers the heterogeneity of vehicle acceleration and the delay probability of vehicles. Computer simulations are used to identify three typical phases in the model: free flow, synchronized flow, and wide moving traffic jam. In the synchronized flow region of the fundamental diagram, low- and high-velocity vehicles compete with each other and play an important role in the evolution of the system. The analysis shows that there are two types of bistable phases. In contrast, the original Nagel-Schreckenberg cellular automata traffic model exhibits only two kinds of traffic conditions, namely free flow and traffic jams; the synchronized flow phase and bistable phases are not found there.
Modelling occupants’ heating set-point preferences
DEFF Research Database (Denmark)
Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn
2011-01-01
Discrepancies between simulated and actual occupant behaviour can offset the actual energy consumption by several orders of magnitude compared to simulation results. Thus, there is a need to set up guidelines to increase the reliability of forecasts of environmental conditions and energy consumpt...
Directory of Open Access Journals (Sweden)
WenJun Zhang
2016-12-01
In most link prediction methods, all predicted missing links are ranked according to their scores. In the practical application of prediction results, starting from the first link with the highest score in the ranking list, we verify each link one by one through experiments or other means. Nevertheless, how to find an occurrence pattern of true missing links in the ranking list has seldom been reported. In the present study, I propose a mathematical model for the relationship between the cumulative number of predicted true missing links (y) and the cumulative number of predicted missing links (x): y = K(1 − e^(−rx/K)), where K is the expected total number of true missing links, and r is the intrinsic (maximum) occurrence probability of true missing links. It can be used to predict the changes in occurrence probability of true missing links, assess the effectiveness of a prediction method, and help find the mechanism of link missing in the network. The model was validated with six prediction methods using data on tumor pathways.
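The proposed model is straightforward to implement: y(0) = 0, the initial slope equals r, and y saturates at K as more of the ranking list is verified.

```python
import math

def cum_true_links(x, K, r):
    """Saturating model y = K * (1 - exp(-r*x/K)) relating the cumulative
    number of verified predicted links x to the cumulative number of true
    missing links y found among them."""
    return K * (1.0 - math.exp(-r * x / K))
```

In use, K and r would be fitted to the verified prefix of a ranking list; the fitted K then estimates how many true missing links the method can ultimately recover.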
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using the number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
Mu, Chun-sun; Zhang, Ping; Kong, Chun-yan; Li, Yang-ning
2015-09-01
To study the application of a Bayes probability model in differentiating yin and yang jaundice syndromes in neonates. A total of 107 jaundiced neonates admitted to hospital within 10 days after birth were assigned to two groups according to syndrome differentiation: 68 in the yang jaundice syndrome group and 39 in the yin jaundice syndrome group. Data collected for the neonates were factors related to jaundice before, during and after birth. Blood routines, liver and renal functions, and myocardial enzymes were tested on the admission day or the next day. A logistic regression model and Bayes discriminant analysis were used to screen factors important for yin and yang jaundice syndrome differentiation. Finally, a Bayes probability model for yin and yang jaundice syndromes was established and assessed. Factors screened as important for yin and yang jaundice syndrome differentiation included mother's age, mother with gestational diabetes mellitus (GDM), gestational age, asphyxia, ABO hemolytic disease, red blood cell distribution width (RDW-SD), platelet-large cell ratio (P-LCR), serum direct bilirubin (DBIL), alkaline phosphatase (ALP), and cholinesterase (CHE). Bayes discriminant analysis was performed with SPSS to obtain the Bayes discriminant function coefficients, and the discriminant functions were established accordingly. Yang jaundice syndrome: y1 = −21.701 + 2.589 × mother's age + 1.037 × GDM − 17.175 × asphyxia + 13.876 × gestational age + 6.303 × ABO hemolytic disease + 2.116 × RDW-SD + 0.831 × DBIL + 0.012 × ALP + 1.697 × P-LCR + 0.001 × CHE. Yin jaundice syndrome: y2 = −33.511 + 2.991 × mother's age + 3.960 × GDM − 12.877 × asphyxia + 11.848 × gestational age + 1.820 × ABO hemolytic disease + 2.231 × RDW-SD + 0.999 × DBIL + 0.023 × ALP + 1.916 × P-LCR + 0.002 × CHE. The Bayes discriminant function was hypothesis tested, giving Wilks' λ = 0.393 (P = 0.000). So Bayes
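The two discriminant functions can be applied directly by evaluating both scores for a neonate and assigning the syndrome with the larger value. The dictionary keys below are hypothetical shorthands for the ten screened factors; units follow the abstract.

```python
def bayes_jaundice_class(f):
    """Evaluate the two Bayes discriminant functions from the abstract and
    return the syndrome with the larger score, plus both scores."""
    y1 = (-21.701 + 2.589 * f["age"] + 1.037 * f["gdm"]
          - 17.175 * f["asphyxia"] + 13.876 * f["gest_age"]
          + 6.303 * f["abo"] + 2.116 * f["rdw_sd"] + 0.831 * f["dbil"]
          + 0.012 * f["alp"] + 1.697 * f["p_lcr"] + 0.001 * f["che"])
    y2 = (-33.511 + 2.991 * f["age"] + 3.960 * f["gdm"]
          - 12.877 * f["asphyxia"] + 11.848 * f["gest_age"]
          + 1.820 * f["abo"] + 2.231 * f["rdw_sd"] + 0.999 * f["dbil"]
          + 0.023 * f["alp"] + 1.916 * f["p_lcr"] + 0.002 * f["che"])
    return ("yang" if y1 >= y2 else "yin"), y1, y2
```

Binary factors (GDM, asphyxia, ABO hemolytic disease) are coded 0/1; the remaining inputs are the measured laboratory and obstetric values.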
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-01
Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the "five Rs" (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider "stem-like cancer cells" (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. In sample calculations with linear quadratic parameters α = 0.3 per Gy, α∕β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are the ones appropriate for individualized
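A minimal member of the TCP model class discussed above is the Poisson TCP with linear-quadratic cell kill and no repopulation or SLCC-SLCC interactions; the initial clonogen number n0 is an assumed input, and the model is a simplified sketch rather than the authors' full stochastic formulation.

```python
import math

def tcp_poisson(total_dose, dose_per_fx, n0, alpha, alpha_beta):
    """Poisson TCP with linear-quadratic per-fraction survival and no
    repopulation: TCP = exp(-n0 * SF**n_fractions), where
    SF = exp(-(alpha*d + beta*d**2)) and beta = alpha / (alpha/beta)."""
    beta = alpha / alpha_beta
    n_fx = total_dose / dose_per_fx
    sf_per_fx = math.exp(-(alpha * dose_per_fx + beta * dose_per_fx ** 2))
    surviving = n0 * sf_per_fx ** n_fx          # expected surviving clonogens
    return math.exp(-surviving)
```

With α = 0.3 per Gy and α/β = 10 Gy, raising the dose from 60 Gy to 64 Gy at 2 Gy per fraction increases the predicted TCP, mirroring the direction of the boost effect discussed in the abstract; with α = 0.03 per Gy the same escalation barely moves the TCP.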
Process setting models for the minimization of costs of defectives
African Journals Online (AJOL)
Dr Obe
2. Optimal Setting Process Models. 2.1 Optimal setting of the process mean in the case of a one-sided limit. In a filling operation, the process average net weight must be set. The standards prescribe the minimum weight, which is printed on the packet. This set of quality control problems has a one-sided limit (the minimum net weight).
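The one-sided-limit setting problem can be sketched as choosing the process mean that balances material give-away against the cost of underweight (defective) packets. The cost structure below, a linear fill cost plus a fixed penalty per defective, is an illustrative assumption, not the paper's model.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def optimal_fill_mean(L, sigma, c_fill, c_defective, steps=2000):
    """Grid-search the process mean minimizing expected cost per packet:
    c_fill per unit of mean fill plus c_defective times the probability
    that a normally distributed net weight falls below the minimum L."""
    lo, hi = L, L + 6.0 * sigma
    best_mu, best_cost = lo, float("inf")
    for i in range(steps + 1):
        mu = lo + (hi - lo) * i / steps
        p_def = norm_cdf((L - mu) / sigma)     # P(net weight < minimum)
        cost = c_fill * mu + c_defective * p_def
        if cost < best_cost:
            best_mu, best_cost = mu, cost
    return best_mu
```

As the penalty for defectives grows relative to the fill cost, the optimal mean moves further above the one-sided limit, which is the basic trade-off these setting models formalize.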
DEFF Research Database (Denmark)
Lühr, Armin; Löck, Steffen; Jakobi, Annika
2017-01-01
PURPOSE: Objectives of this work are (1) to derive a general clinically relevant approach to model tumor control probability (TCP) for spatially variable risk of failure and (2) to demonstrate its applicability by estimating TCP for patients planned for photon and proton irradiation. METHODS...... AND MATERIALS: The approach divides the target volume into sub-volumes according to retrospectively observed spatial failure patterns. The product of all sub-volume TCPi values reproduces the observed TCP for the total tumor. The derived formalism provides for each target sub-volume i the tumor control dose (D...... clinical target volume (CTV), and elective CTV (CTVE). The risk of a local failure in each of these sub-volumes was taken from the literature. RESULTS: Convenient expressions for D50,i and γ50,i were provided for the Poisson and the logistic model. Comparable TCP estimates were obtained for photon...
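The product form over target sub-volumes combines directly with a standard logistic dose-response, TCP_i = 1/(1 + (D50_i/D_i)^(4·γ50_i)); a minimal sketch with illustrative parameter values:

```python
def tcp_total(doses, d50s, gamma50s):
    """Total TCP as the product of per-sub-volume logistic TCPs,
    TCP_i = 1 / (1 + (D50_i / D_i) ** (4 * gamma50_i)).
    doses, d50s, gamma50s are per-sub-volume lists."""
    tcp = 1.0
    for d, d50, g in zip(doses, d50s, gamma50s):
        tcp *= 1.0 / (1.0 + (d50 / d) ** (4.0 * g))
    return tcp
```

Each factor equals 0.5 when its sub-volume receives exactly D50_i, and γ50_i sets the normalized slope there, so retrospectively observed failure rates per sub-volume can be mapped onto (D50_i, γ50_i) pairs as in the formalism above.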
International Nuclear Information System (INIS)
Partridge, Mike; Tree, Alison; Brock, Juliet; McNair, Helen; Fernandez, Elizabeth; Panakis, Niki; Brada, Michael
2009-01-01
Introduction: The prognosis from non-small cell lung cancer remains poor, even in those patients suitable for radical radiotherapy. The ability of radiotherapy to achieve local control is hampered by the sensitivity of normal structures to irradiation at the high tumour doses needed. This study aimed to assess the potential gain in tumour control probability from dose escalation facilitated by moderate deep inspiration breath-hold. Method: The data from 28 patients, recruited into two separate studies, were used. These patients underwent planning with and without the use of moderate deep inspiration breath-hold with an active breathing control (ABC) device. Whilst maintaining the mean lung dose (MLD) at the level of the conventional plan, the ABC plan dose was theoretically escalated to a maximum of 84 Gy, constrained by usual normal tissue tolerances. Calculations were performed using data for both lungs and for the ipsilateral lung only. The resulting local progression-free survival at 30 months was calculated using a standard logistic model. Results: The prescription dose could be escalated from 64 Gy to a mean of 73.7 ± 6.5 Gy without margin reduction, which represents a statistically significant increase in tumour control probability from 0.15 ± 0.01 to 0.29 ± 0.11 (p < 0.0001). The results were not statistically different whether both lungs or just the ipsilateral lung was used for the calculations. Conclusion: A near-doubling of tumour control probability is possible with modest dose escalation, which can be achieved with no extra increase in lung dose if deep inspiration breath-hold techniques are used.
Directory of Open Access Journals (Sweden)
Douglas C. Tozer
2016-12-01
Marsh birds are notoriously elusive, with variation in detection probability across species, regions, seasons, and different times of day and weather. It is therefore important to develop regional field survey protocols that maximize detections, but that also produce data for estimating and analytically adjusting for remaining differences in detections. We aimed to improve regional field survey protocols by estimating detection probability of eight elusive marsh bird species throughout two regions that have ongoing marsh bird monitoring programs: the southern Canadian Prairies (Prairie region) and the southern portion of the Great Lakes basin and parts of southern Québec (Great Lakes-St. Lawrence region). We accomplished our goal using generalized binomial N-mixture models and data from ~22,300 marsh bird surveys conducted between 2008 and 2014 by Bird Studies Canada's Prairie, Great Lakes, and Québec Marsh Monitoring Programs. Across all species, on average, detection probability was highest in the Great Lakes-St. Lawrence region from the beginning of May until mid-June, and then fell throughout the remainder of the season until the end of June; was lowest in the Prairie region in mid-May and then increased throughout the remainder of the season until the end of June; was highest during darkness compared with light; and did not vary significantly according to temperature (range: 0-30°C), cloud cover (0%-100%), or wind (0-20 kph), or during morning versus evening. We used our results to formulate improved marsh bird survey protocols for each region. Our analysis and recommendations are useful and contribute to conservation of wetland birds at various scales, from local single-species studies to the continental North American Marsh Bird Monitoring Program.
Directory of Open Access Journals (Sweden)
Nils Ternès
2017-05-01
Background: Thanks to advances in genomics and targeted treatments, more and more biomarker-based prediction models are being developed to predict potential benefit from treatments in a randomized clinical trial. Although the methodological framework for the development and validation of prediction models in a high-dimensional setting is becoming well established, no clear guidance exists yet on how to estimate expected survival probabilities in a penalized model with biomarker-by-treatment interactions. Methods: Based on a parsimonious biomarker selection in a penalized high-dimensional Cox model (lasso or adaptive lasso), we propose a unified framework to: estimate internally the predictive accuracy metrics of the developed model (using double cross-validation); estimate the individual survival probabilities at a given timepoint; construct confidence intervals thereof (analytical or bootstrap); and visualize them graphically (pointwise or smoothed with splines). We compared these strategies through a simulation study covering scenarios with or without biomarker effects. We applied the strategies to a large randomized phase III clinical trial that evaluated the effect of adding trastuzumab to chemotherapy in 1574 early breast cancer patients, for whom the expression of 462 genes was measured. Results: In our simulations, penalized regression models using the adaptive lasso estimated the survival probability of new patients with low bias and standard error; bootstrapped confidence intervals had empirical coverage probability close to the nominal level across very different scenarios. The double cross-validation performed on the training data set closely mimicked the predictive accuracy of the selected models in external validation data. We also propose a useful visual representation of the expected survival probabilities using splines. In the breast cancer trial, the adaptive lasso penalty selected a prediction model with 4
Annotation-based feature extraction from sets of SBML models.
Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron
2015-01-01
Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow the identification of additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format compiled from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable for determining characteristic features for arbitrary sets of models: the selected features vary depending on, and are specific to, the underlying model set. We show that the identified features map to concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison or model-to-model comparison.
Tomás, I; Arias-Bujanda, N; Alonso-Sampedro, M; Casares-de-Cal, M A; Sánchez-Sellero, C; Suárez-Quintanilla, D; Balsa-Castro, C
2017-09-14
Although a distinct cytokine profile has been described in the gingival crevicular fluid (GCF) of patients with chronic periodontitis, there is no evidence of GCF cytokine-based predictive models being used to diagnose the disease. Our objectives were to obtain GCF cytokine-based predictive models and to develop nomograms derived from them. A sample of 150 participants was recruited: 75 periodontally healthy controls and 75 subjects affected by chronic periodontitis. Sixteen mediators were measured in GCF using the Luminex 100™ instrument: GMCSF, IFNgamma, IL1alpha, IL1beta, IL2, IL3, IL4, IL5, IL6, IL10, IL12p40, IL12p70, IL13, IL17A, IL17F and TNFalpha. Cytokine-based models were obtained using multivariate binary logistic regression. Models were selected for their ability to predict chronic periodontitis, considering the different roles of the cytokines involved in the inflammatory process. The outstanding predictive accuracy of the resulting smoking-adjusted models showed that IL1alpha, IL1beta and IL17A in GCF are very good biomarkers for distinguishing patients with chronic periodontitis from periodontally healthy individuals. The predictive ability of these pro-inflammatory cytokines was increased by incorporating IFNgamma and IL10. The nomograms revealed the extent of periodontitis-associated imbalances between these pro-inflammatory and anti-inflammatory cytokines in terms of a particular probability of having chronic periodontitis.
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km³ that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration, using either hydraulic heads alone or both hydraulic heads and PFL flow rates as calibration targets. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, thereby reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models for simulating fluid flow in fractured geological media.
Presenting a Model for Setting in Narrative Fiction Illustration
Directory of Open Access Journals (Sweden)
Hajar Salimi Namin
2017-12-01
The present research aims at presenting a model for evaluating and enhancing the training of setting in illustration for narrative fiction, for undergraduate students of graphic design who are weak in setting. The research utilized experts' opinions through a survey. The designed model was submitted to eight experts, and their opinions were used to adjust and improve the model. The research instruments were notes, materials in textbooks, papers, and related websites, as well as questionnaires. Results indicated that, to evaluate and enhance the level of training of setting in illustration for narrative fiction, one needs to extract the sub-indexes of setting. Moreover, definition and recognition of the model of setting helps undergraduate students of graphic design enhance their setting skills by recognizing the details of setting. Accordingly, it is recommended to design training packages that enhance these sub-indexes and hence improve setting in narrative fiction illustration.
Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng
2017-08-01
Because the maximum velocity and safe headway distance of different vehicles are not exactly the same, an extended macro model of traffic flow that considers multiple optimal velocity functions with probabilities is proposed in this paper. By means of linear stability theory, the new model's linear stability condition under multiple optimal velocity functions with probabilities is obtained. The KdV-Burgers equation is derived through nonlinear analysis to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulations of the influences of multiple maximum velocities and multiple safety distances on the model's stability and traffic capacity are carried out. Three cases are explored by numerical simulation: two different maximum speeds with the same safe headway distance, two different safe headway distances with the same maximum speed, and two different maximum velocities with two different time gaps. The first case demonstrates that when the proportion of vehicles with a larger vmax increases, the traffic tends to become unstable, which also means that abrupt acceleration and braking are not conducive to traffic stability and more easily result in stop-and-go phenomena. The second case shows that when the proportion of vehicles with greater safety spacing increases, the traffic tends to become unstable, which also means that overly cautious assumptions or weak driving skills are not conducive to traffic stability. The last case indicates that an increase in maximum speed is not conducive to traffic stability, while a reduction of the safe headway distance is conducive to traffic stability. Numerical simulation shows that mixed driving and traffic diversion have no effect on traffic capacity when traffic density is low or heavy. Numerical results also show that mixed driving should be chosen to increase traffic capacity when the traffic density is lower, while traffic diversion should be chosen to increase traffic capacity when
Hybrid Compensatory-Noncompensatory Choice Sets in Semicompensatory Models
DEFF Research Database (Denmark)
Kaplan, Sigal; Bekhor, Shlomo; Shiftan, Yoram
2013-01-01
Semicompensatory models represent a choice process consisting of an elimination-based choice set formation on satisfaction of criterion thresholds and a utility-based choice. Current semicompensatory models assume a purely noncompensatory choice set formation and therefore do not support … multinomial criteria that involve trade-offs between attributes at the choice set formation stage. This study proposes a novel behavioral paradigm consisting of a hybrid compensatory-noncompensatory choice set formation process, followed by compensatory choice. The behavioral paradigm is represented …
Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen
2017-01-01
Purpose: To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Method and Materials: Clinical data on 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies of SBRT for NSCLC were collected from the literature. The TCP data were separated for Stage T1 and T2 tumors where possible, and otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ²) method, with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results: The fits to the clinical data yield consistently large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data; the fits of the other five models were indistinguishable from one another. The fitted models predict that T2 tumors require about an additional 1 Gy of physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP compared to T1 tumors. Conclusion: This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC depends strongly on BED, with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20 Gy) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
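The BED referred to here is the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)). A short sketch using the α/β = 20 Gy value fitted in the study, applied to a few common SBRT fractionation schedules (the physical doses are illustrative, not the pooled study doses):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

# Illustrative SBRT schedules (n fractions of d Gy), evaluated at alpha/beta = 20 Gy
for n, d in [(3, 18.0), (4, 12.0), (5, 10.0)]:
    print(f"{n} x {d} Gy -> BED = {bed(n, d, 20.0):.1f} Gy")
```

With α/β = 20 Gy, a 3 × 18 Gy schedule exceeds the 90 Gy BED threshold the models associate with TCP ≥ 95%, while the milder illustrative schedules fall below it, which is the kind of comparison the pooled analysis enables.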
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Energy Technology Data Exchange (ETDEWEB)
Kukla, G.; Gavin, J. [Columbia Univ., Palisades, NY (United States). Lamont-Doherty Geological Observatory
1994-05-01
This report was prepared at the Lamont-Doherty Geological Observatory of Columbia University at Palisades, New York, under subcontract to Pacific Northwest Laboratory (PNL). It is part of a larger project of global climate studies which supports site characterization work required for the selection of a potential high-level nuclear waste repository, and forms part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work under the PASS Program is currently focusing on the proposed site at Yucca Mountain, Nevada, and is under the overall direction of the Yucca Mountain Project Office, US Department of Energy, Las Vegas, Nevada. The final results of the PNL project will provide input to global atmospheric models designed to test specific climate scenarios, which will be used in the site-specific modeling work of others. The primary purpose of the data bases compiled and of the astronomic predictive models is to aid in the estimation of the probabilities of future climate states. The results will be used by two other teams working on the global climate study under contract to PNL, located at the University of Maine in Orono, Maine, and the Applied Research Corporation in College Station, Texas. This report presents the results of the third year's work on the global climate change models and the data bases describing past climates.
A scan statistic for continuous data based on the normal probability model
Directory of Open Access Journals (Sweden)
Huang Lan
2009-10-01
Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight. For such continuous data, we present a scan statistic where the likelihood is calculated using the normal probability model. It may also be used for other distributions, while still maintaining the correct alpha level. In an application of the new method, we look for geographical clusters of low birth weight in New York City.
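A minimal 1-D sketch of the idea: for each candidate window, compute the log-likelihood ratio of a normal model with separate means inside and outside the window (common variance) against the no-cluster model, and keep the window with the maximum ratio. Statistical significance would normally be assessed by Monte Carlo permutation, which is omitted here; the data are a toy series, not the birth-weight application:

```python
import math

def normal_scan_llr(values, start, length):
    """Log-likelihood ratio for one candidate window under the normal model:
    separate means inside/outside the window, common MLE variance."""
    n = len(values)
    inside = values[start:start + length]
    outside = values[:start] + values[start + length:]
    mean_all = sum(values) / n
    mean_in = sum(inside) / len(inside)
    mean_out = sum(outside) / len(outside)
    sse_null = sum((x - mean_all) ** 2 for x in values)
    sse_alt = (sum((x - mean_in) ** 2 for x in inside)
               + sum((x - mean_out) ** 2 for x in outside))
    return 0.5 * n * math.log(sse_null / sse_alt)

# Toy 1-D series with elevated values at positions 10..14
data = [0.0] * 10 + [3.0, 3.2, 2.8, 3.1, 2.9] + [0.0] * 10

# Scan all windows of length 2..7 and keep the maximum LLR
best = max(((normal_scan_llr(data, s, L), s, L)
            for s in range(len(data)) for L in range(2, 8)
            if s + L <= len(data)),
           key=lambda t: t[0])
print("best window: start", best[1], "length", best[2], "LLR", round(best[0], 2))
```

The scan correctly recovers the elevated segment; a real implementation would also run the same maximization on permuted data to obtain a p-value while maintaining the correct alpha level.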
Improving normal tissue complication probability models: the need to adopt a "data-pooling" culture.
Deasy, Joseph O; Bentzen, Søren M; Jackson, Andrew; Ten Haken, Randall K; Yorke, Ellen D; Constine, Louis S; Sharma, Ashish; Marks, Lawrence B
2010-03-01
Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy. Copyright 2010 Elsevier Inc. All rights reserved.
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
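A minimal Euler-Maruyama sketch of the two growth models with white multiplicative and additive noise; an ensemble of paths gives the empirical time-dependent PDF at any chosen time (here summarized by the final-time mean). All parameter values are illustrative only, not those of the paper:

```python
import math
import random

def simulate(model, x0, dt, n_steps, noise_mult, noise_add, rng):
    """Euler-Maruyama path for stochastic logistic or Gompertz growth
    driven by white multiplicative and additive noise."""
    x = x0
    for _ in range(n_steps):
        if model == "logistic":
            drift = x * (1.0 - x)
        else:  # Gompertz
            drift = -x * math.log(x)
        dw1 = rng.gauss(0.0, math.sqrt(dt))  # multiplicative noise increment
        dw2 = rng.gauss(0.0, math.sqrt(dt))  # additive noise increment
        x += drift * dt + noise_mult * x * dw1 + noise_add * dw2
        x = max(x, 1e-9)  # keep the population positive
    return x

rng = random.Random(42)
# Ensemble of paths: the histogram of `samples` approximates the
# time-dependent PDF at t = n_steps * dt
samples = [simulate("gompertz", 0.01, 2.5e-3, 2000, 0.2, 0.01, rng)
           for _ in range(300)]
print("mean final population:", sum(samples) / len(samples))
```

Repeating this at successive times, and for the logistic drift, reproduces the kind of time-dependent PDF comparison the paper performs, including the unimodal-to-bimodal transition as the multiplicative noise amplitude is raised.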
Mathematical models of tumor growth: translating absorbed dose to tumor control probability
International Nuclear Information System (INIS)
Sgouros, G.
1996-01-01
cell loss due to irradiation, the log-kill model, therefore, predicts that incomplete treatment of a kinetically heterogeneous tumor will yield a more proliferative tumor. The probability of tumor control in such a simulation may be obtained from the nadir in tumor cell number. If the nadir is not sufficiently low to yield a high probability of tumor control, then the tumor will re-grow. Since tumors in each sub-population are assumed lost at the same rate, cells comprising the sub-population with the shortest potential doubling time will re-grow the fastest, yielding a recurrent tumor that is more proliferative. A number of assumptions and simplifications are both implicitly and explicitly made in converting absorbed dose to tumor control probability. The modeling analyses described above must, therefore, be viewed in terms of understanding and evaluating different treatment approaches with the goal of treatment optimization rather than outcome prediction
Linear positivity and virtual probability
International Nuclear Information System (INIS)
Hartle, James B.
2004-01-01
We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics
Lee, T. S.; Yoon, S.; Jeong, C.
2012-12-01
The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and on the parameter estimation technique. A number of PDMs have been developed to describe the probability distributions of hydrological variables. For each of the developed PDMs, parameters are estimated using alternative techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear functions of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than those of the other methods. However, the ML technique is more laborious because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. Meanwhile, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear, stochastic, dynamic, and nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu search, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search, so intricate derivative information is unnecessary. Therefore, meta-heuristic approaches have been shown to be a useful strategy for solving optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for the parameter estimation of PDMs describing hydrological variables. The applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as harmony search (HS). The performance of the HS is compared to the
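The harmony search idea can be sketched for a simple case: maximum-likelihood fitting of a Gumbel (EV1) distribution, a common PDM for annual maxima. HS keeps a "harmony memory" of candidate parameter vectors and improvises new ones by memory recall, pitch adjustment, or random selection, needing no derivatives. The HS settings (memory size, HMCR, PAR, bandwidth) below are typical textbook choices, not those of the study:

```python
import math
import random

def gumbel_nll(params, data):
    """Negative log-likelihood of the Gumbel (EV1) distribution."""
    mu, beta = params
    if beta <= 0:
        return float("inf")
    z = [(x - mu) / beta for x in data]
    return len(data) * math.log(beta) + sum(zi + math.exp(-zi) for zi in z)

def harmony_search(objective, data, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, rng=None):
    """Minimal harmony search: minimize `objective` over box `bounds`."""
    rng = rng or random.Random(0)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h, data) for h in memory]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                v = memory[rng.randrange(hms)][j]
                if rng.random() < par:              # pitch adjustment
                    v += bw * (hi - lo) * rng.uniform(-1.0, 1.0)
            else:                                   # random selection
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = objective(new, data)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best]

# Synthetic annual maxima from Gumbel(mu=10, beta=2) via the inverse CDF
data_rng = random.Random(1)
data = [10.0 - 2.0 * math.log(-math.log(data_rng.random())) for _ in range(500)]
mu_hat, beta_hat = harmony_search(gumbel_nll, data, [(0.0, 20.0), (0.1, 5.0)])
print(f"mu = {mu_hat:.2f}, beta = {beta_hat:.2f}")
```

Because the search is derivative-free, swapping in the likelihood of another PDM (GEV, Burr, Weibull, ...) only requires changing the objective function, which is what makes meta-heuristics attractive here despite the extra computation time.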
DEFF Research Database (Denmark)
Rønjom, Marianne Feen; Brink, Carsten; Bentzen, Søren
2013-01-01
To develop a normal tissue complication probability (NTCP) model of radiation-induced biochemical hypothyroidism (HT) after primary radiotherapy for head and neck squamous cell carcinoma (HNSCC), with adjustment for latency and clinical risk factors.
International Nuclear Information System (INIS)
Stacey, W.M.
1992-12-01
A new computational model for neutral particle transport in the outer regions of a diverted tokamak plasma chamber is presented. The model is based on the calculation of transmission and escape probabilities using first-flight integral transport theory and the balancing of fluxes across the surfaces bounding the various regions. The geometrical complexity of the problem is included in precomputed probabilities which depend only on the mean free path of the region
DEFF Research Database (Denmark)
Casares Magaz, Oscar; Van der Heide, Uulke A; Rørvik, Jarle
2016-01-01
Background and purpose: Standard tumour control probability (TCP) models assume uniform tumour cell density across the tumour. The aim of this study was to develop an individualised TCP model by including index-tumour regions extracted from multi-parametric magnetic resonance imaging (MRI … density: a linear, a binary and a sigmoid relation. All TCP models were based on linear-quadratic cell survival curves assuming α/β = 1.93 Gy (consistent with a recent meta-analysis) and α set to obtain 70% TCP when 77 Gy was delivered to the entire prostate in 35 fractions (α = 0.18 Gy⁻¹). Results: Overall, TCP curves based on ADC maps showed larger differences between individuals than those assuming uniform cell densities. The range of the dose required to reach 50% TCP across the patient cohort was 20.1 Gy, 18.7 Gy and 13.2 Gy using an MRI-based voxel density (linear, binary and sigmoid approach …
Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques
Directory of Open Access Journals (Sweden)
Nabila Khodeir
2018-01-01
Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge in order to provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of student errors. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the roles of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student, allowing the student to discover and correct errors in his/her solution. MAST has been preliminarily evaluated empirically, and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, students using MAST showed a significant improvement from pre-test to post-test exam results in comparison to those relying on the textbook.
International Nuclear Information System (INIS)
Musho, M.K.; Kozak, J.J.
1984-01-01
A method is presented for calculating exactly the relative width (σ²)^(1/2)/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P. A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group theoretic arguments within the framework of the theory of finite Markov processes.
Directory of Open Access Journals (Sweden)
Çiğdem ÖZARİ
2018-01-01
Full Text Available In this study, we have worked on developing a brand-new index called the Fuzzy-bankruptcy index. The aim of this index is to find the default probability of any company X, independent of the sector it belongs to. Fuzzy logic is used to capture how the interpretation of financial ratios changes over time and across sectors, and the new index is created to eliminate the relativity of financial ratios. Four of the five main input variables used for the fuzzy process are chosen from both factor analysis and clustering, and the last input variable is calculated from the Merton model. Analyzing past cases in the default history of companies, one can identify different reasons, such as managerial arrogance, fraud and managerial mistakes, that are responsible for the very poor endings of prestigious companies like Enron and K-Mart. Because of these kinds of situations, we try to design a model with which one could get a better view of a company's financial position, which could prevent credit loan companies from investing in the wrong company, and possibly from losing all investments, using our Fuzzy-bankruptcy index.
Probability Modeling of Precipitation Extremes over Two River Basins in Northwest of China
Directory of Open Access Journals (Sweden)
Zhanling Li
2015-01-01
Full Text Available This paper focuses on probability modeling with a range of distribution models over two inland river basins in China, together with estimations of return levels for various return periods. Both annual and seasonal maximum precipitation (MP) are investigated based on daily precipitation data at 13 stations from 1960 to 2010 in the Heihe River and Shiyang River basins. Results show that the GEV, Burr, and Weibull distributions provide the best fit to both annual and seasonal MP, while the Exponential and Pareto 2 distributions show the worst fit. Overall, the estimated return levels for spring MP show decreasing trends from the upper to the middle and then to the lower reaches. Summer MP approximates annual MP both in quantity and in spatial distribution. Autumn MP shows slightly higher estimated return levels than spring MP, while remaining consistent with spring MP in spatial distribution. It is also found that the estimated return levels for annual MP derived from the various distributions differ by 22%, 36%, and 53% on average at the 20-year, 50-year, and 100-year return periods, respectively.
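Return levels of the kind reported above are typically read off a fitted extreme-value distribution. A minimal sketch with synthetic annual maxima follows; the GEV parameters and sample size are placeholders, not values fitted to the Heihe or Shiyang station data:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum precipitation (mm), 51 "years" as in 1960-2010;
# the shape, location and scale below are illustrative placeholders.
rng = np.random.default_rng(1)
annual_max = genextreme.rvs(-0.1, loc=30.0, scale=8.0, size=51, random_state=rng)

# fit a GEV to the block maxima, then invert it at the non-exceedance
# probability 1 - 1/T to get the T-year return level
c, loc, scale = genextreme.fit(annual_max)
return_levels = {T: genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
                 for T in (20, 50, 100)}
```

The spread between return levels from competing fitted distributions (GEV, Burr, Weibull, ...) at a fixed return period is exactly the 22-53% divergence the abstract quantifies.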
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data.
Smith, Gregory R; Birtwistle, Marc R
2016-01-01
A main application of mRNA sequencing (mRNAseq) is determining lists of differentially expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most packages use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value on the dispersion parameter at moderate mean read counts. We show how this leads current approaches to overestimate variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution may thus be preferred, since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across software packages, particularly for moderately to highly expressed genes.
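The quadratic mean-variance scaling mentioned above can be checked directly against the closed-form beta-binomial moments. A small sketch, with parameter values that are illustrative rather than fitted to any dataset:

```python
import numpy as np
from scipy.stats import betabinom

n, a, b = 1000, 2.0, 50.0  # total reads and Beta shape parameters (illustrative)
s = a + b

# closed-form beta-binomial mean and variance
mean = n * a / s
var = n * a * b * (s + n) / (s**2 * (s + 1))

# for fixed n and s, the variance is a quadratic polynomial in the mean:
# var = mean * (1 - mean/n) * (n + s)/(s + 1)
var_quadratic = mean * (1 - mean / n) * (n + s) / (s + 1)

# cross-check against scipy's implementation
m, v = betabinom.stats(n, a, b, moments="mv")
```

The factor (n + s)/(s + 1) plays the role of an overdispersion multiplier relative to the plain binomial variance mean·(1 − mean/n).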
Mazas, Franck; Hamm, Luc; Kergadallan, Xavier
2013-04-01
In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity of an accurate estimation of extreme sea levels for risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and the stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge component, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component, and the distribution of extreme sea levels is then obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from univariate extreme value theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both the lower and upper tails of the stochastic distribution, including the estimation of the uncertainties associated with the fit by computing confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indexes for both surges and levels are thus introduced. The methodology is illustrated with a case study at Brest, France.
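A bare-bones version of the POT/Poisson-GPD step might look like the sketch below. The surge series is synthetic, the 98th-percentile threshold is an assumed choice, and declustering into independent events and the tide-surge convolution are omitted:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic hourly surge series (~2.3 years); Gumbel parameters are
# illustrative, not values from the Brest case study.
rng = np.random.default_rng(2)
surge = rng.gumbel(loc=0.3, scale=0.15, size=20000)

u = np.quantile(surge, 0.98)                        # POT threshold (assumed)
exceedances = surge[surge > u] - u
c, _, scale = genpareto.fit(exceedances, floc=0.0)  # GPD fit to surge peaks

years = len(surge) / (365.25 * 24)
lam = len(exceedances) / years                      # Poisson rate of exceedances/year
T = 100.0                                           # return period in years
return_level = u + genpareto.ppf(1 - 1 / (lam * T), c, loc=0.0, scale=scale)
```

A real application would first decluster the hourly values into independent surge events (hence the extremal index discussed above) and would finish by convolving the fitted surge distribution with the tide signal.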
Paap, R; van Nierop, E; van Heerde, HJ; Wedel, M; Franses, PH; Alsem, KJ
2005-01-01
We present a statistical model for voter choice that incorporates a consideration set stage and final vote intention stage. The first stage involves a multivariate probit (MVP) model to describe the probabilities that a candidate or a party gets considered. The second stage of the model is a
Ruin probability with claims modeled by a stationary ergodic stable process
Mikosch, T; Samorodnitsky, G
2000-01-01
For a random walk with negative drift we study the exceedance probability (ruin probability) of a high threshold. The steps of this walk (claim sizes) constitute a stationary ergodic stable process. We study how ruin occurs in this situation and evaluate the asymptotic behavior of the ruin
Influencing the Probability for Graduation at Four-Year Institutions: A Multi-Model Analysis
Cragg, Kristina M.
2009-01-01
The purpose of this study is to identify student and institutional characteristics that influence the probability for graduation. The study delves further into the probability for graduation by examining how far the student deviates from the institutional mean with respect to academics and affordability; this concept is referred to as the "match."…
A Novel Adaptive Conditional Probability-Based Predicting Model for User’s Personality Traits
Directory of Open Access Journals (Sweden)
Mengmeng Wang
2015-01-01
Full Text Available With the pervasive increase in social media use, the explosion of user-generated data provides a potentially very rich source of information, which plays an important role in helping online researchers understand users' behaviors deeply. Since personality traits are the driving force of a user's behaviors, in this paper, along with social network features, we first extract linguistic features, emotional statistical features, and topic features from users' Facebook status updates, and then quantify the importance of features via the Kendall correlation coefficient. Then, on the basis of weighted features and dynamically updated thresholds of personality traits, we deploy a novel adaptive conditional probability-based predicting model, which considers prior knowledge of correlations between a user's personality traits, to predict the user's Big Five personality traits. In the experimental work, we explore the existence of correlations between users' personality traits, which provides better theoretical support for our proposed method. Moreover, on the same Facebook dataset, compared to other methods, our method can achieve an F1-measure of 80.6% when taking into account correlations between personality traits, an impressive improvement of 5.8% over other approaches.
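The Kendall-based feature weighting step can be sketched as follows. The feature matrix and trait scores here are synthetic stand-ins; the real pipeline uses Facebook-derived linguistic, emotional, topic and social-network features:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
# rows: users, columns: candidate features; one Big Five trait score per user
X = rng.normal(size=(200, 3))
trait = 0.8 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.5, size=200)

# weight each feature by the magnitude of its Kendall tau with the trait,
# then normalize so the weights sum to one
tau = np.array([abs(kendalltau(X[:, j], trait)[0]) for j in range(X.shape[1])])
weights = tau / tau.sum()
```

Kendall's tau is rank-based, so the weights are insensitive to monotone transformations of the raw feature values, which is convenient for heterogeneous text-derived features.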
Chen, Xi; Schäfer, Karina V. R.; Slater, Lee
2017-08-01
Ebullition can transport methane (CH4) at a much faster rate than other pathways, albeit over limited time and area, in wetland soils and sediments. However, field observations present large uncertainties in ebullition occurrences, and statistical models are needed to describe the functional relationship between the probability of ebullition occurrence and water level changes. A flow-through chamber was designed and installed in a mudflat of an estuarine temperate marsh. Episodic increases in CH4 concentration signaling ebullition events were observed during ebbing tides (15 events over 456 ebbing tides) and occasionally during flooding tides (4 events over 455 flooding tides). Ebullition occurrence functions were defined using logistic regression, as the relative initial and end water levels, as well as tidal amplitudes, were found to be the key functional variables related to ebullition events. Ebullition of methane was restricted by a surface frozen layer during winter; melting of this layer during spring thaw caused increases in CH4 concentration, with ebullition fluxes similar to those associated with large fluctuations in water level around spring tides. Our findings suggest that initial and end relative water levels, in addition to tidal amplitude, partly regulate ebullition events in tidal wetlands, modulated by the lunar cycle, storage of gas bubbles at different depths, and seasonal changes in the surface frozen layer. Maximum tidal strength over a few days, rather than hourly water level, may be more closely associated with the probability of ebullition occurrence, as it represents a trade-off time scale between hourly and lunar periods.
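A logistic ebullition-occurrence function of this kind can be fitted by Newton-Raphson. The sketch below uses synthetic tidal amplitudes and an assumed coefficient, not the chamber data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
amplitude = rng.uniform(0.5, 2.5, n)          # tidal amplitude (m), synthetic
X = np.column_stack([np.ones(n), amplitude])  # design matrix with intercept
beta_true = np.array([-6.0, 3.0])             # assumed: larger tides -> more ebullition
p_true = 1 / (1 + np.exp(-X @ beta_true))
y = (rng.random(n) < p_true).astype(float)    # 1 = ebullition event observed

# Newton-Raphson iterations maximizing the logistic log-likelihood
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                              # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])         # observed information
    beta += np.linalg.solve(hess, grad)
```

The fitted slope beta[1] is the log-odds increase in ebullition probability per metre of tidal amplitude; in the study the same construction is applied to relative initial and end water levels as well.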
Evolution of the Probability Measure for the Majda Model: New Invariant Measures and Breathing PDFs
Camassa, Roberto; Lin, Zhi; McLaughlin, Richard M.
2008-01-01
In 1993, Majda proposed a simple random shear model from which scalar intermittency was rigorously predicted for the invariant probability measure of passive tracers. In this work, we present an integral formulation for the tracer measure, which leads to a new, comprehensive study of its temporal evolution based on Monte Carlo simulation and direct numerical integration. An interesting, non-monotonic "breathing" phenomenon is discovered from these results and carefully defined, with a concrete example of special initial data that predicts the phenomenon. The signature of this phenomenon may persist at long times, characterized by the approach of the PDF core to its infinite-time, invariant value. We find that this approach may depend strongly on the non-dimensional Péclet number, of which the invariant measure itself is independent. Further, the "breathing" PDF is recovered as a new invariant measure on a distinguished time scale in the diffusionless limit. Rigorous asymptotic analysis is also performed to identify the Gaussian core of the invariant measures, and the critical rate at which the heavy, stretched-exponential regime propagates towards the tail is calculated as a function of time.
Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S
2010-05-21
Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceeds the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles, provides a framework that plays instead to the strength of microarrays. We present theoretical results showing that, under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set the fractional intensities can be represented as a mixture of Beta densities, and develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage, using cDNA microarrays. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
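The link between Gamma-distributed intensities and Beta-distributed fractional intensities rests on a classical fact: if X ~ Gamma(a, θ) and Y ~ Gamma(b, θ) are independent, then X/(X + Y) ~ Beta(a, b). A short numerical check, with shape parameters that are arbitrary illustrations rather than values fitted to the D. radiodurans data:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = 3.0, 5.0                      # illustrative Gamma shape parameters
x = rng.gamma(a, 1.0, 100_000)       # e.g. one channel's intensities
y = rng.gamma(b, 1.0, 100_000)       # e.g. the other channel's intensities
frac = x / (x + y)                   # fractional intensity

# Beta(a, b) has mean a/(a+b) and variance ab/((a+b)^2 (a+b+1)),
# so frac should match those moments closely
beta_mean = a / (a + b)
beta_var = a * b / ((a + b) ** 2 * (a + b + 1))
```

In the paper's setting the fractional intensities are a mixture of such Beta densities, one component per expression regime, rather than a single Beta.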
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
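As a baseline for comparison, univariate quantile mapping corrects one variable at a time. A minimal empirical sketch follows, with synthetic observed and modelled series and no correction of multivariate dependence:

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Empirical univariate quantile mapping: send each future model value
    through the historical model CDF onto the observed quantiles."""
    q = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    return np.quantile(obs, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(6)
obs = rng.normal(10.0, 2.0, 5000)          # observations, historical period
model_hist = rng.normal(12.0, 3.0, 5000)   # biased climate model, historical
model_fut = rng.normal(12.5, 3.0, 5000)    # model projection (slightly warmer)
corrected = quantile_map(model_hist, obs, model_fut)
```

MBCn generalizes this idea, iterating random rotations of the variable space with univariate quantile mapping so that the full multivariate distribution, not just each margin, is transferred to the model output.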
Interpretations of probability
Khrennikov, Andrei
2009-01-01
This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2012-01-01
to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known the resulting...... uncertainty can be calculated. The possibility approach is particular well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... by probability distributions is readily done by means of Monte Carlo simulation. Calculation of non-monotonic functions of possibility distributions is done within the theoretical framework of fuzzy intervals, but straight forward application of fuzzy arithmetic in general results in overestimation of interval...
A new level set model for multimaterial flows
Energy Technology Data Exchange (ETDEWEB)
Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of AerospaceEngineering
2014-01-08
We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
Data Set for Empirical Validation of Double Skin Facade Model
DEFF Research Database (Denmark)
Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per
2008-01-01
the model is empirically validated and its limitations for DSF modeling are identified. Correspondingly, the existence and availability of the experimental data is very essential. Three sets of accurate empirical data for validation of DSF modeling with building simulation software were produced within...
Modeling the probability distribution of positional errors incurred by residential address geocoding
Directory of Open Access Journals (Sweden)
Mazumdar Soumya
2007-01-01
Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers also occurred for the other two types of geocoding errors but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
Fuzzy GML Modeling Based on Vague Soft Sets
Directory of Open Access Journals (Sweden)
Bo Wei
2017-01-01
Full Text Available The Open Geospatial Consortium (OGC) Geography Markup Language (GML) explicitly represents geographical spatial knowledge in text mode. All kinds of fuzzy problems will inevitably be encountered in spatial knowledge expression, and for expressions in text mode this fuzziness is broader, so describing and representing fuzziness in GML seems necessary. Three kinds of fuzziness in GML can be found: element fuzziness, chain fuzziness, and attribute fuzziness. Both element fuzziness and chain fuzziness reflect fuzziness between GML elements, so the representation of chain fuzziness can be replaced by the representation of element fuzziness in GML. On the basis of vague soft set theory, two kinds of modeling, vague soft set GML Document Type Definition (DTD) modeling and vague soft set GML schema modeling, are proposed for fuzzy modeling in GML DTD and GML schema, respectively. Five elements or pairs, associated with vague soft sets, are introduced. Then, the DTDs and the schemas of the five elements are correspondingly designed and presented according to their different chains and different fuzzy data types. While the introduction of the five elements or pairs is the basis of vague soft set GML modeling, the corresponding DTD and schema modifications are key for implementation of the modeling. The establishment of vague soft set GML enables GML to represent fuzziness and addresses the lack of fuzzy information expression in GML.
Alternative probability theories for cognitive psychology.
Narens, Louis
2014-01-01
Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.
An intelligent diagnosis model based on rough set theory
Li, Ze; Huang, Hong-Xing; Zheng, Ye-Lu; Wang, Zhou-Yuan
2013-03-01
Along with the popularity of computers and the rapid development of information technology, how to increase the accuracy of agricultural diagnosis has become a difficult problem in popularizing agricultural expert systems. Analyzing existing research and building on the knowledge acquisition techniques of rough set theory, we put forward an intelligent diagnosis model for large sample data. We extract a rough set decision table from the sample properties, use the decision table to categorize the inference relations, and acquire property rules related to inference diagnosis; intelligent diagnosis is then realized by means of a rough set knowledge reasoning algorithm. Finally, we validate this diagnosis model by experiments. Introducing rough set theory provides an effective diagnosis model for agricultural expert systems handling large sample data.
Robust non-rigid point set registration using student's-t mixture model.
Directory of Open Access Journals (Sweden)
Zhiyong Zhou
Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
M. R. Ghasemi; R. Ghiasi; H. Varaee
2017-01-01
Model updating methods have received increasing attention in damage detection of structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that ...
Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.
Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H
2016-01-01
Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%; for the propensity score model, it was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates fell below the nominal level at events per coefficient ≤5; for the disease risk score, inverse probability weighting, and propensity score, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
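The stabilized inverse probability weighting estimator referred to above can be sketched on simulated data. Unlike the misspecified settings of the study, this toy example assumes a single confounder and a known treatment model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)                     # confounder
ps = 1 / (1 + np.exp(-x))                  # true propensity score P(T=1 | x)
t = rng.random(n) < ps                     # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)       # outcome; true treatment effect = 2

# stabilized weights: marginal treatment probability over the conditional one,
# which keeps the weights centered near 1 and tames extreme values
sw = np.where(t, t.mean() / ps, (1 - t.mean()) / (1 - ps))
effect = (np.average(y[t], weights=sw[t]) -
          np.average(y[~t], weights=sw[~t]))
```

The unweighted difference in means is confounded by x, while the stabilized-weight contrast recovers the true effect of 2; in small samples with rare events, however, a few extreme propensity scores can still destabilize the estimate, which is the failure mode the abstract reports.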
Contributions to quantum probability
Energy Technology Data Exchange (ETDEWEB)
Fritz, Tobias
2010-06-25
Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome
Rasanen, Okko
2011-01-01
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this…
2013-01-01
Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability of establishing bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (set differences <0.3 log units in RMSE and <0.7 in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-Scales (3) combined with an average Z-Scale value for each target, while ProtFP (PCA8), ST-Scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information leads to small but consistent improvements in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that
International Nuclear Information System (INIS)
Jung, W.D.; Kim, T.W.; Park, C.K.
1991-01-01
This paper presents an integrated approach to the prediction of human error probabilities with a computer program, HREP (Human Reliability Evaluation Program). HREP was developed to provide simplicity in Human Reliability Analysis (HRA) and consistency in the obtained results. The basic assumption made in developing HREP is that human behaviors can be quantified in two separate steps: one is the diagnosis error evaluation step, and the other is the response error evaluation step. HREP integrates the Human Cognitive Reliability (HCR) model and the HRA Event Tree technique. The former corresponds to the Diagnosis model, and the latter to the Response model. HREP consists of HREP-IN and HREP-MAIN. HREP-IN is used to generate input files. HREP-MAIN is used to evaluate selected human errors in a given input file. HREP-MAIN is divided into three subsections: the diagnosis evaluation step, the subaction evaluation step and the modification step. The final modification step takes dependency and/or recovery factors into consideration. (author)
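The two-step quantification that HREP assumes can be sketched in a few lines. The probabilities below are invented for illustration only, not HCR or HRA Event Tree values, and independence of the two steps is an assumption of the sketch:

```python
# Sketch of HREP's basic assumption (illustrative numbers only): the
# total human error probability combines a diagnosis failure and a
# response (execution) failure, here treated as independent.
def total_hep(p_diagnosis_error, p_response_error):
    # The operator succeeds only if both steps succeed.
    return 1.0 - (1.0 - p_diagnosis_error) * (1.0 - p_response_error)

hep = total_hep(p_diagnosis_error=0.01, p_response_error=0.003)
print(round(hep, 5))  # 0.01297
```

The final modification step (dependency and recovery factors) would then adjust such a baseline value, which this sketch does not attempt.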
Directory of Open Access Journals (Sweden)
Gholamreza Norouzi
2015-01-01
Full Text Available In the project management context, time management is one of the most important factors affecting project success. This paper proposes a new method to solve research project scheduling problems (RPSP) containing Fuzzy Graphical Evaluation and Review Technique (FGERT) networks. Through the deliverables of this method, a proper estimation of project completion time (PCT) and success probability can be achieved. So algorithms were developed to cover all features of the problem based on three main parameters “duration, occurrence probability, and success probability.” These developed algorithms were known as PR-FGERT (Parallel and Reversible-Fuzzy GERT networks). The main provided framework includes simplifying the network of project and taking regular steps to determine PCT and success probability. Simplifications include (1) equivalent making of parallel and series branches in fuzzy network considering the concepts of probabilistic nodes, (2) equivalent making of delay or reversible-to-itself branches and impact of changing the parameters of time and probability based on removing related branches, (3) equivalent making of simple and complex loops, and (4) an algorithm that was provided to resolve no-loop fuzzy network, after equivalent making. Finally, the performance of models was compared with existing methods. The results showed proper and real performance of models in comparison with existing methods.
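The "equivalent making" of series and parallel branches rests on standard probability combination rules. A crisp (non-fuzzy) sketch follows; the paper works with fuzzy parameters, and the durations, probabilities, and the shortest-branch duration rule here are illustrative assumptions:

```python
# Crisp sketch of series/parallel branch reduction. Each branch is a
# (duration, success_probability) pair; independence is assumed.
def reduce_series(branches):
    # All branches must succeed in sequence: durations add,
    # success probabilities multiply.
    total_t = sum(t for t, _ in branches)
    total_p = 1.0
    for _, p in branches:
        total_p *= p
    return total_t, total_p

def reduce_parallel(branches):
    # The subnetwork succeeds if at least one branch succeeds;
    # duration taken as the shortest branch (an optimistic assumption).
    fail = 1.0
    for _, p in branches:
        fail *= 1.0 - p
    return min(t for t, _ in branches), 1.0 - fail

s = reduce_series([(2.0, 0.9), (3.0, 0.8)])
p = reduce_parallel([(2.0, 0.9), (3.0, 0.8)])
```

Loops and reversible-to-itself branches need the further reductions the paper describes; they do not collapse to these two rules.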
Sanson, B. J.; Lijmer, J. G.; Mac Gillavry, M. R.; Turkstra, F.; Prins, M. H.; Büller, H. R.
2000-01-01
Recent studies have suggested that both the subjective judgement of a physician and standardized clinical models can be helpful in the estimation of the probability of the disease in patients with suspected pulmonary embolism (PE). We performed a multi-center study in consecutive in- and outpatients
Staphylococcus aureus is a foodborne pathogen widespread in the environment and found in various food products. This pathogen can produce enterotoxins that cause illnesses in humans. The objectives of this study were to develop a probability model of S. aureus enterotoxin production as affected by w...
Probability model of solid to liquid-like transition of a fluid suspension after a shear flow onset
Czech Academy of Sciences Publication Activity Database
Nouar, C.; Říha, Pavel
2008-01-01
Roč. 34, č. 5 (2008), s. 477-483 ISSN 0301-9322 R&D Projects: GA AV ČR IAA200600803 Institutional research plan: CEZ:AV0Z20600510 Keywords : laminar suspension flow * liquid-liquid interface * probability model Subject RIV: BK - Fluid Dynamics Impact factor: 1.497, year: 2008
DEFF Research Database (Denmark)
Rønjom, Marianne F; Brink, Carsten; Bentzen, Søren M
2015-01-01
BACKGROUND: A normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism (RIHT) was previously derived in patients with squamous cell carcinoma of the head and neck (HNSCC) discerning thyroid volume (Vthyroid), mean thyroid dose (Dmean), and latency as predictive...
Path Loss, Shadow Fading, and Line-Of-Sight Probability Models for 5G Urban Macro-Cellular Scenarios
DEFF Research Database (Denmark)
Sun, Shu; Thomas, Timothy; Rappaport, Theodore S.
2015-01-01
This paper presents key parameters including the line-of-sight (LOS) probability, large-scale path loss, and shadow fading models for the design of future fifth generation (5G) wireless communication systems in urban macro-cellular (UMa) scenarios, using the data obtained from propagation...
Generalized Probability-Probability Plots
Mushkudiani, N.A.; Einmahl, J.H.J.
2004-01-01
We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real-valued data. These plots, which are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P
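For reference, a classical one-sample P-P plot pairs the empirical CDF with a hypothesised CDF at the sorted data points; a minimal stdlib-only sketch (the generalized, interval-indexed construction of the paper is not reproduced here):

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pp_points(sample, cdf):
    # Classical P-P plot coordinates: empirical CDF value (i+1)/n
    # against the hypothesised CDF at the i-th order statistic.
    xs = sorted(sample)
    n = len(xs)
    return [((i + 1) / n, cdf(x)) for i, x in enumerate(xs)]

pts = pp_points([-1.2, -0.3, 0.1, 0.8, 1.5], norm_cdf)
```

If the hypothesised distribution is correct, the points scatter around the diagonal of the unit square.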
Conceptual and Statistical Issues Regarding the Probability of Default and Modeling Default Risk
Directory of Open Access Journals (Sweden)
Emilia TITAN
2011-03-01
Full Text Available In today’s rapidly evolving financial markets, risk management offers different techniques in order to implement an efficient system against market risk. Probability of default (PD) is an essential part of business intelligence and customer relation management systems in the financial institutions. Recent studies indicate that underestimating this important component, and also the loss given default (LGD), might threaten the stability and smooth running of the financial markets. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default is more valuable than the standard binary classification: credible or non-credible clients. The Basle II Accord recognizes the methods of reducing credit risk and also PD and LGD as important components of the advanced Internal Rating Based (IRB) approach.
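The roles of PD and LGD in the IRB approach come together in the standard expected-loss identity; a one-line illustration with invented portfolio figures:

```python
def expected_loss(pd, lgd, ead):
    # Basel-style expected loss: probability of default x loss given
    # default x exposure at default.
    return pd * lgd * ead

# Invented figures: 2% PD, 45% LGD, 1,000,000 exposure at default.
el = expected_loss(pd=0.02, lgd=0.45, ead=1_000_000)
```

This illustrates why underestimating either PD or LGD directly understates expected loss, the point the abstract makes.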
Probability, Nondeterminism and Concurrency
DEFF Research Database (Denmark)
Varacca, Daniele
Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...
International Nuclear Information System (INIS)
Pfeifle, T.W.; Mellegard, K.D.; Munson, D.E.
1992-10-01
The modified Munson-Dawson (M-D) constitutive model that describes the creep behavior of salt will be used in performance assessment calculations to assess compliance of the Waste Isolation Pilot Plant (WIPP) facility with requirements governing the disposal of nuclear waste. One of these standards requires that the uncertainty of future states of the system, material model parameters, and data be addressed in the performance assessment models. This paper presents a method in which measurement uncertainty and the inherent variability of the material are characterized by treating the M-D model parameters as random variables. The random variables can be described by appropriate probability distribution functions which then can be used in Monte Carlo or structural reliability analyses. Estimates of three random variables in the M-D model were obtained by fitting a scalar form of the model to triaxial compression creep data generated from tests of WIPP salt. Candidate probability distribution functions for each of the variables were then fitted to the estimates and their relative goodness-of-fit tested using the Kolmogorov-Smirnov statistic. A sophisticated statistical software package obtained from BMDP Statistical Software, Inc. was used in the M-D model fitting. A separate software package, STATGRAPHICS, was used in fitting the candidate probability distribution functions to estimates of the variables. Skewed distributions, i.e., lognormal and Weibull, were found to be appropriate for the random variables analyzed
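The distribution-fitting step described above (fit candidate distributions to parameter estimates, compare via the Kolmogorov-Smirnov statistic) can be sketched without the named commercial packages. The lognormal fit below uses simple log-moment estimates on synthetic data, purely as an assumed stand-in for the M-D parameter estimates:

```python
import math
import random

def ks_statistic(sample, cdf):
    # Kolmogorov-Smirnov statistic: largest gap between the empirical
    # CDF and the fitted CDF over the sorted sample.
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def fit_lognormal(sample):
    # Moment estimates on the log scale give a candidate lognormal CDF.
    logs = [math.log(x) for x in sample]
    mu = sum(logs) / len(logs)
    s = math.sqrt(sum((v - mu) ** 2 for v in logs) / (len(logs) - 1))
    return lambda x: 0.5 * (1.0 + math.erf((math.log(x) - mu) / (s * math.sqrt(2.0))))

random.seed(1)
sample = [math.exp(random.gauss(0.0, 0.5)) for _ in range(200)]
d = ks_statistic(sample, fit_lognormal(sample))
```

A small statistic indicates a good candidate; note that critical values shrink when parameters are estimated from the same data, a refinement omitted here.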
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
DEFF Research Database (Denmark)
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think....... By doing so, we will obtain a deeper insight into how events involving large values of sums of heavy-tailed random variables are likely to occur....
Teachers' Understandings of Probability
Liu, Yan; Thompson, Patrick
2007-01-01
Probability is an important idea with a remarkably wide range of applications. However, psychological and instructional studies conducted in the last two decades have consistently documented poor understanding of probability among different populations across different settings. The purpose of this study is to develop a theoretical framework for…
Optimisation-Based Solution Methods for Set Partitioning Models
DEFF Research Database (Denmark)
Rasmussen, Matias Sevel
The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
Empirical validation data sets for double skin facade models
DEFF Research Database (Denmark)
Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per
2008-01-01
During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance and this in turn requires access to high quality experimental data. Three sets of accurate empirical data for validation of DSF modeling with building simulation software were produced within the International Energy Agency (IEA) SHC Task 34 / ECBCS Annex 43. This paper describes the full-scale outdoor experimental test facility, the experimental set-up and the measurements procedure...
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
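The weighted additive aggregation behind the decision scores is a weight-by-consequence sum. A sketch with invented objective weights and consequence estimates (not the workshop's elicited values):

```python
# Weighted additive model: each management alternative scores the
# weighted sum of its estimated consequences per objective.
# Weights and consequence estimates below are illustrative only.
weights = {"ecological": 0.5, "social": 0.3, "economic": 0.2}

def decision_score(consequences):
    return sum(weights[k] * consequences[k] for k in weights)

do_nothing = {"ecological": 0.2, "social": 0.9, "economic": 0.9}
close_access = {"ecological": 0.9, "social": 0.3, "economic": 0.4}
scores = {"do_nothing": decision_score(do_nothing),
          "close_access": decision_score(close_access)}
```

Comparing such scores across the ecological scenarios is what indicates where a management threshold should sit.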
Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E
2014-03-01
To examine the accuracy of the original Mortality Probability Admission Model III, ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy for each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior with area under receiver operating curve (0.88) compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that for Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. Acute
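The accuracy metric used in the comparison, the Brier score, is straightforward to reproduce; a sketch on invented day-1 predictions (the study reports an adjusted variant, which is not replicated here):

```python
def brier_score(predicted_probs, outcomes):
    # Mean squared difference between predicted mortality probability
    # and the observed 0/1 outcome; lower means more accurate.
    n = len(predicted_probs)
    return sum((p - y) ** 2 for p, y in zip(predicted_probs, outcomes)) / n

# Invented day-1 predictions and hospital outcomes (1 = died):
preds = [0.10, 0.40, 0.80, 0.05]
obs = [0, 1, 1, 0]
bs = brier_score(preds, obs)
```

Discrimination (area under the ROC curve) and calibration (Hosmer-Lemeshow) answer different questions, which is why the study reports all three.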
Anuwar, Muhammad Hafidz; Jaffar, Maheran Mohd
2017-08-01
This paper provides an overview for the assessment of credit risk specific to banks. In finance, risk is a term that reflects the potential of financial loss. The risk of default on a loan may increase when a company does not make a payment on that loan when it comes due. Hence, this framework analyses the KMV-Merton model to estimate the probabilities of default for Malaysian listed companies. In this way, banks can verify the ability of companies to meet their loan commitments in order to overcome bad investments and financial losses. This model has been applied to all Malaysian companies listed on Bursa Malaysia to estimate their credit default probabilities, which were then compared with the ratings given by the rating agency RAM Holdings Berhad to check conformity with reality. The significance of this study is that a credit risk grade is proposed for Malaysian listed companies by using the KMV-Merton model.
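At its core, the Merton model underlying the KMV approach maps a firm's asset value, debt level, drift, and asset volatility to a distance to default and then a default probability. A minimal sketch with invented balance-sheet figures; the full KMV procedure also backs asset value and volatility out of equity data, which is omitted here:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_pd(assets, debt, drift, asset_vol, horizon=1.0):
    # Distance to default: how many standard deviations the expected
    # log asset value lies above the default point (the debt level).
    dd = (math.log(assets / debt) + (drift - 0.5 * asset_vol ** 2) * horizon) \
         / (asset_vol * math.sqrt(horizon))
    return norm_cdf(-dd)

# Invented figures: assets 150M, debt 100M, 8% drift, 25% asset volatility.
pd_1y = merton_pd(assets=150e6, debt=100e6, drift=0.08, asset_vol=0.25)
```

Binning such probabilities into grades is then a policy choice, which is where a credit risk grade like the one proposed comes in.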
Directory of Open Access Journals (Sweden)
Dekkers Jack CM
2003-11-01
Full Text Available An increased availability of genotypes at marker loci has prompted the development of models that include the effect of individual genes. Selection based on these models is known as marker-assisted selection (MAS). MAS is known to be efficient especially for traits that have low heritability and non-additive gene action. BLUP methodology under non-additive gene action is not feasible for large inbred or crossbred pedigrees. It is easy to incorporate non-additive gene action in a finite locus model. Under such a model, the unobservable genotypic values can be predicted using the conditional mean of the genotypic values given the data. To compute this conditional mean, conditional genotype probabilities must be computed. In this study these probabilities were computed using iterative peeling, and three Markov chain Monte Carlo (MCMC) methods – scalar Gibbs, blocking Gibbs, and a sampler that combines the Elston Stewart algorithm with iterative peeling (ESIP). The performance of these four methods was assessed using simulated data. For pedigrees with loops, iterative peeling fails to provide accurate genotype probability estimates for some pedigree members. Also, computing time is exponentially related to the number of loci in the model. For MCMC methods, a linear relationship can be maintained by sampling genotypes one locus at a time. Of the three MCMC methods considered, ESIP performed the best while scalar Gibbs performed the worst.
Fate modelling of chemical compounds with incomplete data sets
DEFF Research Database (Denmark)
Birkved, Morten; Heijungs, Reinout
2011-01-01
Impact assessment of chemical compounds in Life Cycle Impact Assessment (LCIA) and Environmental Risk Assessment (ERA) requires a vast amount of data on the properties of the chemical compounds being assessed. These data are used in multi-media fate and exposure models, to calculate risk levels... in an approximate way. The idea is that not all data needed in a multi-media fate and exposure model are completely independent and equally important, but that there are physical-chemical and biological relationships between sets of chemical properties. A statistical model is constructed to underpin this assumption..., and to provide simplified proxies for the more complicated “real” model relationships. In the presented study two approaches for the reduction of the data demand associated with characterization of chemical emissions in USEtox™ are tested: The first approach yields a simplified set of mode of entry specific meta...
Time domain series system definition and gear set reliability modeling
International Nuclear Information System (INIS)
Xie, Liyang; Wu, Ningxiang; Qian, Wenxue
2016-01-01
Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
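The traditional series-system baseline that the paper argues is inadequate for gear sets is just the product rule over independent components; for contrast:

```python
def traditional_series_reliability(component_reliabilities):
    # Classic chain-like series system: it survives only if every
    # component survives, with independent failures assumed.
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

# A 20-tooth gear treated (naively) as 20 identical components:
naive = traditional_series_reliability([0.99] * 20)
# A time-domain model instead weights each tooth pair by its actual
# load exposure per revolution and allows dependent failure events,
# which this baseline cannot express.
```

The numbers are illustrative only; the point is that the product rule ignores the time-dependent configuration and load-sharing that the paper's models capture.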
Characterizations of identified sets delivered by structural econometric models
Chesher, Andrew; Rosen, Adam M.
2016-01-01
This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous and/or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models generalize t...
Directory of Open Access Journals (Sweden)
Hong-fu Guo
2017-01-01
Full Text Available Particle size and distribution play an important role in ignition. The size and distribution of the cyclotetramethylene tetranitramine (HMX) particles were investigated by Laser Particle Size Analyzer Malvern MS2000 before experiment and calculation. The mean size of particles is 161 μm. Minimum and maximum sizes are 80 μm and 263 μm, respectively. The distribution function is like a quadratic function. Based on the distribution of micron scale explosive particles, a microscopic model is established to describe the process of ignition of HMX particles under drop weight. Both the temperature of contact zones and the ignition probability of the powder explosive can be predicted. The calculated results show that the temperature of the contact zones between the particles and the drop weight surface increases faster and higher than that of the contact zones between two neighboring particles. For HMX particles, with all other conditions being kept constant, if the drop height is less than 0.1 m, ignition probability will be close to 0. When the drop heights are 0.2 m and 0.3 m, the ignition probability is 0.27 and 0.64, respectively, whereas when the drop height is more than 0.4 m, ignition probability will be close to 0.82. In comparison with experimental results, the two curves are reasonably close to each other, which indicates that the model is reasonable.
S Varadhan, S R
2001-01-01
This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent rando
A fuzzy set preference model for market share analysis
Turksen, I. B.; Willson, Ian A.
1992-01-01
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share
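A fuzzy set preference model of the kind described represents linguistic terms as membership functions and combines them linearly at the individual level. A minimal sketch; the terms, breakpoints, and weights below are invented for illustration:

```python
def triangular(x, a, b, c):
    # Triangular fuzzy membership: rises from a to a peak at b,
    # falls back to zero at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Invented linguistic terms for two product attributes:
cheap = triangular(12.0, a=0.0, b=10.0, c=25.0)   # price of 12 vs "cheap"
light = triangular(1.4, a=0.5, b=1.0, c=3.0)      # weight of 1.4 vs "light"

# Individual-level linear combination with consumer-specific weights:
preference = 0.6 * cheap + 0.4 * light
```

Because only ordinal judgments are needed to place the membership functions, such a model sidesteps the ratio-scaled ratings conventional conjoint analysis requires.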
OL-DEC-MDP Model for Multiagent Online Scheduling with a Time-Dependent Probability of Success
Directory of Open Access Journals (Sweden)
Cheng Zhu
2014-01-01
Full Text Available Focusing on the on-line multiagent scheduling problem, this paper considers the time-dependent probability of success and processing duration and proposes an OL-DEC-MDP (opportunity loss-decentralized Markov Decision Processes) model to include opportunity loss in scheduling decisions and improve overall performance. The success probability of job processing, as well as the process duration, is dependent on the time at which the processing is started. The probability of completing the assigned job by an agent would be higher when the process is started earlier, but the opportunity loss could also be high due to the longer engaging duration. As a result, the OL-DEC-MDP model introduces a reward function considering the opportunity loss, which is estimated based on the prediction of the upcoming jobs by a sampling method on the job arrival. Heuristic strategies are introduced in computing the best starting time for an incoming job by each agent, and an incoming job will always be scheduled to the agent with the highest reward among all agents with their best starting policies. The simulation experiments show that the OL-DEC-MDP model improves the overall scheduling performance compared with models not considering opportunity loss in heavy-load environments.
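The trade-off the model captures — starting earlier raises the success probability but lengthens the engagement, and with it the value of forgone upcoming jobs — can be sketched with invented functional forms. The paper estimates opportunity loss by sampling predicted job arrivals; the closed-form shapes below are assumptions for illustration:

```python
# Invented time-dependent shapes: success probability is higher for an
# early start, but an early start also means a longer engagement, so
# more upcoming jobs are forgone (higher opportunity loss).
def expected_utility(start_t, reward=10.0, loss_rate=1.0):
    p_success = 1.0 - 0.009 * start_t ** 2   # assumed: decays with start time
    duration = 6.0 - 0.5 * start_t           # assumed: longer if started early
    opportunity_loss = loss_rate * duration  # value of jobs forgone meanwhile
    return p_success * reward - opportunity_loss

# The agent picks the start time that balances the two effects:
best_start = max(range(10), key=expected_utility)
```

With these shapes the optimum is interior (neither immediate nor maximally delayed), which is exactly the behaviour a reward function without opportunity loss cannot produce.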
Statistical model for degraded DNA samples and adjusted probabilities for allelic drop-out
DEFF Research Database (Denmark)
Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt
2012-01-01
DNA samples found at a scene of crime or obtained from the debris of a mass disaster accident are often subject to degradation. When using the STR DNA technology, the DNA profile is observed via a so-called electropherogram (EPG), where the alleles are identified as signal peaks above a certain level or above a signal to noise threshold. Degradation implies that these peak intensities decrease in strength for longer short tandem repeat (STR) sequences. Consequently, long STR loci may fail to produce peak heights above the limit of detection, resulting in allelic or locus drop-outs. In this paper, we present a method for measuring the degree of degradation of a sample and demonstrate how to incorporate this in estimating the probability of allelic drop-out. This is done by extending an existing method derived for non-degraded samples. The performance of the methodology is evaluated using...
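The degradation effect described — peak heights that decay with STR fragment length, so long loci risk falling below the limit of detection — can be sketched under an exponential-decay assumption. All parameter values and the lognormal noise model are invented for illustration, not the paper's estimates:

```python
import math

def expected_peak_height(length_bp, h0=2000.0, decay=0.01):
    # Assumed exponential decay of expected peak height (RFU) with
    # fragment length in a degraded sample.
    return h0 * math.exp(-decay * length_bp)

def dropout_probability(length_bp, lod=50.0, sigma=0.3):
    # Assuming lognormal variation around the expected height, drop-out
    # occurs when the realised peak falls below the limit of detection.
    mu = math.log(expected_peak_height(length_bp))
    z = (math.log(lod) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_short, p_long = dropout_probability(100), dropout_probability(500)
```

The sketch reproduces the qualitative point: drop-out probability rises sharply with fragment length once expected peaks approach the detection limit.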
Nguyen, Truong-Huy; El Outayek, Sarah; Lim, Sun Hee; Nguyen, Van-Thanh-Van
2017-10-01
Many probability distributions have been developed to model annual maximum rainfall series (AMS). However, there is no general agreement as to which distribution should be used, due to the lack of a suitable evaluation method. This paper therefore presents a general procedure for systematically assessing the performance of ten commonly used probability distributions in rainfall frequency analyses, based on their descriptive as well as predictive abilities. The assessment procedure relies on an extensive set of graphical and numerical performance criteria to identify the most suitable models that could provide the most accurate and most robust extreme rainfall estimates. The proposed systematic assessment approach has been shown to be more efficient and more robust than the traditional model selection method based on only limited goodness-of-fit criteria. To test the feasibility of the proposed procedure, an illustrative application was carried out using 5-min, 1-h, and 24-h annual maximum rainfall data from a network of 21 raingages located in the Ontario region of Canada. Results indicated that the GEV, GNO, and PE3 models were the best for describing the distribution of daily and sub-daily annual maximum rainfalls in this region. The GEV distribution, however, was preferred to the GNO and PE3 because it has a more solid theoretical basis for representing the distribution of extreme random variables.
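A fitted GEV distribution is typically turned into T-year return levels via its quantile function. The following is our own illustration of that standard formula with invented parameters, not code or fitted values from the study:

```python
# Hedged sketch: GEV return level
# x_T = mu + (sigma / xi) * ((-ln(1 - 1/T))**(-xi) - 1) for xi != 0,
# falling back to the Gumbel limit as xi -> 0.
import math

def gev_return_level(T, mu, sigma, xi):
    """Rainfall depth expected once every T years under a GEV fit."""
    y = -math.log(1.0 - 1.0 / T)          # reduced variate
    if abs(xi) < 1e-9:                    # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Illustrative parameters for a 24-h annual maximum series (invented)
levels = [gev_return_level(T, mu=50.0, sigma=15.0, xi=0.1)
          for T in (2, 10, 100)]
print([round(x, 1) for x in levels])
```

Return levels grow monotonically with the return period T, which is the basic sanity check on any such fit.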
On new cautious structural reliability models in the framework of imprecise probabilities
DEFF Research Database (Denmark)
Utkin, Lev; Kozine, Igor
2010-01-01
New imprecise structural reliability models are described in this paper. They are developed based on the imprecise Bayesian inference and are imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability ...
Probabilities and Predictions: Modeling the Development of Scientific Problem-Solving Skills
Stevens, Ron; Johnson, David F.; Soller, Amy
2005-01-01
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations, the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative…
Data Set for Empirical Validation of Double Skin Facade Model
DEFF Research Database (Denmark)
Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per
2008-01-01
During recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of DSF performance can be developed or pointed out. This is, however, not possible until the model is empirically validated and its limitations for DSF modelling are identified. Three sets of accurate empirical data for validation of DSF modelling with building simulation software were produced within the International Energy Agency (IEA) Task 34 / Annex 43. This paper describes the full-scale outdoor experimental test facility 'the Cube', where the experiments were conducted, the experimental set-up and the measurement procedure for the data sets. The empirical data are composed for the key functioning modes…
Directory of Open Access Journals (Sweden)
Keqin Yan
2017-01-01
This chapter presents a reliability study for an offshore jacket structure, with emphasis on the features of nonconventional modeling. Firstly, a random set model is formulated for modeling the random waves at an ocean site. Then, a jacket structure is investigated in a pushover analysis to identify the critical wave direction and key structural elements, based on the ultimate base shear strength. The selected probabilistic models are adopted for the important structural members, and the wave direction is specified in the weakest direction of the structure for a conservative safety analysis. The wave height model is processed in a P-box format when it is used in the numerical analysis. The models are applied to find the bounds of the failure probabilities for the jacket structure. The propagation of this wave model to the uncertainty in results is investigated in both an interval analysis and a Monte Carlo simulation. The results are compared in the context of information content and numerical accuracy. Further, the failure probability bounds are compared with the conventional probabilistic approach.
Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark
2010-05-01
Characterizing the spatial pattern of large fire occurrence and severity is an important feature of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes are key components for quantitative risk assessment and for prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g., Portugal, 2003 and 2005; Italy and Greece, 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard and allow fine-scale modeling of wildfire severity and probability. Several new applications have resulted from the development of a minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High-performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire-prone areas of the Mediterranean Basin: the islands of Sardinia (Italy), Sicily (Italy) and Corsica (France). We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate conditions (97th and 70th percentile, July and August weather, 1995-2007). We used both random ignition locations…
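The minimum-travel-time idea can be sketched in miniature as a shortest-path computation over a lattice. This is our own toy illustration (Dijkstra on a uniform grid), not Finney's implementation; the cell-crossing cost model is an assumption:

```python
# Sketch of the MTT concept: fire arrival time at each cell is the
# shortest path, in travel time, from the ignition through a 2D lattice.
import heapq

def arrival_times(speed, start):
    """speed[i][j]: spread rate in cell (i, j); returns arrival times."""
    n, m = len(speed), len(speed[0])
    t = [[float("inf")] * m for _ in range(n)]
    t[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                # assumed cost: one cell crossed at the mean of the two rates
                nd = d + 2.0 / (speed[i][j] + speed[a][b])
                if nd < t[a][b]:
                    t[a][b] = nd
                    heapq.heappush(pq, (nd, (a, b)))
    return t

speed = [[1.0] * 5 for _ in range(5)]
t = arrival_times(speed, (0, 0))
print(t[4][4])   # 8 unit-speed cell crossings -> 8.0
```

Running many such computations from random ignition points, and counting how often each cell's arrival time falls within a burn period, is the essence of a burn probability map.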
Hsieh, Chih-Sheng; Lee, Lung fei
2017-01-01
In this paper, we model network formation and network interactions under a unified framework. The key feature of our model is to allow individuals to respond to incentives stemming from interaction benefits on certain activities when they choose friends (network links), while capturing homophily in terms of unobserved characteristic variables in network formation and activities. There are two advantages of this modeling approach: first, one can evaluate whether incentives from certain interac...
2017-09-01
Terrain-Based Motion Estimation Model to Predict the Position of a Moving Target to Enhance Weapon Probability of Kill, by Chin Beng Ang. The goal of this thesis is to develop a realistic model characterizing, at the operational level, the most probable paths of interest for specific …
Modeling of Kidney Hemodynamics: Probability-Based Topology of an Arterial Network
DEFF Research Database (Denmark)
Postnov, Dmitry D; Marsh, Donald J; Postnov, Dmitry E
2016-01-01
…CT data, we develop an algorithm for generating the renal arterial network. We then introduce a mathematical model describing blood flow dynamics and nephron-to-nephron interaction in the network. The model includes an implementation of electrical signal propagation along a vascular wall. Simulation…
The probability of connectivity in a hyperbolic model of complex networks
Bode, Michel; Fountoulakis, Nikolaos; Müller, Tobias
2016-01-01
We consider a model for complex networks that was introduced by Krioukov et al. (Phys Rev E 82 (2010) 036106). In this model, N points are chosen randomly inside a disk on the hyperbolic plane according to a distorted version of the uniform distribution and any two of them are joined by an edge if
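The construction in the Krioukov et al. model can be sketched as follows. This is our own illustration with invented parameter values: points are placed quasi-uniformly in a hyperbolic disk of radius R and joined when their hyperbolic distance falls below R (the threshold rule the truncated sentence above refers to):

```python
# Hedged sketch of a hyperbolic random graph (parameters illustrative).
import math, random

random.seed(1)
N, R, alpha = 200, 12.0, 1.0

def sample_point():
    """Quasi-uniform point in the hyperbolic disk: density ~ sinh(alpha*r)."""
    theta = random.uniform(0, 2 * math.pi)
    u = random.random()
    r = math.acosh(1 + u * (math.cosh(alpha * R) - 1)) / alpha
    return r, theta

def hyp_dist(p, q):
    """Hyperbolic distance via the hyperbolic law of cosines."""
    (r1, t1), (r2, t2) = p, q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))
    x = (math.cosh(r1) * math.cosh(r2) -
         math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(x, 1.0))   # guard against rounding below 1

points = [sample_point() for _ in range(N)]
edges = {(i, j) for i in range(N) for j in range(i + 1, N)
         if hyp_dist(points[i], points[j]) < R}
print(len(edges))
```

The connectivity question studied in the paper concerns how the probability that such a graph is connected behaves as N grows.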
Interactive Modelling of Shapes Using the Level-Set Method
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Christensen, Niels Jørgen
2002-01-01
In this paper, we propose a technique for intuitive, interactive modelling of 3D shapes. The technique is based on the Level-Set Method (LSM), which has the virtue of easily handling changes to the topology of the represented solid. Furthermore, this method also leads to sculpting operations which are suitable for shape modelling. However, normally these would result in tools that would affect the entire model. To facilitate local changes to the model, we introduce a windowing scheme which constrains the LSM to affect only a small part of the model. The LSM-based sculpting tools have been incorporated in our sculpting system, which also includes facilities for volumetric CSG and several techniques for visualization.
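A minimal level-set evolution can be sketched in a few lines. This is our own first-order illustration, not the paper's sculpting system: a circle is the zero level set of a signed distance function phi, and moving the surface inward with speed F corresponds to phi_t = F * |grad phi| (phi grows, so the interior shrinks):

```python
# Hedged level-set sketch: shrink a circle under constant inward speed.
import math

N = 64          # grid resolution
h = 2.0 / N     # spacing on the domain [-1, 1]^2
F = 1.0         # constant inward speed
dt = 0.4 * h    # CFL-limited time step

# phi < 0 inside a circle of radius 0.6
phi = [[math.hypot(-1 + i * h, -1 + j * h) - 0.6 for j in range(N)]
       for i in range(N)]

def area(p):
    """Number of grid cells strictly inside the zero level set."""
    return sum(1 for row in p for v in row if v < 0)

a0 = area(phi)
for _ in range(20):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # one-sided differences; for a signed distance |grad phi| ~ 1
            dxm = (phi[i][j] - phi[i - 1][j]) / h
            dxp = (phi[i + 1][j] - phi[i][j]) / h
            dym = (phi[i][j] - phi[i][j - 1]) / h
            dyp = (phi[i][j + 1] - phi[i][j]) / h
            grad = math.sqrt(min(dxm, 0) ** 2 + max(dxp, 0) ** 2 +
                             min(dym, 0) ** 2 + max(dyp, 0) ** 2)
            new[i][j] = phi[i][j] + dt * F * grad
    phi = new

print(area(phi) < a0)   # the sculpted region has shrunk
```

Because the shape is stored implicitly, the same update handles topology changes (e.g., a region pinching into two pieces) with no special cases, which is the property the abstract highlights.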
Setting development goals using stochastic dynamical system models.
Ranganathan, Shyam; Nicolis, Stamatios C; Bali Swain, Ranjula; Sumpter, David J T
2017-01-01
The Millennium Development Goals (MDG) programme was an ambitious attempt to encourage a globalised solution to important but often-overlooked development problems. The programme led to wide-ranging development but it has also been criticised for unrealistic and arbitrary targets. In this paper, we show how country-specific development targets can be set using stochastic, dynamical system models built from historical data. In particular, we show that the MDG target of two-thirds reduction of child mortality from 1990 levels was infeasible for most countries, especially in sub-Saharan Africa. At the same time, the MDG targets were not ambitious enough for fast-developing countries such as Brazil and China. We suggest that model-based setting of country-specific targets is essential for the success of global development programmes such as the Sustainable Development Goals (SDG). This approach should provide clear, quantifiable targets for policymakers.
National Aeronautics and Space Administration — This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form...
A study of quantum mechanical probabilities in the classical Hodgkin-Huxley model.
Moradi, N; Scholkmann, F; Salari, V
2015-03-01
The Hodgkin-Huxley (HH) model is a powerful model to explain different aspects of spike generation in excitable cells. However, the HH model was proposed in 1952 when the real structure of the ion channel was unknown. It is now common knowledge that in many ion-channel proteins the flow of ions through the pore is governed by a gate, comprising a so-called "selectivity filter" inside the ion channel, which can be controlled by electrical interactions. The selectivity filter (SF) is believed to be responsible for the selection and fast conduction of particular ions across the membrane of an excitable cell. Other (generally larger) parts of the molecule such as the pore-domain gate control the access of ions to the channel protein. In fact, two types of gates are considered here for ion channels: the "external gate", which is the voltage sensitive gate, and the "internal gate" which is the selectivity filter gate (SFG). Some quantum effects are expected in the SFG due to its small dimensions, which may play an important role in the operation of an ion channel. Here, we examine parameters in a generalized model of HH to see whether any parameter affects the spike generation. Our results indicate that the previously suggested semi-quantum-classical equation proposed by Bernroider and Summhammer (BS) agrees strongly with the HH equation under different conditions and may even provide a better explanation in some cases. We conclude that the BS model can refine the classical HH model substantially.
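For reference, the classical HH model discussed above can be simulated in a few dozen lines. This sketch uses the standard 1952 parameters with forward-Euler integration; it is illustrative only and does not implement the generalized or semi-quantum-classical (BS) variants:

```python
# Minimal classical Hodgkin-Huxley simulation (standard parameters).
import math

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387            # mV

def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))   # steady-state gating at rest
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na) +
                 g_K * n**4 * (V - E_K) + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print(max(trace) > 0)   # suprathreshold current elicits a spike
```

The m, h and n variables are the classical voltage-gated activation/inactivation states; the quantum refinements discussed in the abstract concern the selectivity filter gate, which this classical sketch does not model.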
Methods of mathematical modeling using polynomials of algebra of sets
Kazanskiy, Alexandr; Kochetkov, Ivan
2018-03-01
The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that must be observed. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them using computer models.
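A small illustration of the idea (our own, not from the article): grid cells are elements, surveillance zones are sets, and set-algebra formulas pick out regions of interest such as overlaps between zones of different priority:

```python
# Overlapping zones expressed with set algebra (zone shapes invented).
zone_a = {(x, y) for x in range(0, 6) for y in range(0, 6)}    # low priority
zone_b = {(x, y) for x in range(4, 10) for y in range(4, 10)}  # high priority

covered = zone_a | zone_b          # union: every observed cell
overlap = zone_a & zone_b          # intersection: doubly covered cells
only_high = zone_b - zone_a        # difference: exclusively high priority

print(len(overlap))   # 2x2 overlap -> 4 cells
```

Each such expression is a "polynomial" over the sets in the article's sense, and can be evaluated mechanically once the zones are encoded.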
Data Set for Empirical Validation of Double Skin Facade Model
DEFF Research Database (Denmark)
Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per
2008-01-01
During recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of DSF performance can be developed or pointed out. This is, however, not possible until the model is empirically validated and its limitations for DSF modelling are identified. Correspondingly, the existence and availability of experimental data are essential. Three sets of accurate empirical data for validation of DSF modelling with building simulation software were produced within the International Energy Agency (IEA) Task 34 / Annex 43, covering the key functioning modes of a double skin facade: 1. External air curtain mode, the naturally ventilated DSF cavity with the top and bottom openings open to the outdoor; 2. Thermal insulation mode, with all of the DSF openings closed; 3. Preheating mode, with the bottom DSF openings open to the outdoor and the top openings open…
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
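The core factorization the combined model builds on can be illustrated with a toy Monte Carlo (ours, not the authors' hierarchical model): overall detection is availability (the bird gives a cue) times perceptibility (the observer detects it, given a cue). Probabilities below are invented:

```python
# Detection = availability x perceptibility, checked by simulation.
import random

random.seed(0)
p_avail, p_perc = 0.7, 0.8
N = 100_000  # birds present across simulated surveys

detected = sum(1 for _ in range(N)
               if random.random() < p_avail and random.random() < p_perc)
print(detected / N)   # close to p_avail * p_perc = 0.56
```

The hierarchical model's job is the inverse problem: recovering both factors (and abundance) from distance and time-of-detection data, rather than assuming them.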
International Nuclear Information System (INIS)
Moiseenko, Vitali; Battista, Jerry; Van Dyk, Jake
2000-01-01
Purpose: To evaluate the impact of dose-volume histogram (DVH) reduction schemes and models of normal tissue complication probability (NTCP) on ranking of radiation treatment plans. Methods and Materials: Data for liver complications in humans and for spinal cord in rats were used to derive input parameters of four different NTCP models. DVH reduction was performed using two schemes: 'effective volume' and 'preferred Lyman'. DVHs for competing treatment plans were derived from a sample DVH by varying dose uniformity in a high dose region so that the obtained cumulative DVHs intersected. Treatment plans were ranked according to the calculated NTCP values. Results: Whenever the preferred Lyman scheme was used to reduce the DVH, competing plans were indistinguishable as long as the mean dose was constant. The effective volume DVH reduction scheme did allow us to distinguish between these competing treatment plans. However, plan ranking depended on the radiobiological model used and its input parameters. Conclusions: Dose escalation will be a significant part of radiation treatment planning using new technologies, such as 3-D conformal radiotherapy and tomotherapy. Such dose escalation will depend on how the dose distributions in organs at risk are interpreted in terms of expected complication probabilities. The present study indicates considerable variability in predicted NTCP values because of the methods used for DVH reduction and radiobiological models and their input parameters. Animal studies and collection of standardized clinical data are needed to ascertain the effects of non-uniform dose distributions and to test the validity of the models currently in use
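One DVH-reduction-plus-NTCP pipeline of the kind compared above can be sketched as follows. This is a generic Lyman model with the DVH reduced to a generalized equivalent uniform dose (EUD); the parameter values are invented for illustration, not the study's fitted values:

```python
# Hedged Lyman-EUD NTCP sketch (TD50, m, n are illustrative).
import math

def eud(dvh, n):
    """Generalized EUD from (dose_Gy, fractional_volume) DVH bins."""
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def lyman_ntcp(dvh, TD50=45.0, m=0.15, n=0.1):
    t = (eud(dvh, n) - TD50) / (m * TD50)
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))   # standard normal CDF

uniform = [(50.0, 1.0)]                  # whole organ at 50 Gy
hetero = [(60.0, 0.3), (30.0, 0.7)]      # hotter third, cooler rest
print(lyman_ntcp(uniform), lyman_ntcp(hetero))
```

With a small volume parameter n (a "serial" organ), the hot subvolume dominates the EUD, so the heterogeneous plan scores a higher NTCP than the uniform one; this sensitivity of plan ranking to the reduction scheme and parameters is exactly the variability the abstract reports.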
de Uña-Álvarez, Jacobo; Meira-Machado, Luís
2015-06-01
Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have been traditionally estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contains the support of the lifetime distribution, which is not often the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless the Markov condition and the referred assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed. © 2015, The International Biometric Society.
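To make the estimand concrete, here is a toy discrete-time illness-death chain (healthy, ill, dead) showing what transition probabilities look like when the process *is* Markov; the matrix values are made up for illustration and this is not the paper's estimator:

```python
# Toy illness-death chain; states: 0 = healthy, 1 = ill, 2 = dead.
P = [[0.90, 0.08, 0.02],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]   # death is absorbing

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transition_prob(s, t, steps):
    """P(in state t after `steps` periods | in state s now)."""
    M = [[float(i == j) for j in range(3)] for i in range(3)]
    for _ in range(steps):
        M = matmul(M, P)
    return M[s][t]

print(transition_prob(0, 2, 5))   # P(dead within 5 periods | healthy)
```

Under the Markov assumption these probabilities follow from matrix powers (the discrete analogue of the Aalen-Johansen product); the paper's contribution is estimators that remain consistent when that assumption fails.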
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
Business model risk analysis: predicting the probability of business network profitability
Johnson, Pontus; Iacob, Maria Eugenia; Valja, Margus; Magnusson, Christer; Ladhe, Tobias; van Sinderen, Marten J.; Oude Luttighuis, P.H.W.M.; Folmer, Erwin Johan Albert; Bosems, S.
In the design phase of business collaboration, it is desirable to be able to predict the profitability of the business-to-be. Therefore, techniques to assess qualities such as costs, revenues, risks, and profitability have been previously proposed. However, they do not allow the modeler to properly
Litchford, Ron J.; Jeng, San-Mou
1992-01-01
The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
Czech Academy of Sciences Publication Activity Database
Janíček, P.; Fuis, Vladimír; Málek, M.
2010-01-01
Roč. 14, č. 4 (2010), s. 42-51 ISSN 1335-2393 Institutional research plan: CEZ:AV0Z20760514 Keywords: computational modeling * ceramic head * in vivo destructions * hip joint endoprosthesis * probability of rupture Subject RIV: BO - Biophysics
Allen, Jenica M; Terres, Maria A; Katsuki, Toshio; Iwamoto, Kojiro; Kobori, Hiromi; Higuchi, Hiroyoshi; Primack, Richard B; Wilson, Adam M; Gelfand, Alan; Silander, John A
2014-04-01
Understanding the drivers of phenological events is vital for forecasting species' responses to climate change. We developed flexible Bayesian survival regression models to assess a 29-year, individual-level time series of flowering phenology from four taxa of Japanese cherry trees (Prunus spachiana, Prunus × yedoensis, Prunus jamasakura, and Prunus lannesiana), from the Tama Forest Cherry Preservation Garden in Hachioji, Japan. Our modeling framework used time-varying (chill and heat units) and time-invariant (slope, aspect, and elevation) factors. We found limited differences among taxa in sensitivity to chill, but earlier flowering taxa, such as P. spachiana, were more sensitive to heat than later flowering taxa, such as P. lannesiana. Using an ensemble of three downscaled regional climate models under the A1B emissions scenario, we projected shifts in flowering timing by 2100. Projections suggest that each taxon will flower about 30 days earlier on average by 2100, with 2-6 days greater uncertainty around the species mean flowering date. Dramatic shifts in the flowering times of cherry trees may have implications for economically important cultural festivals in Japan and East Asia. The survival models used here provide a mechanistic modeling approach and are broadly applicable to any time-to-event phenological data, such as plant leafing, bird arrival time, and insect emergence. The ability to explicitly quantify uncertainty, examine phenological responses on a fine time scale, and incorporate conditions leading up to an event may provide future insight into phenologically driven changes in carbon balance and ecological mismatches of plants and pollinators in natural populations and horticultural crops. © 2013 John Wiley & Sons Ltd.
On the probability distribution of stock returns in the Mike-Farmer model
Gu, G.-F.; Zhou, W.-X.
2009-02-01
Recently, Mike and Farmer have constructed a very powerful and realistic behavioral model to mimic the dynamic process of stock price formation based on the empirical regularities of order placement and cancelation in a purely order-driven market, which can successfully reproduce the whole distribution of returns, including not only the well-known power-law tails but also several other important stylized facts. There are three key ingredients in the Mike-Farmer (MF) model: the long memory of order signs characterized by the Hurst index Hs, the distribution of relative order prices x in reference to the same best price described by a Student distribution (or Tsallis' q-Gaussian), and the dynamics of order cancelation. They showed that different values of the Hurst index Hs and the degree of freedom αx of the Student distribution can always produce power-law tails in the return distribution fr(r) with different tail exponent αr. In this paper, we study the origin of the power-law tails of the return distribution fr(r) in the MF model, based on extensive simulations with different combinations of the left part L(x) for x < 0 and the right part R(x) for x > 0 of fx(x). We find that power-law tails appear only when L(x) has a power-law tail, no matter whether R(x) has a power-law tail or not. In addition, we find that the distributions of returns in the MF model at different timescales can be well modeled by Student distributions, whose tail exponents are close to the well-known cubic law and increase with the timescale.
2014-06-30
set of methods, many of which have their origin in probability in Banach spaces, that arise across a broad range of contemporary problems in different… salesman problem, … • Probability in Banach spaces: probabilistic limit theorems for Banach-valued random variables, empirical processes, local theory of Banach spaces, geometric functional analysis, convex geometry. • Mixing times and other phenomena in high-dimensional Markov chains. At…
International Nuclear Information System (INIS)
Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang
2016-01-01
Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weight for the fusion. • The robustness of the proposed approach is verified by different batteries. • The results show that the proposed method can promote global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant property of batteries, the existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide the accurate SOC for the entire discharging period. This paper aims to present a novel SOC estimation approach based on a multiple ECMs fusion method for improving the practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd order RC model, are selected to describe the dynamic voltage of lithium-ion batteries and the genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models and the Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of the SOC estimation against uncertain battery materials and inaccurate initial states.
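The Bayes-weighted fusion step can be sketched in isolation. This is a hedged illustration of the general idea only (the paper's full approach uses H-infinity observers per model, which we do not reproduce): each model's SOC estimate is weighted by a Gaussian likelihood of its terminal-voltage residual, and all numbers below are invented:

```python
# Hedged sketch: Bayes-weighted fusion of several SOC estimates.
import math

def bayes_fuse(socs, residuals, sigma=0.01):
    """socs: SOC estimate per model; residuals: voltage error (V) per model."""
    likes = [math.exp(-r ** 2 / (2 * sigma ** 2)) for r in residuals]
    total = sum(likes)
    weights = [l / total for l in likes]
    return sum(w * s for w, s in zip(weights, socs)), weights

# Thevenin, double-polarization and 3rd-order RC estimates (hypothetical)
soc, w = bayes_fuse([0.62, 0.60, 0.65], [0.004, 0.002, 0.020])
print(round(soc, 3), [round(x, 2) for x in w])
```

The model that currently explains the measured voltage best receives the largest weight, so the fused estimate tracks whichever ECM is most accurate in each discharging regime.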
International Nuclear Information System (INIS)
Burgazzi, Luciano
2014-01-01
This note addresses some significant issues revealed by the 2011 Fukushima accident in Japan, such as the analysis of various dependency aspects arising within the external-event PSA framework, including the treatment of correlated hazards. To this aim, some foundational notions for implementing PSA models of specific aspects, such as external hazard combinations (e.g., the earthquake and tsunami of the Fukushima accident) and external-hazard-caused internal events (e.g., seismically induced fire), are proposed and discussed for incorporation into the risk assessment structure. Risk assessment of external hazards is required and utilized as an integral part of PRA for operating and new reactor units. In light of the Fukushima accident, correlated events are of special interest; their modelling is proposed in the present study in the form of theoretical concepts that lay the foundations for implementation in the PSA framework. An illustrative example is presented; the analysis is carried out on the basis of generic numerical values assigned to an oversimplified model, and results are obtained without any baseline comparison. Obviously, the first step towards endorsement of the process is the analysis of all available information, in order to determine how applicable the observed plant-site-specific events are to the envisaged model, together with statistical correlation analysis of event occurrence data. Despite these drawbacks, which mean the achieved results are not yet qualified, the present work represents an exploratory study aimed at resolving open issues in PSA related to unanticipated scenarios: combined external hazards, such as the earthquake and tsunami at Fukushima, and external hazards causing internal events, such as seismically induced fire. These topics are to be resolved among the others emerging from the…
Dean, J A; Welsh, L C; Wong, K H; Aleksic, A; Dunne, E; Islam, M R; Patel, A; Patel, P; Petkar, I; Phillips, I; Sham, J; Schick, U; Newbold, K L; Bhide, S A; Harrington, K J; Nutting, C M; Gulliford, S L
2017-04-01
A normal tissue complication probability (NTCP) model of severe acute mucositis would be highly useful to guide clinical decision making and inform radiotherapy planning. We aimed to improve upon our previous model by using a novel oral mucosal surface organ at risk (OAR) in place of an oral cavity OAR. Predictive models of severe acute mucositis were generated using radiotherapy dose to the oral cavity OAR or mucosal surface OAR and clinical data. Penalised logistic regression and random forest classification (RFC) models were generated for both OARs and compared. Internal validation was carried out with 100-iteration stratified shuffle split cross-validation, using multiple metrics to assess different aspects of model performance. Associations between treatment covariates and severe mucositis were explored using RFC feature importance. Penalised logistic regression and RFC models using the oral cavity OAR performed at least as well as the models using mucosal surface OAR. Associations between dose metrics and severe mucositis were similar between the mucosal surface and oral cavity models. The volumes of oral cavity or mucosal surface receiving intermediate and high doses were most strongly associated with severe mucositis. The simpler oral cavity OAR should be preferred over the mucosal surface OAR for NTCP modelling of severe mucositis. We recommend minimising the volume of mucosa receiving intermediate and high doses, where possible. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Bakhshandeh, Mohsen [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mahdavi, Seied Rabi Mehdi [Department of Medical Physics, Faculty of Medical Sciences, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Nikoofar, Alireza; Vasheghani, Maryam [Department of Radiation Oncology, Hafte-Tir Hospital, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Kazemnejad, Anoshirvan [Department of Biostatistics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of)
2013-02-01
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D_50 estimated from the models was approximately 44 Gy. Conclusions: The implemented
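The conversion of thyroid DVHs to 2 Gy/fraction equivalent doses mentioned above follows the standard linear-quadratic relation EQD2 = D(d + α/β)/(2 + α/β), where D is the total dose and d the dose per fraction. A minimal sketch with illustrative numbers only (not the study's data):

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=3.0):
    """2 Gy/fraction equivalent dose under the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# 60 Gy in 30 fractions of 2 Gy is already at reference fractionation,
# so EQD2 equals the physical dose.
print(eqd2(60.0, 2.0))                  # 60.0
# A hypothetical hypofractionated course: 55 Gy in 20 fractions (2.75 Gy/fraction).
print(round(eqd2(55.0, 2.75), 2))       # 63.25
```

With α/β = 3 Gy, doses delivered in fractions larger than 2 Gy are upweighted, which is why hypofractionated schedules carry a higher biologically equivalent dose to late-responding tissue such as the thyroid.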
Nazari, Mahsa; Hashemi Nazari, Saeed; Zayeri, Farid; Gholampour Dehaki, Mehrzad; Akbarzadeh Baghban, Alireza
2018-01-30
Type 2 diabetes is a chronic metabolic disorder and one of the most common non-communicable diseases, and it is on the rise all over the world. The present study aims to assess the trend of change in fasting blood sugar (FBS) and the factors associated with the progression and regression of type 2 diabetes. Moreover, this study estimates transition intensities and transition probabilities among various states using the multi-state Markov model. In this study the Multi-Ethnic Study of Atherosclerosis (MESA) dataset, from a longitudinal study, was used. The study, at the beginning, included 6814 individuals who were followed during the five phases of the study. FBS, serving as the criterion to assess the progression of diabetes, was classified into four states, ranging from (a) normal to (d) diabetes (FBS ≥ 126 mg/dl). A continuous-time Markov process was used to describe the evolution of disease status over the four states. The model estimated the mean sojourn time for each state. Based on the results obtained from fitting the Markov model, the transition probability for a normal individual to remain in the same state over a 10-year period was 0.63, while the probability for a person in the diabetes state was 0.40. The mean sojourn times for the normal and diabetic individuals aged 45-84 years were 6.26 and 5.20 years, respectively. The covariates of age, race, body mass index (BMI), physical activity, waist-to-hip ratio (WHR) and blood pressure significantly affected the progression and regression of diabetes. An increase in physical activity could be the most important factor in the regression of diabetes, while increases in WHR and BMI could be the most significant factors in the progression of the disease. Copyright © 2018 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
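The continuous-time Markov machinery used above can be sketched as follows: given a transition-intensity (generator) matrix Q, the transition probabilities over time t are P(t) = exp(Qt), and the mean sojourn time in state i is −1/q_ii. The generator below is made up for illustration and is not the MESA estimate:

```python
# Hypothetical generator matrix Q (per year) for four FBS states, from
# 'normal' (state 0) to 'diabetes' (state 3). Rows sum to zero.
Q = [
    [-0.10,  0.08,  0.02,  0.00],
    [ 0.05, -0.20,  0.10,  0.05],
    [ 0.00,  0.08, -0.18,  0.10],
    [ 0.00,  0.00,  0.06, -0.06],
]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transition_matrix(Q, t, terms=60):
    """P(t) = exp(Q*t) via a plain Taylor series (adequate for modest Q*t)."""
    n = len(Q)
    Qt = [[q * t for q in row] for row in Q]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, Qt)]   # (Qt)^k / k!
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

P10 = transition_matrix(Q, 10.0)                   # 10-year transition probabilities
sojourn = [-1.0 / Q[i][i] for i in range(len(Q))]  # mean sojourn time per state
print([round(p, 3) for p in P10[0]])               # row for the 'normal' state
print([round(s, 2) for s in sojourn])              # [10.0, 5.0, 5.56, 16.67]
```

Each row of P(t) is a probability distribution, so the rows of P10 sum to one; entry P10[0][0] is the analogue of the study's "probability that a normal individual remains normal over 10 years".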
Inferring the most probable maps of underground utilities using Bayesian mapping model
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables), by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps, and its visualization, is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model, by integrating the knowledge extracted from the sensors' raw data with the available statutory records. The statutory records were combined with the hypotheses from the sensors to provide an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation-maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
International Nuclear Information System (INIS)
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-01-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important, as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important, as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined solely by a mastery of technical details or by overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three of the factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
Jacobsen, J L; Saleur, H
2008-02-29
We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L ≫ 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy, S_VB = ⟨N_c⟩ ln 2 = (4 ln 2 / π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.
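A quick numerical check of the distinction drawn above: the valence-bond prefactor 4 ln 2/π² is close to, but strictly below, the conjectured von Neumann value 1/3:

```python
import math

s_vb = 4 * math.log(2) / math.pi ** 2   # valence-bond coefficient of ln L
s_vn = 1 / 3                            # conjectured von Neumann coefficient of ln L
print(round(s_vb, 4), round(s_vn, 4))   # 0.2809 0.3333
```

The two coefficients differ by about 16%, which is why the asymptotic slopes of S_VB and the von Neumann entropy against ln L cannot coincide.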
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell is studied for the large-scale galaxy distribution. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
International Nuclear Information System (INIS)
Wu Qiang; Cao Leituan; Fu Jiwei; Wang Jingshu; Chen Xi
2014-01-01
Owing to the limitations of funding and test conditions, the number of sub-samples in tests of complex large systems is often small; sometimes only one sub-sample is available. Making an accurate evaluation of performance under such conditions is important for the reinforcement of complex systems. This paper presents a single-sample assessment model for the performance of a complex large system. The inputs to the model are the sample test results obtained under strict test-magnitude conditions, with a total number of pulses N, together with certain statistical characteristics of the typical components that can be obtained by experiment. The output of the model is the lower confidence bound on the survival probability under single-pulse conditions. (authors)
Czech Academy of Sciences Publication Activity Database
Hamilton, A. J.; Novotný, Vojtěch; Waters, E. K.; Basset, Y.; Benke, K. K.; Grimbacher, P. S.; Miller, S. E.; Samuelson, G. A.; Weiblen, G. D.; Yen, J. D. L.; Stork, N. E.
2013-01-01
Roč. 171, č. 2 (2013), s. 357-365 ISSN 0029-8549 R&D Projects: GA MŠk(CZ) LH11008; GA ČR GA206/09/0115 Grant - others:Czech Ministry of Education (CZ) CZ.1.07/2.3.00/20.0064; National Science Foundation (US) DEB-0841885; Otto Kinne Foundation, Darwin Initiative (GB) 19-008 Institutional research plan: CEZ:AV0Z50070508 Institutional support: RVO:60077344 Keywords: host specificity * model * Monte Carlo Subject RIV: EH - Ecology, Behaviour Impact factor: 3.248, year: 2013 http://link.springer.com/article/10.1007%2Fs00442-012-2434-5
Survival under uncertainty: an introduction to probability models of social structure and evolution
Volchenkov, Dimitri
2016-01-01
This book introduces and studies a number of stochastic models of subsistence, communication, social evolution and political transition that will allow the reader to grasp the role of uncertainty as a fundamental property of our irreversible world. At the same time, it aims to bring about a more interdisciplinary and quantitative approach across very diverse fields of research in the humanities and social sciences. Through the examples treated in this work – including anthropology, demography, migration, geopolitics, management, and bioecology, among other things – evidence is gathered to show that volatile environments may change the rules of the evolutionary selection and dynamics of any social system, creating a situation of adaptive uncertainty, in particular, whenever the rate of change of the environment exceeds the rate of adaptation. Last but not least, it is hoped that this book will contribute to the understanding that inherent randomness can also be a great opportunity – for social systems an...
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addicts
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years, Malaysia has faced drug addiction issues. The most serious is the relapse phenomenon among treated drug addicts (drug addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: either relapse (Yes, coded as 1) or not (No, coded as 0). The predictors involved are age, age when first taking drugs, family history, education level, family crisis, community support and self-motivation. The total sample size is 200, with the data provided by AADK (National Antidrug Agency). The findings of the study revealed that age and self-motivation are statistically significant predictors of relapse.
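A fitted binary logistic model of this kind converts an individual's predictor values into a relapse probability through the logistic function. The sketch below uses hypothetical coefficients, not the study's estimates:

```python
import math

def relapse_probability(age, self_motivation, coeffs):
    """Binary logistic model:
    P(relapse = 1) = 1 / (1 + exp(-(b0 + b1*age + b2*self_motivation)))."""
    b0, b_age, b_mot = coeffs
    z = b0 + b_age * age + b_mot * self_motivation
    return 1.0 / (1.0 + math.exp(-z))

# Made-up coefficients for illustration: relapse risk rises with age and
# falls with a self-motivation score on a 0-10 scale.
coeffs = (-1.0, 0.05, -0.40)
print(round(relapse_probability(20, 9.0, coeffs), 3))   # 0.027 (young, highly motivated)
print(round(relapse_probability(45, 2.0, coeffs), 3))   # 0.611 (older, poorly motivated)
```

Each coefficient has the usual interpretation: exp(b) is the multiplicative change in the odds of relapse per unit increase of the corresponding predictor.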
Concepts of probability theory
Pfeiffer, Paul E
1979-01-01
Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or mathematics. Includes problems with answers and six appendixes. 1965 edition.
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects river morphology and the local habitat, and results in riparian land loss, damage to property and infrastructure, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as this allows changes to be predicted and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. In this work, a statistical model for evaluating the probability of erosion, based on a series of independent local variables and using logistic regression, is developed. The main variables affecting erosion are the vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, riverbank slope, riverbed slope, cross-section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. A logistic regression is then formed, which predicts success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by maximum likelihood estimation. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding
Lin, Yi-Shin; Heinke, Dietmar; Humphreys, Glyn W
2015-04-01
In this study, we applied Bayesian-based distributional analyses to examine the shapes of response time (RT) distributions in three visual search paradigms, which varied in task difficulty. In further analyses we investigated two common observations in visual search-the effects of display size and of variations in search efficiency across different task conditions-following a design that had been used in previous studies (Palmer, Horowitz, Torralba, & Wolfe, Journal of Experimental Psychology: Human Perception and Performance, 37, 58-71, 2011; Wolfe, Palmer, & Horowitz, Vision Research, 50, 1304-1311, 2010) in which parameters of the response distributions were measured. Our study showed that the distributional parameters in an experimental condition can be reliably estimated by moderate sample sizes when Monte Carlo simulation techniques are applied. More importantly, by analyzing trial RTs, we were able to extract paradigm-dependent shape changes in the RT distributions that could be accounted for by using the EZ2 diffusion model. The study showed that Bayesian-based RT distribution analyses can provide an important means to investigate the underlying cognitive processes in search, including stimulus grouping and the bottom-up guidance of attention.
Preference Mining Using Neighborhood Rough Set Model on Two Universes.
Zeng, Kai
2016-01-01
Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not available for the cold-start problem, when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous-time random walk model has been employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotic waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting times in the case of the inverse Mittag-Leffler function has the largest probability compared with the power-law model and the logarithmic function model. Monte Carlo simulations of one-dimensional sample paths of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting general ultraslow random motion.
Application of the Numbered Head Together Strategy in the Setting of the STAD Learning Model
Directory of Open Access Journals (Sweden)
Muhammad Mifta Fausan
2016-07-01
Full Text Available This study aims to determine the increase in motivation and biology learning outcomes of students through the implementation of Numbered Head Together (NHT) strategies in the setting of the Student Teams Achievement Division (STAD) learning model based on Lesson Study (LS). The subjects in this study were students of class X IS 1 MAN 3 Malang. The implementation of this study consisted of two cycles, and each cycle consisted of three meetings. The data obtained were analyzed using qualitative and quantitative descriptive statistical analysis. The research instruments used were observation sheets, tests, LS monitoring sheets and questionnaires. The results of this study indicate that the implementation of the NHT strategy in the setting of the STAD learning model based on LS can improve the motivation and biology learning outcomes of students. Students' motivation was 69% in the first cycle and increased to 86% in the second cycle. For cognitive learning outcomes, classical completeness was 74% in the first cycle and increased to 93% in the second cycle. Affective learning outcomes of students were 93% in the first cycle and increased to 100% in the second cycle. Furthermore, psychomotor learning outcomes of students also increased, from 74% in the first cycle to 93% in the second cycle.
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
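The elastic-rebound probabilities referred to above are conditional renewal probabilities: given the time elapsed since the last rupture, the probability of the next event within a forecast horizon follows from the recurrence-interval distribution. The sketch below uses a lognormal form, with a given mean recurrence interval and aperiodicity (coefficient of variation), purely as an illustration; the renewal family and the along-fault parameter averaging of the actual methodology may differ:

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-interval distribution."""
    if t <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_rupture_prob(elapsed, horizon, mean_ri, aperiodicity):
    """P(next event within `horizon` years | `elapsed` years since the last one)."""
    # Match the lognormal's mean and coefficient of variation to the inputs.
    sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
    mu = math.log(mean_ri) - 0.5 * sigma ** 2
    F = lambda t: lognormal_cdf(t, mu, sigma)
    return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

# Hypothetical fault: 150-year mean recurrence interval, aperiodicity 0.5.
print(round(conditional_rupture_prob(100.0, 30.0, 150.0, 0.5), 3))
print(round(conditional_rupture_prob(200.0, 30.0, 150.0, 0.5), 3))  # larger: overdue fault
```

The second probability exceeds the first, which is the elastic-rebound signature: the longer a fault has gone without rupturing relative to its mean recurrence interval, the higher its conditional probability (over the increasing-hazard portion of the renewal distribution).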
Wu, Jian-Xing; Li, Chien-Ming; Chen, Guan-Chun; Ho, Yueh-Ren; Lin, Chia-Hung
2017-04-01
Atherosclerosis and the resultant peripheral arterial disease (PAD) are common complications in patients with type 2 diabetes mellitus or end-stage renal disease and in elderly patients. The prevalence of PAD is higher in patients receiving haemodialysis therapy. For early assessment of arterial occlusion using bilateral photoplethysmography (PPG), features such as changes in pulse transit time and pulse shape, together with bilateral timing differences, could be used to identify the risk level of PAD. Hence, the authors propose a discrete fractional-order integrator to calculate the bilateral area under the systolic peak (AUSP). These indices capture the differences in both the rise timing and the amplitudes of the PPG signals. The dexter and sinister AUSP ratios were preliminarily used to separate the normal condition from low/high risk of PAD. Then, a transition probability-based decision-making model was employed to evaluate the risk levels. The joint probability could be specified as a critical threshold, < 0.81, to identify true positives when screening for a low or high risk level of PAD, referring to the patients' health records. In contrast to the bilateral timing differences and traditional methods, the proposed model showed better efficiency in PAD assessment and provides a promising strategy for implementation in an embedded system.
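A discrete fractional-order integrator of the kind mentioned above can be built from the binomial weights of (1 − z⁻¹)^(−α), a Grünwald-Letnikov-style construction; the authors' exact discretisation is not reproduced here. For α = 1 it reduces to an ordinary running sum, so a conventional area under a pulse is recovered as a special case:

```python
def fractional_integrate(signal, alpha, h=1.0):
    """Discrete Grünwald-Letnikov-style fractional-order integrator of order alpha.
    Generic sketch only; the paper's exact discretisation may differ."""
    n = len(signal)
    # Binomial weights of (1 - z^-1)^(-alpha): c_0 = 1, c_k = c_{k-1}*(k-1+alpha)/k.
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 + alpha) / k)
    return [h ** alpha * sum(c[k] * signal[i - k] for k in range(i + 1))
            for i in range(n)]

pulse = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]        # toy systolic-peak-like pulse
area_full = fractional_integrate(pulse, 1.0)  # ordinary cumulative sum
area_half = fractional_integrate(pulse, 0.5)  # half-order integral
print(area_full[-1])                          # 4.0, the plain area under the pulse
print(round(area_half[-1], 3))                # 1.273
```

Varying α between 0 and 1 interpolates between the raw signal and its running integral, which is what makes a fractional-order AUSP sensitive to pulse shape as well as to area.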
Information behavior versus communication: application models in multidisciplinary settings
Directory of Open Access Journals (Sweden)
Cecília Morena Maria da Silva
2015-05-01
Full Text Available This paper deals with information behavior as support for the design of communication models in the areas of Information Science, Librarianship and Music. The proposition of communication models is based on the models of Tubbs and Moss (2003) and Garvey and Griffith (1972), adapted by Hurd (1996), and Wilson (1999). The following questions therefore arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is the informational behavior of users in the educational environment?; (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area?; and (iii) how do the contexts involved in scientific collaboration processes influence the scientific production of the information science field in Brazil? The article includes a literature review on information behavior and its insertion in scientific communication, considering the influence of the context and/or situation of the objects involved in the motivating issues. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept or set of concepts can be used from different perspectives, thus reaching different results.
KFUPM-KAUST Red Sea model: Digital viscoelastic depth model and synthetic seismic data set
Al-Shuhail, Abdullatif A.
2017-06-01
The Red Sea is geologically interesting due to its unique structures and abundant mineral and petroleum resources, yet no digital geologic models or synthetic seismic data of the Red Sea are publicly available for testing algorithms to image and analyze the area's interesting features. This study compiles a 2D viscoelastic model of the Red Sea and calculates a corresponding multicomponent synthetic seismic data set. The model and data set are made publicly available for download. We hope this effort will encourage interested researchers to test their processing algorithms on this data set and model and share their results publicly as well.
Non-Rigid Object Contour Tracking via a Novel Supervised Level Set Model.
Sun, Xin; Yao, Hongxun; Zhang, Shengping; Li, Dong
2015-11-01
In this paper, we present a novel approach to non-rigid object contour tracking based on a supervised level set model (SLSM). In contrast to most existing trackers that use a bounding box to specify the tracked target, the proposed method extracts the accurate contours of the target as the tracking output, which achieves a better description of non-rigid objects while reducing background pollution of the target model. Moreover, conventional level set models only emphasize regional intensity consistency and consider no priors. Differently, the curve evolution of the proposed SLSM is object-oriented and supervised by specific knowledge of the targets we want to track. Therefore, the SLSM can ensure a more accurate convergence to the exact targets in tracking applications. In particular, we first construct the appearance model for the target in an online boosting manner, owing to its strong discriminative power between the object and the background. Then, the learnt target model is incorporated to model the probabilities of the level set contour in a Bayesian manner, leading the curve to converge to the candidate region with the maximum likelihood of being the target. Finally, the accurate target region qualifies the samples fed to the boosting procedure as well as the target model prepared for the next time step. We first describe the proposed mechanism of the two-phase SLSM for single-target tracking, and then give its generalized multi-phase version for dealing with multi-target tracking cases. A positive decrease rate is used to adjust the learning pace over time, enabling tracking to continue under partial and total occlusion. Experimental results on a number of challenging sequences validate the effectiveness of the proposed method.
Fuzzy sets as extension of probabilistic models for evaluating human reliability
International Nuclear Information System (INIS)
Przybylski, F.
1996-11-01
On the basis of a survey of established quantification methodologies for evaluating human reliability, a new computerized methodology was developed in which user uncertainties are given differentiated consideration. In this quantification method, FURTHER (FUzzy Sets Related To Human Error Rate Prediction), user uncertainties are quantified separately from model and data uncertainties. Fuzzy sets are applied as tools, which, however, stay hidden from the method's user. In the quantification process, the user only chooses an action pattern, performance shaping factors and natural-language expressions. The acknowledged method HEART (Human Error Assessment and Reduction Technique) serves as the foundation of the fuzzy set approach FURTHER. For this method, the selection of a basic task in connection with its basic error probability, the decision as to how correct the basic task's selection is, the selection of a performance shaping factor, and the decision as to how correct that selection is and how important the performance shaping factor is, were identified as aspects for fuzzification. This fuzzification is made on the basis of data collection and information from the literature, as well as estimation by competent persons. To verify the amount of additional information gained by the use of fuzzy sets, a benchmark session was conducted in which twelve actions were assessed by five test persons. For the same degree of detail in the action modelling process, the bandwidths of the interpersonal evaluations decrease in FURTHER in comparison with HEART. The uncertainties of the individual results could not yet be reduced. The benchmark sessions conducted so far showed plausible results. Further testing of the fuzzy set approach using better-confirmed fuzzy sets can only be achieved in future practical application; adequate procedures, however, are provided. (orig.) [de
Directory of Open Access Journals (Sweden)
R. Ragab
2001-01-01
Full Text Available This paper addresses the issue of "what reservoir storage capacity is required to maintain a yield with a given probability of failure?". It is an important issue in terms of construction and cost. HYDROMED offers a solution based on the modified Gould probability matrix method. This method has the advantage of sampling all years of data without reference to their sequence, and it is therefore particularly suitable for catchments with patchy data. In the HYDROMED model, the probability of failure is calculated on a monthly basis. The model has been applied to the El-Gouazine catchment in Tunisia, using a long rainfall record from Kairouan together with the estimated Hortonian runoff, class A pan evaporation data and estimated abstraction data. The probability of failure differed from winter to summer, and it approaches zero when the reservoir capacity is 500,000 m3. The 25% probability of failure (75% success) is achieved with a reservoir capacity of 58,000 m3 in June and 95,000 m3 in January. The probability of failure for a 240,000 m3 capacity reservoir (close to the 233,000 m3 storage capacity of El-Gouazine) is approximately 5% in November, December and January, 3% in March, and 1.1% in May and June. Consequently, there is no high risk of El-Gouazine being unable to meet its requirements at a capacity of 233,000 m3. Accordingly, the benefit, in terms of probability of failure, of increasing the reservoir volume of El-Gouazine beyond 250,000 m3 is not high. This is important for the design engineers and the funding organizations. However, the analysis is based on the existing water abstraction policy, the absence of siltation rate data, and the assumption that the present climate will prevail during the lifetime of the reservoir. Should these conditions change, a new analysis should be carried out. Keywords: HYDROMED, reservoir, storage capacity, probability of failure, Mediterranean
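The idea of a monthly probability of failure can be illustrated with a crude Monte Carlo analogue: simulate monthly inflows against a fixed demand and count the months in which the reservoir runs dry. The inflow and demand figures below are synthetic; the actual HYDROMED model works from the modified Gould probability matrix applied to observed records, not from this toy simulation:

```python
import random

random.seed(1)

def failure_probability(capacity_m3, years=2000):
    """Fraction of simulated months in which the reservoir cannot meet demand."""
    # Crude seasonal pattern (m3/month): wetter winters, drier summers.
    mean_inflow = [30000, 28000, 25000, 18000, 10000, 5000,
                   3000, 3000, 8000, 15000, 22000, 28000]
    demand = 12000
    storage, failures, months = capacity_m3 / 2.0, 0, 0
    for _ in range(years):
        for m in range(12):
            inflow = random.expovariate(1.0 / mean_inflow[m])
            storage = min(capacity_m3, storage + inflow)   # spill above capacity
            if storage >= demand:
                storage -= demand
            else:
                storage, failures = 0.0, failures + 1
            months += 1
    return failures / months

small, large = failure_probability(58000), failure_probability(500000)
print(round(small, 3), round(large, 3))  # failure risk shrinks as capacity grows
```

The qualitative behaviour matches the abstract: with a small reservoir, dry summer months regularly empty the store, whereas a sufficiently large reservoir carries winter surpluses across the summer and the failure probability approaches zero.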
Indian Academy of Sciences (India)
OF PROBABILITY. The simplest laws of natural science are those that state the conditions under which some event of interest to us will either certainly occur or certainly not occur; i.e., these conditions may be expressed in one of the following two forms: 1. If a complex (i.e., a set or collection) of conditions S is realized, then.
Evolution of autocatalytic sets in a competitive percolation model
International Nuclear Information System (INIS)
Zhang, Renquan; Pei, Sen; Wei, Wei; Zheng, Zhiming
2014-01-01
The evolution of autocatalytic sets (ACSs) is a widespread process in biological, chemical, and ecological systems and is of great significance in many applications such as the evolution of new species or complex chemical organization. In this paper, we propose a competitive model with an m-selection rule in which an abrupt emergence of a macroscopic independent ACS is observed. By numerical simulations, we find that the maximal increase of the size grows linearly with the system size. We analytically derive the threshold t_α where the explosive transition occurs and verify it by simulations. Moreover, our analysis explains how this giant independent ACS grows and reveals that, as the selection rule becomes stricter, the phase transition is dramatically postponed, and the number of the largest independent ACSs coexisting in the system increases accordingly. Our result indicates that suppression during evolution could lead to the abrupt appearance of giant ACSs. (paper)
Mathematical Modelling with Fuzzy Sets of Sustainable Tourism Development
Directory of Open Access Journals (Sweden)
Nenad Stojanović
2011-10-01
Full Text Available In the first part of the study we introduce fuzzy sets that correspond to comparative indicators for measuring sustainable tourism development. In the second part, it is shown, on the basis of the model created, how one can determine the value of sustainable tourism development in protected areas from the following established groups of indicators: economic status, the impact of tourism on the social component, the impact of tourism on cultural identity, environmental conditions, and tourist satisfaction, all using fuzzy logic. It is also shown how to test confidence in the rules by which, according to experts, appropriate decisions can be made in order to protect the biodiversity of protected areas.
Ivanek, R; Gröhn, Y T; Wells, M T; Lembo, A J; Sauders, B D; Wiedmann, M
2009-09-01
Many pathogens have the ability to survive and multiply in abiotic environments, representing a possible reservoir and source of human and animal exposure. Our objective was to develop a methodological framework to study spatially explicit environmental and meteorological factors affecting the probability of pathogen isolation from a location. Isolation of Listeria spp. from the natural environment was used as a model system. Logistic regression and classification tree methods were applied, and their predictive performances were compared. Analyses revealed that precipitation and occurrence of alternating freezing and thawing temperatures prior to sample collection, loam soil, water storage to a soil depth of 50 cm, slope gradient, and cardinal direction to the north are key predictors for isolation of Listeria spp. from a spatial location. Different combinations of factors affected the probability of isolation of Listeria spp. from the soil, vegetation, and water layers of a location, indicating that the three layers represent different ecological niches for Listeria spp. The predictive power of classification trees was comparable to that of logistic regression. However, the former were easier to interpret, making them more appealing for field applications. Our study demonstrates how the analysis of a pathogen's spatial distribution improves understanding of the predictors of the pathogen's presence in a particular location and could be used to propose novel control strategies to reduce human and animal environmental exposure.
International Nuclear Information System (INIS)
Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.
2012-01-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011–0.013) clinical factor was “previous abdominal surgery.” As second significant (p = 0.012–0.016) factor, “cardiac history” was included in all three rectal bleeding fits, whereas including “diabetes” was significant (p = 0.039–0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefited significantly (p = 0.003–0.006) from the inclusion of the baseline toxicity score. Of all models, rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions
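The LKB fit referred to above reduces a dose-volume histogram to a generalized EUD and maps it through a probit function. A minimal sketch, with purely illustrative parameter values rather than the values fitted in this study:

```python
import math

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP for a differential DVH.

    doses/volumes: bins of a differential dose-volume histogram
    n: volume-effect parameter used in the gEUD reduction
    m: slope parameter; td50: dose giving 50% complication probability
    """
    total = sum(volumes)
    geud = sum((v / total) * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

# Uniform irradiation exactly at TD50 gives, by construction, NTCP = 0.5
p_at_td50 = lkb_ntcp([80.0], [1.0], n=0.09, m=0.13, td50=80.0)
```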
Hartmann, Stephan
2011-01-01
Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...
Directory of Open Access Journals (Sweden)
N. SHAHRAKI
2013-03-01
Full Text Available Water scarcity is a major problem in arid and semi-arid areas, further stressed by the growing demand due to population growth in developing countries. Climate change and its effects on precipitation and water resources are another problem in these areas. Several models are widely used for modeling daily precipitation occurrence. In this study, the Markov chain model has been used to study spell distribution, with one day taken as the unit length of time. Given the assumption that the Markov chain model is the right model for daily precipitation occurrence, the choice of Markov model order was examined on a daily basis for 4 synoptic weather stations with different climates in Iran (Gorgan, Khorram Abad, Zahedan, Tabriz) during 1978-2009. Based on probability rules for the occurrence of sequences of dry and wet days, these data were analyzed as a stochastic process by the Markov chain method. The transition probability matrix was then calculated by the maximum likelihood method, and the probabilities of runs of 2-5 consecutive dry and wet days were computed. The results showed that the maximum probability of consecutive dry periods and the climatic probability of dry days occurred in Zahedan. The probability of a consecutive dry period fluctuated from 73.3 to 100 percent, and the climatic probability of occurrence of dry days ranged from 70.96 to 100 percent, with an average probability of about 90.45 percent.
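The first-order Markov chain calculation described above comes down to counting day-to-day transitions. A minimal sketch (the sequence below is a toy example, not the Iranian station data):

```python
def transition_probs(days):
    """Maximum-likelihood transition probabilities from a daily record,
    where 'D' marks a dry day and 'W' a wet day."""
    counts = {"DD": 0, "DW": 0, "WD": 0, "WW": 0}
    for today, tomorrow in zip(days, days[1:]):
        counts[today + tomorrow] += 1
    p_dd = counts["DD"] / (counts["DD"] + counts["DW"])  # P(dry -> dry)
    p_ww = counts["WW"] / (counts["WW"] + counts["WD"])  # P(wet -> wet)
    return p_dd, p_ww

def dry_spell_prob(p_dd, k):
    """Probability that a dry day is followed by at least k-1 more dry days."""
    return p_dd ** (k - 1)

p_dd, p_ww = transition_probs("DDDWDDDDWD")
```

The same counting generalizes to higher-order chains by conditioning on the previous several days instead of one.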
Directory of Open Access Journals (Sweden)
Helli Merica
Full Text Available Little attention has gone into linking to its neuronal substrates the dynamic structure of non-rapid-eye-movement (NREM) sleep, defined as the pattern of time-course power in all frequency bands across an entire episode. Using the spectral power time-courses in the sleep electroencephalogram (EEG), we showed in the typical first episode several moves towards-and-away from deep sleep, each having an identical pattern linking the major frequency bands beta, sigma and delta. The neuronal transition probability model (NTP)--in fitting the data well--successfully explained the pattern as resulting from stochastic transitions of the firing-rates of the thalamically-projecting brainstem-activating neurons, alternating between two steady dynamic-states (towards-and-away from deep sleep), each initiated by a so-far unidentified flip-flop. The aims here are to identify this flip-flop and to demonstrate that the model fits well all NREM episodes, not just the first. Using published data on suprachiasmatic nucleus (SCN) activity, we show that the SCN has the information required to provide a threshold-triggered flip-flop for TIMING the towards-and-away alternations, information provided by sleep-relevant feedback to the SCN. NTP then determines the PATTERN of spectral power within each dynamic-state. NTP was fitted to individual NREM episodes 1-4, using data from 30 healthy subjects aged 20-30 years, and the quality of fit for each NREM episode measured. We show that the model fits well all NREM episodes and the best-fit probability-set is found to be effectively the same in fitting all subject data. The significant model-data agreement, the constant probability parameter and the proposed role of the SCN add considerable strength to the model. With it we link for the first time findings at cellular level and detailed time-course data at EEG level, to give a coherent picture of NREM dynamics over the entire night and over hierarchic brain levels all the way from the SCN
Probability of Ship on Collision Courses Based on the New PAW Using MMG Model and AIS Data
Directory of Open Access Journals (Sweden)
I Putu Sindhu Asmara
2015-03-01
Full Text Available This paper proposes an estimation method for ships on collision courses taking crash astern maneuvers, based on a new potential area of water (PAW) for maneuvering. A crash astern maneuver is an emergency option a ship can take when exposed to the risk of a collision with other ships that have lost control. However, lateral forces and yaw moments exerted by the reversing propeller, as well as the uncertainty of the initial speed and initial yaw rate, will move the ship out of the intended stopping position, landing it in a dangerous area. A new PAW for crash astern maneuvers is thus introduced. The PAW is developed based on a probability density function of the initial yaw rate. Distributions of the yaw rates and speeds are analyzed from automatic identification system (AIS) data in Madura Strait, and estimated paths of the maneuvers are simulated using a mathematical maneuvering group (MMG) model.
López, E; Ibarz, E; Herrera, A; Puértolas, S; Gabarre, S; Más, Y; Mateo, J; Gil-Albarova, J; Gracia, L
2016-07-01
Osteoporotic vertebral fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture from bone mineral density (BMD) measurements. A previously developed model, based on the Damage and Fracture Mechanics, was applied for the evaluation of the mechanical magnitudes involved in the fracture process from clinical BMD measurements. BMD evolution in untreated patients and in patients with seven different treatments was analyzed from clinical studies in order to compare the variation in the risk of fracture. The predictive model was applied in a finite element simulation of the whole lumbar spine, obtaining detailed maps of damage and fracture probability, identifying high-risk local zones at vertebral body. For every vertebra, strontium ranelate exhibits the highest decrease, whereas minimum decrease is achieved with oral ibandronate. All the treatments manifest similar trends for every vertebra. Conversely, for the natural BMD evolution, as bone stiffness decreases, the mechanical damage and fracture probability show a significant increase (as it occurs in the natural history of BMD). Vertebral walls and external areas of vertebral end plates are the zones at greatest risk, in coincidence with the typical locations of osteoporotic fractures, characterized by a vertebral crushing due to the collapse of vertebral walls. This methodology could be applied for an individual patient, in order to obtain the trends corresponding to different treatments, in identifying at-risk individuals in early stages of osteoporosis and might be helpful for treatment decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Zhai, L.
2017-12-01
Plant communities can be simultaneously affected by human activities and climate change, and quantifying and predicting this combined effect through a model framework validated by field data is complex, but very useful to conservation management. Plant communities in the Everglades provide a unique set of conditions to develop and validate such a framework, because they are experiencing both intensive human activities (such as altered hydroperiod from drainage and restoration projects, nutrients from upstream agriculture, prescribed fire, etc.) and climate changes (such as warming, changing precipitation patterns, sea level rise, etc.). More importantly, previous research has focused on plant communities in the slough ecosystem (including ridge, slough and their tree islands); very few studies consider the marl prairie ecosystem. Compared with the slough ecosystem, which remains consistently flooded almost year-round, the marl prairie has a relatively short hydroperiod (flooded only during the wet season of the year). Therefore, plant communities of the marl prairie may be more strongly affected by hydroperiod change. In addition to hydroperiod, fire and nutrients also affect the plant communities in the marl prairie. Therefore, to quantify the combined effects of water level, fire, and nutrients on the composition of the plant communities, we are developing a vegetation dynamics model based on a joint probability method. Further, the model is being validated with field data on changes of vegetation assemblages along environmental gradients in the marl prairie. Our poster showed preliminary data from our current project.
Assessing the clinical probability of pulmonary embolism
International Nuclear Information System (INIS)
Miniati, M.; Pistolesi, M.
2001-01-01
% in those with low probability. The prevalence of PE in patients with intermediate clinical probability was 41%. These results underscore the importance of incorporating the standardized reading of the electrocardiogram and of the chest radiograph into the clinical evaluation of patients with suspected PE. The interpretation of these laboratory data, however, requires experience. Future research is needed to develop standardized models, of varying degree of complexity, which may find application in different clinical settings to predict the probability of PE
Gabryś, Hubert S; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark
2018-01-01
The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0-6 months (early), 6-15 months (late), 15-24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). Long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right-left (AUCs > 0.78), and the anterior-posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose-volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods. We demonstrated that incorporation of organ- and dose-shape descriptors is beneficial for xerostomia prediction in highly conformal radiotherapy treatments. Due to strong reliance on patient-specific, dose-independent factors, our results underscore the need for development of personalized data-driven risk profiles for NTCP models of xerostomia. The facilitated
Directory of Open Access Journals (Sweden)
Hubert S. Gabryś
2018-03-01
Full Text Available Purpose: The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. Material and methods: A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0–6 months (early), 6–15 months (late), 15–24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. Results: NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior–posterior (AUC = 0.72) and the right–left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right–left (AUCs > 0.78), and the anterior–posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose–volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing
Energy Technology Data Exchange (ETDEWEB)
Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru, E-mail: rkikuchi@edge.ifs.tohoku.ac.jp [Institute of Fluid Science, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, Miyagi 980-8577 (Japan)
2015-10-15
An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation, and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at a Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF in representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier–Stokes equations. (paper)
Butler, Doug; Bauman, David; Johnson-Throop, Kathy
2011-01-01
The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.
International Nuclear Information System (INIS)
Levegruen, Sabine; Jackson, Andrew; Zelefsky, Michael J.; Venkatraman, Ennapadam S.; Skwarchuk, Mark W.; Schlegel, Wolfgang; Fuks, Zvi; Leibel, Steven A.; Ling, C. Clifton
2000-01-01
Purpose: To investigate tumor control following three-dimensional conformal radiation therapy (3D-CRT) of prostate cancer and to identify dose-distribution variables that correlate with local control assessed through posttreatment prostate biopsies. Methods and Material: Data from 132 patients, treated at Memorial Sloan-Kettering Cancer Center (MSKCC), who had a prostate biopsy 2.5 years or more after 3D-CRT for T1c-T3 prostate cancer with prescription doses of 64.8-81 Gy were analyzed. Variables derived from the dose distribution in the PTV included: minimum dose (Dmin), maximum dose (Dmax), mean dose (Dmean), and dose to n% of the PTV (Dn), where n = 1%, ..., 99%. The concept of the equivalent uniform dose (EUD) was evaluated for different values of the surviving fraction at 2 Gy (SF2). Four tumor control probability (TCP) models (one phenomenologic model using a logistic function and three Poisson cell kill models) were investigated using two sets of input parameters, one for low and one for high T-stage tumors. Application of both sets to all patients was also investigated. In addition, several tumor-related prognostic variables were examined (including T-stage and Gleason score). Univariate and multivariate logistic regression analyses were performed. The ability of the logistic regression models (univariate and multivariate) to predict the biopsy result correctly was tested by performing cross-validation analyses and evaluating the results in terms of receiver operating characteristic (ROC) curves. Results: In univariate analysis, prescription dose (Dprescr), Dmax, Dmean, and dose to n% of the PTV with n of 70% or less correlate with outcome. Concerning SF2, EUD correlates significantly with outcome for SF2 of 0.4 or more, but not for lower SF2 values. Using either of the two input parameter sets, all TCP models correlate with outcome. The usefulness of the EUD at low SF2 values, which is then dominated by the minimum dose, is limited because the low dose region may not coincide with the tumor location. Instead, for MSKCC prostate cancer patients with their
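The EUD evaluated above is, in Niemierko's cell-kill formulation, the uniform dose that yields the same mean clonogen survival as the actual dose distribution, given a surviving fraction at 2 Gy (SF2). A sketch with made-up DVH numbers:

```python
import math

def eud(doses, volumes, sf2):
    """Equivalent uniform dose: solve sf2**(EUD/2) equal to the
    volume-weighted mean of sf2**(d_i/2) over the DVH bins."""
    total = sum(volumes)
    mean_sf = sum((v / total) * sf2 ** (d / 2.0) for d, v in zip(doses, volumes))
    return 2.0 * math.log(mean_sf) / math.log(sf2)

# A cold spot pulls the EUD toward the minimum dose when SF2 is small,
# illustrating why low-SF2 EUD is dominated by the low-dose region.
uniform = eud([70.0, 70.0], [0.5, 0.5], sf2=0.5)
cold_spot = eud([70.0, 60.0], [0.5, 0.5], sf2=0.5)
```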
Introduction to probability and statistics for science, engineering, and finance
Rosenkrantz, Walter A
2008-01-01
Data Analysis Orientation The Role and Scope of Statistics in Science and Engineering Types of Data: Examples from Engineering, Public Health, and Finance The Frequency Distribution of a Variable Defined on a Population Quantiles of a Distribution Measures of Location (Central Value) and Variability Covariance, Correlation, and Regression: Computing a Stock's Beta Mathematical Details and Derivations Large Data Sets Probability Theory Orientation Sample Space, Events, Axioms of Probability Theory Mathematical Models of Random Sampling Conditional Probability and Baye
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
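The change-point encoding at the heart of the model can be illustrated with its simplest special case: the maximum-likelihood location of a single change in a Bernoulli sequence. This toy sketch is not the authors' full multi-change-point algorithm, only the building block:

```python
import math

def bernoulli_loglik(seq):
    """Log-likelihood of a 0/1 segment under its own ML rate estimate."""
    n, k = len(seq), sum(seq)
    if k == 0 or k == n:
        return 0.0  # a degenerate rate of 0 or 1 fits the segment perfectly
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def best_change_point(seq):
    """Split index maximizing the summed log-likelihood of the two
    segments, each fitted with its own Bernoulli rate."""
    return max(range(1, len(seq)),
               key=lambda i: bernoulli_loglik(seq[:i]) + bernoulli_loglik(seq[i:]))

cp = best_change_point([0] * 25 + [1] * 15)
```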
Models of Music Therapy Intervention in School Settings
Wilson, Brian L., Ed.
2002-01-01
This completely revised 2nd edition edited by Brian L. Wilson, addresses both theoretical issues and practical applications of music therapy in educational settings. 17 chapters written by a variety of authors, each dealing with a different setting or issue. A valuable resource for demonstrating the efficacy of music therapy to school…
Chapman, Brian
2017-06-01
This paper seeks to develop a more thermodynamically sound pedagogy for students of biological transport than is currently available from either of the competing schools of linear non-equilibrium thermodynamics (LNET) or Michaelis-Menten kinetics (MMK). To this end, a minimal model of facilitated diffusion was constructed comprising four reversible steps: cis-substrate binding, cis → trans bound-enzyme shuttling, trans-substrate dissociation and trans → cis free-enzyme shuttling. All model parameters were subject to the second law constraint of the probability isotherm, which determined the unidirectional and net rates for each step and for the overall reaction through the law of mass action. Rapid equilibration scenarios require sensitive 'tuning' of the thermodynamic binding parameters to the equilibrium substrate concentration. All non-equilibrium scenarios show sigmoidal force-flux relations, with only a minority of cases having their quasi-linear portions close to equilibrium. Few cases fulfil the expectations of MMK relating reaction rates to enzyme saturation. This new approach illuminates and extends the concept of rate-limiting steps by focusing on the free energy dissipation associated with each reaction step and thereby deducing its respective relative chemical impedance. The crucial importance of an enzyme's being thermodynamically 'tuned' to its particular task, dependent on the cis- and trans-substrate concentrations with which it deals, is consistent with the occurrence of numerous isoforms for enzymes that transport a given substrate in physiologically different circumstances. This approach to kinetic modelling, being aligned with neither MMK nor LNET, is best described as intuitive non-equilibrium thermodynamics, and is recommended as a useful adjunct to the design and interpretation of experiments in biotransport.
International Nuclear Information System (INIS)
Kosterev, V.V.; Boliatko, V.V.; Gusev, S.M.; Panin, M.P.; Averkin, A.N.
1998-01-01
Computer software for risk assessment of transportation of important freight has been developed. It incorporates models of transport accidents, including terrorist attacks. These models use, among the others, input data of cartographic character. Geographic information system technology and electronic maps of a geographic area are involved as an instrument for handling this kind of data. Fuzzy set theory methods as well as standard methods of probability theory have been used for quantitative risk assessment. Fuzzy algebraic operations and their computer realization are discussed. Risk assessment for one particular route of railway transportation is given as an example. (author)
Olariu, Elena; Cadwell, Kevin K; Hancock, Elizabeth; Trueman, David; Chevrou-Severac, Helene
2017-01-01
Although Markov cohort models represent one of the most common forms of decision-analytic models used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research taskforce guidance. This provided limited guidance on the use of rates and probabilities. There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost-effectiveness analysis in the decision-making processes of HTA bodies and other medical decision-makers, there is a need for additional guidance to inform a more consistent approach to decision-analytic modeling. Further research should be done to
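A central point in the rates-versus-probabilities guidance mentioned above is that the two are not interchangeable: under a constant-rate (exponential) assumption they convert as p = 1 − exp(−rt), which is also the correct way to rescale a probability from one cycle length to another. A sketch:

```python
import math

def rate_to_probability(rate, cycle_length=1.0):
    """Per-cycle transition probability implied by a constant event rate,
    assuming exponentially distributed event times."""
    return 1.0 - math.exp(-rate * cycle_length)

def probability_to_rate(p, period=1.0):
    """Constant rate implied by a probability observed over `period`."""
    return -math.log(1.0 - p) / period

# Rescaling a 5-year probability of 0.40 to an annual model cycle
# (note this is NOT simply 0.40 / 5 = 0.08):
annual_p = rate_to_probability(probability_to_rate(0.40, period=5.0))
```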
Collision Probability Analysis
DEFF Research Database (Denmark)
Hansen, Peter Friis; Pedersen, Preben Terndrup
1998-01-01
It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
2014-01-01
that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models are comparable.
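The Kennard-Stone algorithm named above is short enough to sketch. This is a generic illustration with plain Euclidean distance and toy points, not the implementation used in the study:

```python
import math

def kennard_stone(X, k):
    """Select k training indices from X (a list of feature vectors) by the
    Kennard-Stone algorithm: start with the two most distant points, then
    repeatedly add the point whose nearest selected neighbour is farthest."""
    def d(a, b):
        return math.dist(a, b)
    n = len(X)
    # Seed with the pair at maximum distance
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: d(X[ij[0]], X[ij[1]]))
    selected = [i0, j0]
    remaining = set(range(n)) - set(selected)
    while len(selected) < k:
        nxt = max(remaining,
                  key=lambda r: min(d(X[r], X[s]) for s in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

pts = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 4), (2, 2)]
train_idx = kennard_stone(pts, 4)
```

Each selected point maximizes its distance to the nearest already-selected point, which is what makes the resulting training set space-filling.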
Wei Wu; Charles Hall; Lianjun Zhang
2006-01-01
We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in northeastern Puerto Rico using four different models. The probability of cloud cover (defined as “the percentage of the area covered by clouds in each pixel on the map” in this paper) at any hour and any place is a function of three topographic variables...
Coverage Probability of Random Intervals
Chen, Xinjia
2007-01-01
In this paper, we develop a general theory on the coverage probability of random intervals defined in terms of discrete random variables with continuous parameter spaces. The theory shows that the minimum coverage probabilities of random intervals with respect to corresponding parameters are achieved at discrete finite sets and that the coverage probabilities are continuous and unimodal when parameters are varying in between interval endpoints. The theory applies to common important discrete ...
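The oscillating, sub-nominal coverage behavior the theory describes is easy to exhibit numerically. The sketch below computes the exact coverage probability of the textbook Wald interval for a binomial proportion (our illustrative choice of interval, not one from the paper):

```python
import math

def wald_interval(k, n, z=1.96):
    """Approximate 95% Wald interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    phat = k / n
    half = z * math.sqrt(phat * (1 - phat) / n)
    return (phat - half, phat + half)

def coverage(p, n, interval=wald_interval):
    """Exact coverage probability at parameter p: sum the binomial pmf
    over every outcome k whose interval contains p."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1)
               if interval(k, n)[0] <= p <= interval(k, n)[1])

c = coverage(0.05, n=30)
```

At p = 0.05 with n = 30 the exact coverage falls far below the nominal 95%, which is why minimum coverage must be located by direct evaluation over the parameter space rather than assumed from the nominal level.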
Modeling Unobserved Consideration Sets for Household Panel Data
J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans)
2000-01-01
We propose a new method to model consumers' consideration and choice processes. We develop a parsimonious probit type model for consideration and a multinomial probit model for choice, given consideration. Unlike earlier models of consideration ours is not prone to the curse of
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2010-01-01
This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications...
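For the multinomial logit special case the CPGF is the log-sum-exp function, and its gradient recovers the familiar choice probabilities. A quick numerical check of that property (a sketch of the special case only, not the paper's general construction):

```python
import math

def cpgf_logit(v):
    """CPGF of the multinomial logit model: G(v) = log sum_i exp(v_i)."""
    m = max(v)  # stabilise the log-sum-exp
    return m + math.log(sum(math.exp(x - m) for x in v))

def choice_probs(v, h=1e-6):
    """Choice probabilities as the numerical gradient of the CPGF."""
    probs = []
    for i in range(len(v)):
        vp = list(v); vp[i] += h
        vm = list(v); vm[i] -= h
        probs.append((cpgf_logit(vp) - cpgf_logit(vm)) / (2 * h))
    return probs

v = [1.0, 2.0, 0.5]
p = choice_probs(v)
```

The numerical gradient matches the softmax formula exp(v_i)/Σ exp(v_j), and the probabilities sum to one, as the CPGF theory requires.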
International Nuclear Information System (INIS)
Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van't; Langendijk, Johannes A.; Schilstra, Cornelis
2012-01-01
Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data-driven variable selection. This study quantifies the risk of overfitting in a data-driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference in true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
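The bootstrapped forward-selection idea can be illustrated on synthetic data. The sketch below bootstraps only the first selection step, scoring candidates by absolute correlation; the data-generating setup (one informative covariate plus noise) is invented for illustration:

```python
import random

random.seed(7)
n, p = 60, 5
# y depends only on the first covariate; the rest are pure noise
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [row[0] + random.gauss(0, 0.5) for row in X]

def abs_corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return abs(sxy) / (sxx * syy) ** 0.5

# Bootstrap the first step of forward selection: which variable wins?
freq = [0] * p
B = 200
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]
    Xb = [X[i] for i in idx]; yb = [y[i] for i in idx]
    scores = [abs_corr([r[j] for r in Xb], yb) for j in range(p)]
    freq[scores.index(max(scores))] += 1
```

With a genuine signal the informative variable dominates the selection frequencies; on pure-noise data the frequencies scatter across variables, which is the instability behind the overfitting the study quantifies.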
Shoham, Naama; Gefen, Amit
2011-10-01
3T3-L1 preadipocytes are often being used in research of adipose-related diseases such as obesity, insulin resistance, and hyperlipidemia. We developed a stochastic model that simulated differentiation of four 3T3-L1 culture conditions distinct by the insulin concentration in the differentiation medium (2.5, 5, 7.5, or 10 μg/mL). The model simulated culture behavior and the accumulation of lipid droplets in the maturing cells from the day of induction of differentiation through 28 days after that. The cellular processes including cell adhesion, mitosis, growing after undergoing mitosis, commitment to the adipocyte lineage, and apoptosis were referred to as stochastic events in the modeling. By minimizing the error between our model and experimental results, we found that the probability for becoming committed to the adipocyte lineage in a single division and the probability for growing after undergoing mitosis were 0.02 and 0.8, respectively, regardless of the insulin concentration. The probability for undergoing mitosis was equal to 0.2 and 0.4 in cultures that had insulin concentrations of 2.5 and 5-10 μg/mL in the differentiation medium, respectively; hence the insulin concentration affected the probability for mitosis in the 3T3-L1 cells. The model and resulted probabilities now allow quantitative and visual predictions of adipogenesis in 3T3-L1 cultures, toward computational design of cell culturing protocols.
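The reported event probabilities can be plugged into a toy simulation. This sketch tracks only commitment events per step and ignores population growth, apoptosis, and adhesion, so it is a simplified illustration of the model class, not a reimplementation:

```python
import random

# Event probabilities reported in the abstract
P_COMMIT = 0.02    # commitment to the adipocyte lineage at a division
P_GROW = 0.8       # growth after undergoing mitosis (unused in this sketch)
P_MITOSIS = {2.5: 0.2, 5: 0.4, 7.5: 0.4, 10: 0.4}  # by insulin conc. (ug/mL)

def simulate(n_cells=1000, steps=28, insulin=5, seed=1):
    """Toy stochastic simulation: each step a cell may divide, and each
    division may commit it to the adipocyte lineage."""
    rng = random.Random(seed)
    committed, uncommitted = 0, n_cells
    for _ in range(steps):
        newly = 0
        for _ in range(uncommitted):
            if rng.random() < P_MITOSIS[insulin] and rng.random() < P_COMMIT:
                newly += 1
        committed += newly
        uncommitted -= newly
    return committed

c5 = simulate(insulin=5, seed=1)
c25 = simulate(insulin=2.5, seed=1)
```

Raising insulin from 2.5 to 5 µg/mL doubles the per-step mitosis probability and therefore roughly doubles the commitment events, mirroring the insulin effect reported above.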
Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model
International Nuclear Information System (INIS)
Schindler, R.E.
1996-09-01
The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
Upgrading Probability via Fractions of Events
Directory of Open Access Journals (Sweden)
Frič Roman
2016-08-01
Full Text Available The influence of “Grundbegriffe” by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory “calling for” an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures and observables, i.e., dual maps to random variables) have very different “mathematical nature”. Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events, elementary category theory, and covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing “fractions” of classical random events, and we upgrade the notions of probability measure and random variable.
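The Łukasiewicz operations underlying the proposed upgrade are elementary; a sketch showing how they extend the Boolean connectives to [0, 1]-valued events:

```python
def l_and(a, b):
    """Lukasiewicz strong conjunction (t-norm)."""
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    """Lukasiewicz strong disjunction (t-conorm)."""
    return min(1.0, a + b)

def l_not(a):
    """Lukasiewicz negation."""
    return 1.0 - a
```

On crisp 0/1 events the three operations reduce exactly to Boolean AND, OR, and NOT, which is the sense in which classical probability embeds as a special case.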
Jung, Yoon Suk; Park, Chan Hyuk; Kim, Nam Hee; Lee, Mi Yeon; Park, Dong Il
2017-09-01
The incidence of colorectal cancer is decreasing in adults aged ≥50 years and increasing in those aged <50 years. This study developed and validated predicted probability models for advanced colorectal neoplasia (ACRN) in a population aged 30-49 years. Of 96,235 participants, 57,635 and 38,600 were included in the derivation and validation cohorts, respectively. The predicted probability model considered age, sex, body mass index, family history of colorectal cancer, and smoking habits, as follows: Y_ACRN = -8.755 + 0.080·X_age - 0.055·X_male + 0.041·X_BMI + 0.200·X_family_history_of_CRC + 0.218·X_former_smoker + 0.644·X_current_smoker. The optimal cutoff value for the predicted probability of ACRN by Youden index was 1.14%. The area under the receiver-operating characteristic curve (AUROC) values of our model for ACRN were higher than those of the previously established Asia-Pacific Colorectal Screening (APCS), Korean Colorectal Screening (KCS), and Kaminski's scoring models [AUROC (95% confidence interval): model in the current study, 0.673 (0.648-0.697); vs. APCS, 0.588 (0.564-0.611)]. Our predicted probability model can assess the risk of ACRN more accurately than existing models, including the APCS, KCS, and Kaminski's scoring models.
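The published linear predictor can be evaluated directly. Interpreting Y_ACRN through a logistic link is our assumption (the abstract reports the predictor and a probability cutoff but not the link function explicitly), and the example patient is hypothetical:

```python
import math

def acrn_linear_predictor(age, male, bmi, family_crc, former, current):
    """Linear predictor from the published model; indicator arguments
    (male, family_crc, former, current) are 0/1."""
    return (-8.755 + 0.080 * age - 0.055 * male + 0.041 * bmi
            + 0.200 * family_crc + 0.218 * former + 0.644 * current)

def acrn_probability(*args):
    y = acrn_linear_predictor(*args)
    return 1.0 / (1.0 + math.exp(-y))  # logistic link (our assumption)

# Hypothetical 45-year-old male current smoker, BMI 26, no family history
risk = acrn_probability(45, 1, 26, 0, 0, 1)
```

The resulting risk of about 2.9% exceeds the 1.14% Youden cutoff, so such a patient would screen positive under this model.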
Lee, Tsair-Fwu; Chao, Pei-Ju; Chang, Liyun; Ting, Hui-Min; Huang, Yu-Jie
2015-01-01
Symptomatic radiation pneumonitis (SRP), which decreases quality of life (QoL), is the most common pulmonary complication in patients receiving breast irradiation. If it occurs, acute SRP usually develops 4-12 weeks after completion of radiotherapy and presents as a dry cough, dyspnea and low-grade fever. If the incidence of SRP is reduced, not only the QoL but also the compliance of breast cancer patients may be improved. Therefore, we investigated the incidence of SRP in breast cancer patients after hybrid intensity modulated radiotherapy (IMRT) to find the risk factors, which may have important effects on the risk of radiation-induced complications. In total, 93 patients with breast cancer were evaluated. The final endpoint for acute SRP was defined as those who had density changes together with symptoms, as measured using computed tomography. The risk factors for a multivariate normal tissue complication probability model of SRP were determined using the least absolute shrinkage and selection operator (LASSO) technique. Five risk factors were selected using LASSO: the percentage of the ipsilateral lung volume that received more than 20 Gy (IV20), energy, age, body mass index (BMI) and T stage. Positive associations were demonstrated among the incidence of SRP, IV20, and patient age. Energy, BMI and T stage showed a negative association with the incidence of SRP. Our analyses indicate that the risk of SRP following hybrid IMRT in elderly or low-BMI breast cancer patients is increased once the percentage of the ipsilateral lung volume receiving more than 20 Gy is controlled below a limitation. We suggest defining a dose-volume percentage constraint of IV20 < 37% (or AIV20 < 310 cc) for the irradiated ipsilateral lung in radiation therapy treatment planning to maintain the incidence of SRP below 20%, and paying attention to the sequelae especially in elderly or low-BMI breast cancer patients. (AIV20: the absolute ipsilateral lung volume that received more than 20 Gy, in cc.)
Setting Generality of Peer Modeling in Children with Autism.
Carr, Edward G.; Darcy, Michael
1990-01-01
Four preschool children with autism played "Follow-the-Leader," in which a normal peer demonstrated and physically prompted a variety of actions and object manipulations that defined the activity. Following training, all four subjects generalized their imitative skill to a new setting involving new actions and object manipulations. (Author/JDD)
Formal Modeling in a Commercial Setting: A Case Study
Wong, A.; Chechik, M.
1999-01-01
This paper describes a case study conducted in collaboration with Nortel to demonstrate the feasibility of applying formal modeling techniques to telecommunication systems. A formal description language, SDL, was chosen by our qualitative CASE tool evaluation to model a multimedia-messaging system described by an 80-page natural language specification. Our model was used to identify errors in the software requirements document and to derive test suites, shadowing the existing development proc...
Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu
2017-11-01
Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintaining spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of the myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management of MI patients.
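The posterior probability that the coupled level set regularizes comes from a standard Gaussian-mixture E-step. A one-dimensional sketch with invented intensity parameters (not values fitted to DE-MRI data):

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_probs(x, weights, mus, sigmas):
    """Posterior (responsibility) of each mixture component for an
    intensity value x, via Bayes' rule over the weighted components."""
    joint = [w * gaussian_pdf(x, m, s)
             for w, m, s in zip(weights, mus, sigmas)]
    z = sum(joint)
    return [j / z for j in joint]

# Two-component myocardium model: healthy (darker) vs infarct (brighter)
post = posterior_probs(140.0, [0.7, 0.3], [100.0, 180.0], [20.0, 25.0])
```

In the full method this per-pixel posterior field is what the CLS then smooths spatially.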
Directory of Open Access Journals (Sweden)
Olariu E
2017-09-01
Full Text Available Elena Olariu,1 Kevin K Cadwell,1 Elizabeth Hancock,1 David Trueman,1 Helene Chevrou-Severac2 1PHMR Ltd, London, UK; 2Takeda Pharmaceuticals International AG, Zurich, Switzerland Objective: Although Markov cohort models represent one of the most common forms of decision-analytic models used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. Methods: A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Results: Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research taskforce guidance. This provided limited guidance on the use of rates and probabilities. Conclusions: There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost
Paired fuzzy sets and other opposite-based models
DEFF Research Database (Denmark)
Montero, Javier; Gómez, Daniel; Tinguaro Rodríguez, J.
2016-01-01
In this paper we stress the relevance of those fuzzy models that impose a couple of simultaneous views in order to represent concepts. In particular, we point out that the basic model to start with should contain at least two somehow opposite valuations plus a number of neutral concepts that are generated from the semantic relationship between those two opposites. Such a basic model should be distinguished from some other similar approaches that can be found in the literature, and that may bring some difficulties in intuition, partially because of their denomination. The general term “paired fuzzy...
International Nuclear Information System (INIS)
Xu ZhiYong; Liang Shixiong; Zhu Ji; Zhu Xiaodong; Zhao Jiandong; Lu Haijie; Yang Yunli; Chen Long; Wang Anyu; Fu Xiaolong; Jiang Guoliang
2006-01-01
Purpose: To describe the probability of radiation-induced liver disease (RILD) by application of the Lyman-Kutcher-Burman normal tissue complication probability (NTCP) model for primary liver carcinoma (PLC) treated with hypofractionated three-dimensional conformal radiotherapy (3D-CRT). Methods and Materials: A total of 109 PLC patients treated by 3D-CRT were followed for RILD. Of these patients, 93 were in liver cirrhosis of Child-Pugh Grade A, and 16 were in Child-Pugh Grade B. The Michigan NTCP model was used to predict the probability of RILD, and then the modified Lyman NTCP model was generated for Child-Pugh A and Child-Pugh B patients by maximum-likelihood analysis. Results: Of all patients, 17 developed RILD, of whom 8 were of Child-Pugh Grade A and 9 were of Child-Pugh Grade B. The prediction of RILD by the Michigan model was underestimated for PLC patients. The modified n, m, and TD50(1) were 1.1, 0.28, and 40.5 Gy and 0.7, 0.43, and 23 Gy for patients with Child-Pugh A and B, respectively, which yielded better estimations of RILD probability. The hepatic tolerable doses (TD5) would be MDTNL of 21 Gy and 6 Gy, respectively, for Child-Pugh A and B patients. Conclusions: The Michigan model was probably not fit to predict RILD in PLC patients. A modified Lyman NTCP model for RILD was recommended.
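The Lyman NTCP curve behind these fits is compact enough to state in code. In this sketch we treat the 40.5 Gy Child-Pugh A figure as the TD50 parameter and take an EUD computed elsewhere; the volume parameter n enters only through the EUD reduction, which is omitted here:

```python
import math

def lkb_ntcp(eud, td50, m):
    """Lyman NTCP: Phi((EUD - TD50) / (m * TD50)), with Phi the standard
    normal CDF. For uniform whole-organ irradiation EUD equals the dose."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Child-Pugh A parameters from the abstract: m = 0.28, TD50 = 40.5 Gy
p = lkb_ntcp(eud=30.0, td50=40.5, m=0.28)
```

The slope parameter m controls how steeply complication probability rises around TD50, which is why the shallower Child-Pugh B fit (m = 0.43, TD50 = 23 Gy) predicts substantial risk at much lower doses.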
Modeling a set of heavy oil aqueous pyrolysis experiments
Energy Technology Data Exchange (ETDEWEB)
Thorsness, C.B.; Reynolds, J.G.
1996-11-01
Aqueous pyrolysis experiments, aimed at mild upgrading of heavy oil, were analyzed using various computer models. The primary focus of the analysis was the pressure history of the closed autoclave reactors obtained during the heating of the autoclave to desired reaction temperatures. The models used included a means of estimating nonideal behavior of primary components with regard to vapor-liquid equilibrium. The modeling indicated that to match measured autoclave pressures, which often were well below the vapor pressure of water at a given temperature, it was necessary to incorporate water solubility in the oil phase and an activity model for the water in the oil phase which reduced its fugacity below that of pure water. Analysis also indicated that the mild to moderate upgrading of the oil which occurred in experiments that reached 400°C or more using an Fe(III) 2-ethylhexanoate catalyst could be reasonably well characterized by a simple first-order rate constant of 1.7 × 10⁸ exp(-20000/T) s⁻¹. Both gas production and API gravity increase were characterized by this rate constant. Models were able to match the complete pressure history of the autoclave experiments fairly well with relatively simple equilibria models. However, a consistently lower than measured buildup in pressure at peak temperatures was noted in the model calculations. This phenomenon was tentatively attributed to an increase in the amount of water entering the vapor phase caused by a change in its activity in the oil phase.
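The reported rate constant implies strongly temperature-dependent conversion. A quick sketch of first-order conversion using the abstract's expression (temperature in kelvin; the one-hour hold time is illustrative):

```python
import math

def rate_constant(T_kelvin):
    """First-order rate constant from the abstract:
    k = 1.7e8 * exp(-20000 / T)  [1/s], T in kelvin."""
    return 1.7e8 * math.exp(-20000.0 / T_kelvin)

def conversion(T_kelvin, hours):
    """Fraction converted after the given hold time at constant
    temperature, assuming simple first-order kinetics: 1 - exp(-k t)."""
    k = rate_constant(T_kelvin)
    return 1.0 - math.exp(-k * hours * 3600.0)

x = conversion(673.15, 1.0)  # roughly 400 degrees C for one hour
```

At roughly 400 °C the expression predicts under ten percent conversion per hour, consistent with the mild-to-moderate upgrading described.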
Paired fuzzy sets and other opposite-based models
DEFF Research Database (Denmark)
Montero, Javier; Gómez, Daniel; Tinguaro Rodríguez, J.
2016-01-01
In this paper we stress the relevance of those fuzzy models that impose a couple of simultaneous views in order to represent concepts. In particular, we point out that the basic model to start with should contain at least two somehow opposite valuations plus a number of neutral concepts that are generated from the semantic relationship between those two opposites. The general term “paired fuzzy sets” is then proposed together with the notion of sub-antonym, to be considered as a particular case of opposition relationship.
WE-AB-202-10: Modelling Individual Tumor-Specific Control Probability for Hypoxia in Rectal Cancer
Energy Technology Data Exchange (ETDEWEB)
Warren, S; Warren, DR; Wilson, JM; Muirhead, R; Hawkins, MA; Maughan, T; Partridge, M [University of Oxford, Oxford, Oxfordshire (United Kingdom)
2016-06-15
Purpose: To investigate hypoxia-guided dose-boosting for increased tumour control and improved normal tissue sparing using FMISO-PET images. Methods: Individual tumor-specific control probability (iTSCP) was calculated using a modified linear-quadratic model with rectal-specific radiosensitivity parameters for three limiting-case assumptions of the hypoxia / FMISO uptake relationship. ¹⁸FMISO-PET images from 2 patients (T3N0M0) from the RHYTHM trial (Investigating Hypoxia in Rectal Tumours NCT02157246) were chosen to delineate a hypoxic region (GTV-MISO defined as tumor-to-muscle ratio > 1.3) within the anatomical GTV. Three VMAT treatment plans were created in Eclipse (Varian): STANDARD (45Gy / 25 fractions to PTV4500); BOOST-GTV (simultaneous integrated boost of 60Gy / 25fr to GTV +0.5cm) and BOOST-MISO (60Gy / 25fr to GTV-MISO+0.5cm). GTV mean dose (in EQD2), iTSCP and normal tissue dose-volume metrics (small bowel, bladder, anus, and femoral heads) were recorded. Results: Patient A showed small hypoxic volume (15.8% of GTV) and Patient B moderate hypoxic volume (40.2% of GTV). Dose escalation to 60Gy was achievable, and doses to femoral heads and small bowel in BOOST plans were comparable to STANDARD plans. For patient A, a reduced maximum bladder dose was observed in BOOST-MISO compared to BOOST-GTV (D0.1cc 49.2Gy vs 54.0Gy). For patient B, a smaller high dose volume was observed for the anus region in BOOST-MISO compared to BOOST-GTV (V55Gy 19.9% vs 100%), which could potentially reduce symptoms of fecal incontinence. For BOOST-MISO, the largest iTSCPs (A: 95.5% / B: 90.0%) assumed local correlation between FMISO uptake and hypoxia, and approached iTSCP values seen for BOOST-GTV (A: 96.1% / B: 90.5%). Conclusion: Hypoxia-guided dose-boosting is predicted to improve local control in rectal tumors when FMISO is spatially correlated to hypoxia, and to reduce dose to organs-at-risk compared to boosting the whole GTV. This could lead to organ
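A Poisson-style tumour control probability with linear-quadratic survival shows mechanically why hypoxia depresses control and why dose-boosting can recover it. This is a generic sketch with invented parameters and a simple OER scaling of radiosensitivity, not the trial's modified model:

```python
import math

def tcp_poisson(dose_per_fx, n_fx, alpha, beta, n_clonogens, oer=1.0):
    """Poisson TCP under the linear-quadratic model. Hypoxia is folded in
    by dividing the radiosensitivity parameters by an oxygen-enhancement
    ratio (OER); a simple limiting-case assumption, not the authors'
    exact formulation."""
    a, b = alpha / oer, beta / (oer ** 2)
    total_dose = n_fx * dose_per_fx
    ln_sf = -(a * total_dose + b * dose_per_fx * total_dose)  # log cell survival
    surviving = n_clonogens * math.exp(ln_sf)                 # expected survivors
    return math.exp(-surviving)                               # P(no survivors)

# 60 Gy in 25 fractions (the boost schedule), illustrative parameters
well_oxygenated = tcp_poisson(2.4, 25, alpha=0.2, beta=0.05, n_clonogens=1e7)
hypoxic = tcp_poisson(2.4, 25, alpha=0.2, beta=0.05, n_clonogens=1e7, oer=1.5)
```

Even a modest OER collapses the control probability at a fixed dose, which is the rationale for concentrating boost dose on the FMISO-avid subvolume rather than the whole GTV.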
Directory of Open Access Journals (Sweden)
Tsair-Fwu Lee
Full Text Available Symptomatic radiation pneumonitis (SRP), which decreases quality of life (QoL), is the most common pulmonary complication in patients receiving breast irradiation. If it occurs, acute SRP usually develops 4-12 weeks after completion of radiotherapy and presents as a dry cough, dyspnea and low-grade fever. If the incidence of SRP is reduced, not only the QoL but also the compliance of breast cancer patients may be improved. Therefore, we investigated the incidence of SRP in breast cancer patients after hybrid intensity modulated radiotherapy (IMRT) to find the risk factors, which may have important effects on the risk of radiation-induced complications. In total, 93 patients with breast cancer were evaluated. The final endpoint for acute SRP was defined as those who had density changes together with symptoms, as measured using computed tomography. The risk factors for a multivariate normal tissue complication probability model of SRP were determined using the least absolute shrinkage and selection operator (LASSO) technique. Five risk factors were selected using LASSO: the percentage of the ipsilateral lung volume that received more than 20 Gy (IV20), energy, age, body mass index (BMI) and T stage. Positive associations were demonstrated among the incidence of SRP, IV20, and patient age. Energy, BMI and T stage showed a negative association with the incidence of SRP. Our analyses indicate that the risk of SRP following hybrid IMRT in elderly or low-BMI breast cancer patients is increased once the percentage of the ipsilateral lung volume receiving more than 20 Gy is controlled below a limitation. We suggest to define a dose-volume percentage constraint of IV20 < 37% (or AIV20 < 310 cc) for the irradiated ipsilateral lung in radiation therapy treatment planning to maintain the incidence of SRP below 20%, and pay attention to the sequelae especially in elderly or low-BMI breast cancer patients. (AIV20: the absolute ipsilateral lung volume that received more than
Directory of Open Access Journals (Sweden)
H. Zhong
2013-07-01
Full Text Available The Lower Rhine Delta, a transitional area between the River Rhine and Meuse and the North Sea, is at risk of flooding induced by infrequent events of a storm surge or upstream flooding, or by more infrequent events of a combination of both. A joint probability analysis of the astronomical tide, the wind induced storm surge, the Rhine flow and the Meuse flow at the boundaries is established in order to produce the joint probability distribution of potential flood events. Three individual joint probability distributions are established corresponding to three potential flooding causes: storm surges and normal Rhine discharges, normal sea levels and high Rhine discharges, and storm surges and high Rhine discharges. For each category, its corresponding joint probability distribution is applied, in order to stochastically simulate a large number of scenarios. These scenarios can be used as inputs to a deterministic 1-D hydrodynamic model in order to estimate the high water level frequency curves at the transitional locations. The results present the exceedance probability of the present design water level for the economically important cities of Rotterdam and Dordrecht. The calculated exceedance probability is evaluated and compared to the governmental norm. Moreover, the impact of climate change on the high water level frequency curves is quantified for the year 2050 in order to assist in decisions regarding the adaptation of the operational water management system and the flood defense system.
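The stochastic-scenario generation step can be mimicked with a Gaussian copula. Distributions, parameters, and thresholds below are invented for illustration and are not those fitted for the Lower Rhine Delta:

```python
import math, random

def sample_surge_discharge(n, rho=0.4, seed=0):
    """Draw correlated (storm surge, river discharge) pairs via a Gaussian
    copula with lognormal margins; illustrative parameters only."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        surge = math.exp(0.0 + 0.5 * z1)       # metres
        discharge = math.exp(8.0 + 0.3 * z2)   # m^3/s
        out.append((surge, discharge))
    return out

pairs = sample_surge_discharge(50_000)
# Joint exceedance probability of a compound event
p_joint = sum(s > 2.0 and q > 6000.0 for s, q in pairs) / len(pairs)
```

Scenario pairs generated this way would then drive the deterministic 1-D hydrodynamic model to build the high-water-level frequency curves at the transitional locations.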
Dida, Nagasa; Birhanu, Zewdie; Gerbaba, Mulusew; Tilahun, Dejen; Morankar, Sudhakar
2014-06-01
Although antenatal care and institutional delivery are effective means for reducing maternal morbidity and mortality, the probability of giving birth at health institutions among antenatal care attendants has not been modeled in Ethiopia. Therefore, the objective of this study was to model predictors of giving birth at health institutions among expectant mothers following antenatal care. A facility-based cross-sectional study design was conducted among 322 consecutively selected mothers who were following antenatal care in two districts of West Shewa Zone, Oromia Regional State, Ethiopia. Participants were proportionally recruited from six health institutions. The data were analyzed using SPSS version 17.0. Multivariable logistic regression was employed to develop the prediction model. The final regression model had good discrimination power (89.2%), optimum sensitivity (89.0%) and specificity (80.0%) to predict the probability of giving birth at health institutions. Accordingly, self-efficacy (beta=0.41), perceived barrier (beta=-0.31) and perceived susceptibility (beta=0.29) significantly predicted the probability of giving birth at health institutions. The present study showed that the logistic regression model predicted the probability of giving birth at health institutions and identified significant predictors which health care providers should take into account in the promotion of institutional delivery.
Measurement uncertainty and probability
National Research Council Canada - National Science Library
Willink, Robin
2013-01-01
... and probability models 3.4 Inference and confidence 3.5 Two central limit theorems 3.6 The Monte Carlo method and process simulation 4 The randomization of systematic errors 4.1 The Working Group of 1980 4.2 From classical repetition to practica...
Model for setting priority construction project objectives aligned with ...
African Journals Online (AJOL)
A comprehensive model based on priority project objectives aligned with monetary incentives, and agreed upon by built environment stakeholders was developed. A web survey was adopted to send out a questionnaire to nationwide participants, including contractors, quantity surveyors, project managers, architects, and ...
Leader Succession: A Model and Review for School Settings.
Miskel, Cecil; Cosgrove, Dorothy
Recent research casts doubt on the commonly held notions that administrators affect student learning through instructional leadership and that changing administrators will improve school performance. To help construct a model for examining the process of leader succession that specifies a number of major school process and outcome variables…
Setting up measuring campaigns for integrated wastewater modelling
DEFF Research Database (Denmark)
Vanrolleghem, P.A.; Schilling, W.; Rauch, Wolfgang
1999-01-01
This paper focuses on the calibration/confirmation steps within a suggested 11-step procedure for the analysis, planning and implementation of integrated urban wastewater management systems. Based on ample experience obtained in comprehensive investigations throughout Europe...
Increasing Free Throw Accuracy through Behavior Modeling and Goal Setting.
Erffmeyer, Elizabeth S.
A two-year behavior-modeling training program focusing on attention processes, retention processes, motor reproduction, and motivation processes was implemented to increase the accuracy of free throw shooting for a varsity intercollegiate women's basketball team. The training included specific learning keys, progressive relaxation, mental…
So, Tak-Shing Harry; Peng, Chao-Ying Joanne
This study compared the accuracy of predicting two-group membership obtained from K-means clustering with those derived from linear probability modeling, linear discriminant function, and logistic regression under various data properties. Multivariate normally distributed populations were simulated based on combinations of population proportions,…
SET-MM – A Software Evaluation Technology Maturity Model
García-Castro, Raúl
2011-01-01
The application of software evaluation technologies in different research fields to verify and validate research is a key factor in the progressive evolution of those fields. Nowadays, however, to have a clear picture of the maturity of the technologies used in evaluations or to know which steps to follow in order to improve the maturity of such technologies is not easy. This paper describes a Software Evaluation Technology Maturity Model that can be used to assess software evaluation tech...
Single High Fidelity Geometric Data Sets for LCM - Model Requirements
2006-11-01
triangles (.raw) to the native triangular facet file (.facet). The software vendors recommend the use of McNeil and Associates’ Rhinoceros 3D for all...surface modeling and export. Rhinoceros has the capability and precision to create highly detailed 3D surface geometry suitable for radar cross section... white before ending up at blue as the temperature increases [27]. IR radiation was discovered in 1800 but its application is still limited in
Optimization models using fuzzy sets and possibility theory
Orlovski, S
1987-01-01
Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But also in other areas such as engineering design, regional policy, logistics and many others, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. the solutions were considered to be either feasible or unfeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeller to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be the solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, uncertainty due to inconsistent or incomplete evidence, if natural language has to be...
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
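The basis-set-expansion step can be illustrated on a toy ensemble of time series; the synthetic model, parameter ranges and 99% variance cutoff below are illustrative assumptions, not the La Frasse setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble standing in for landslide runs: 30 "simulations", each an
# n=200-step displacement series driven by two hypothetical parameters.
t = np.linspace(0, 1, 200)
params = rng.uniform(0.5, 2.0, size=(30, 2))
Y = np.array([a * t + 0.1 * b * np.sin(6 * t) for a, b in params])  # (30, 200)

# Basis set expansion: centre the ensemble, then PCA via SVD of the
# run-by-time matrix.
Ymean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ymean, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance fraction per component

# Keep the few components carrying most of the variance; each run is now a
# short coefficient vector instead of a 200-dimensional output.
k = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
scores = U[:, :k] * s[:k]                # (30, k) reduced representation
```

In the full methodology, a meta-model would then be fitted to each column of `scores`, and Sobol' indices estimated component by component.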
High resolution tsunami modelling for the evaluation of potential risk areas in Setúbal (Portugal)
Directory of Open Access Journals (Sweden)
J. Ribeiro
2011-08-01
Full Text Available The use of high resolution hydrodynamic modelling to simulate the potential effects of tsunami events can provide relevant information about the most probable inundation areas. Moreover, the consideration of complementary data such as the type of buildings, location of priority equipment and type of roads enables mapping of the most vulnerable zones, computation of the expected damage to man-made structures, constrained definition of rescue areas and escape routes, adaptation of emergency plans and proper evaluation of the vulnerability associated with different areas and/or equipment.
Such an approach was used to evaluate the specific risks associated with a potential occurrence of a tsunami event in the region of Setúbal (Portugal), which was one of the areas most seriously affected by the 1755 tsunami.
In order to perform an evaluation of the hazard associated with the occurrence of a similar event, high resolution wave propagation simulations were performed considering different potential earthquake sources with different magnitudes. Based on these simulations, detailed inundation maps associated with the different events were produced. These results were combined with the available information on the vulnerability of the local infrastructures (building types, road and street characteristics, priority buildings) in order to constrain the production of high-scale potential damage maps, escape route maps and emergency route maps.
Modeling Non-linear Ocean Wave Amplification in Coastal Settings
Harrington, J. P.; Cox, R.; Brennan, J.; Clancy, C.; Herterich, J.; Dias, F.
2016-12-01
Coastal boulder deposits occur in many locations worldwide, along high-energy coastlines. They contain clasts with masses >100 t in some cases, deposited many m above high water and many tens of m inland, often at the top of steep cliffs. The clasts are moved by storm waves, despite being at elevations and inshore distances that should be unreachable by recorded sea states. The question is, therefore, how are storm waves amplified to the extent needed to transport megagravel inshore? As climate changes, with the risk of increased storminess, it is important to understand this issue, as it is central to understanding inland transmission of fluid forces during storm events. Numerical modeling is a powerful technique for exploring this complex problem. We used a conformal mapping solution to Euler's equations to explore runup of 2D wave trains against a vertical barrier (simulating a coastal cliff). Previous research showed that modeled wave trains passing over flat bathymetry experience vertical runup up to 6 times the initial wave amplitude for both short- (3 times water depth) and long- (125 times depth) wavelength waves. We increased the model complexity by including a bathymetric step, causing an abrupt depth decrease before the cliff. We found that the uneven bathymetry further amplified both short- and long-wavelength waves. Short-wavelength simulations were hampered by our code's limitations in solving Euler's equations for steep waves, and crashed before reaching maximum runups: ongoing work focuses on solving the computational problems. These problems did not affect the long-wavelength simulations, however, which returned maximum runup values up to 10 times initial amplitude. The key message is that bathymetric effects can drive large wave-height amplifications. This suggests that enhanced runup for long-wavelength waves caused by variable bathymetry could be a key factor in cases where ocean waves overtop steep cliffs and transport boulders well above high water.
Magnetic field modeling with a set of individual localized coils.
Juchem, Christoph; Nixon, Terence W; McIntyre, Scott; Rothman, Douglas L; de Graaf, Robin A
2010-06-01
A set of generic, circular individual coils is shown to be capable of generating highly complex magnetic field distributions in a flexible fashion. Arbitrarily oriented linear field gradients can be generated in three-dimensional as well as sliced volumes at amplitudes that allow imaging applications. The multi-coil approach permits the simultaneous generation of linear MRI encoding fields and complex shim fields by the same setup, thereby reducing system complexity. The choice of the sensitive volume over which the magnetic fields are optimized remains temporally and spatially variable at all times. The restriction of the field synthesis to experimentally relevant, smaller volumes such as single slices directly translates into improved efficiency, i.e. higher magnetic field amplitudes and/or reduced coil currents. For applications like arterial spin labeling, signal spoiling and diffusion weighting, perfect linearity of the gradient fields is not required and reduced demands on accuracy can also be readily translated into improved efficiency. The first experimental realization was achieved for mouse head MRI with 24 coils that were mounted on the surface of a cylindrical former. Oblique linear field gradients of 20 kHz/cm (47 mT/m) were generated with a maximum current of 1.4A which allowed radial imaging of a mouse head. The potential of the new approach for generating arbitrary magnetic field shapes is demonstrated by synthesizing the more complex, higher order spherical harmonic magnetic field distributions X2-Y2, Z2 and Z2X. The new multi-coil approach provides the framework for the integration of conventional imaging and shim coils into a single multi-coil system in which shape, strength, accuracy and spatial coverage of the magnetic field can be specifically optimized for the application at hand. (c) 2010 Elsevier Inc. All rights reserved.
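The field-synthesis idea reduces to a linear least-squares problem: each coil contributes a known field map per unit current, and the currents are chosen to match a target field over the region of interest. The sketch below substitutes a crude 1/(1+d²) surrogate for the per-coil field instead of a Biot-Savart computation; the geometry and numbers are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# 24 hypothetical coils on a cylinder of radius 1.8 (three rings of eight).
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
coils = np.array([[1.8 * np.cos(a), 1.8 * np.sin(a), z]
                  for z in (-1.0, 0.0, 1.0) for a in angles])

# Target points inside the volume of interest and a desired oblique gradient.
pts = rng.uniform(-0.5, 0.5, size=(500, 3))
b_target = pts @ np.array([1.0, 0.5, -0.3])

# Surrogate per-unit-current field map: smooth decay with distance to coil.
d2 = ((pts[:, None, :] - coils[None, :, :]) ** 2).sum(axis=2)
A = 1.0 / (1.0 + d2)                       # (500 points, 24 coils)

# Least-squares currents minimizing ||A @ currents - b_target||.
currents, *_ = np.linalg.lstsq(A, b_target, rcond=None)
rel_err = np.linalg.norm(A @ currents - b_target) / np.linalg.norm(b_target)
```

Restricting `pts` to a single slice, as the abstract notes, shrinks the system and typically lets the same coils reach the target field with lower currents.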
Implicit level set algorithms for modelling hydraulic fracture propagation.
Peirce, A
2016-10-13
Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).
International Nuclear Information System (INIS)
Tierney, M.S.
1990-12-01
A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)
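The Maximum Entropy Formalism picks the least-informative distribution consistent with the experts' constraints. A minimal sketch, assuming the constraints are just a range [a, b] plus a subjective mean m (a simplification of the WIPP procedure's constraint set): the maxent density is then p(x) ∝ exp(λx), with λ solved numerically.

```python
import math

def maxent_lambda(a, b, m, iters=200):
    """Solve for lambda such that the maximum-entropy density p(x) ~ exp(lambda*x)
    on [a, b] has mean m (a < m < b); bisection on the monotone mean function."""
    def mean(lam):
        if abs(lam) < 1e-12:              # lambda -> 0: uniform distribution
            return (a + b) / 2.0
        ea, eb = math.exp(lam * a), math.exp(lam * b)
        return (b * eb - a * ea) / (eb - ea) - 1.0 / lam
    lo, hi = -20.0, 20.0                  # adequate bracket for moderate |a|, |b|
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A subjective range [-14, -11] (say, a log10 permeability) with expert mean
# -12: the positive lambda skews mass toward the upper end of the interval.
lam = maxent_lambda(-14.0, -11.0, -12.0)
```

With m at the midpoint the solver returns λ ≈ 0, recovering the uniform distribution, the familiar range-only maximum-entropy result.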
Level set methods for modelling field evaporation in atom probe.
Haley, Daniel; Moody, Michael P; Smith, George D W
2013-12-01
Atom probe is a nanoscale technique for creating three-dimensional spatially and chemically resolved point datasets, primarily of metallic or semiconductor materials. While atom probe can achieve local high-level resolution, the spatial coherence of the technique is highly dependent upon the evaporative physics in the material and can often result in large geometric distortions in experimental results. The distortions originate from uncertainties in the projection function between the field evaporating specimen and the ion detector. Here we explore the possibility of continuum numerical approximations to the evaporative behavior during an atom probe experiment, and the subsequent propagation of ions to the detector, with particular emphasis placed on the solution of axisymmetric systems, such as isolated particles and multilayer systems. Ultimately, this method may prove critical in rapid modeling of tip shape evolution in atom probe tomography, which itself is a key factor in the rapid generation of spatially accurate reconstructions in atom probe datasets.
DEFF Research Database (Denmark)
Jepsen, J. U.; Baveco, J. M.; Topping, C. J.
2004-01-01
preferences of the modeller, rather than by a critical evaluation of model performance. We present a comparison of three common spatial simulation approaches (patch-based incidence-function model (IFM), individual-based movement model (IBMM), individual-based population model including detailed behaviour...
Evolution of Autocatalytic Sets in Computational Models of Chemical Reaction Networks.
Hordijk, Wim
2016-06-01
Several computational models of chemical reaction networks have been presented in the literature in the past, showing the appearance and (potential) evolution of autocatalytic sets. However, the notion of autocatalytic sets has been defined differently in different modeling contexts, each one having some shortcoming or limitation. Here, we review four such models and definitions, and then formally describe and analyze them in the context of a mathematical framework for studying autocatalytic sets known as RAF theory. The main results are that: (1) RAF theory can capture the various previous definitions of autocatalytic sets and is therefore more complete and general, (2) the formal framework can be used to efficiently detect and analyze autocatalytic sets in all of these different computational models, (3) autocatalytic (RAF) sets are indeed likely to appear and evolve in such models, and (4) this could have important implications for a possible metabolism-first scenario for the origin of life.
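The RAF detection the abstract refers to can be sketched as the standard pruning algorithm: alternately compute the closure of the food set and discard reactions that are unsupported or uncatalyzed, until a fixed point is reached. The toy network below is invented for illustration:

```python
def closure(food, reactions):
    """Molecules producible from the food set by repeatedly applying reactions
    whose reactants are all available (catalysis is ignored at this step)."""
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= avail and not set(products) <= avail:
                avail |= set(products)
                changed = True
    return avail

def max_raf(food, reactions):
    """Prune reactions that are uncatalyzed or unsupported until a fixed
    point: the maximal RAF set (possibly empty)."""
    R = list(reactions)
    while True:
        avail = closure(food, R)
        R2 = [r for r in R
              if set(r[0]) <= avail and any(c in avail for c in r[2])]
        if len(R2) == len(R):
            return R2
        R = R2

# Tiny toy network; each reaction = (reactants, products, catalysts).
food = {"a", "b"}
rxns = [(("a", "b"), ("ab",), ("abb",)),   # catalyzed by abb, made below
        (("ab", "b"), ("abb",), ("ab",)),  # catalyzed by its own substrate
        (("c",), ("d",), ("a",))]          # reactant c is never producible
raf = max_raf(food, rxns)                  # the third reaction is pruned
```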
Expected utility with lower probabilities
DEFF Research Database (Denmark)
Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1994-01-01
An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...
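A belief function assigns mass to subsets of outcomes; the induced lower probability (belief) and upper probability (plausibility) of an event bracket the unknown true probability. A minimal Dempster-Shafer-style sketch, with an invented mass function:

```python
def belief(event, masses):
    """Lower probability of `event`: mass of focal sets contained in it."""
    return sum(m for focal, m in masses.items() if focal <= frozenset(event))

def plausibility(event, masses):
    """Upper probability of `event`: mass of focal sets intersecting it."""
    return sum(m for focal, m in masses.items() if focal & frozenset(event))

# Invented example on outcomes {1, 2, 3}: half the evidence pins outcome 1,
# the other half is ambiguous between outcomes 2 and 3.
m = {frozenset({1}): 0.5, frozenset({2, 3}): 0.5}
bel = belief({2}, m)           # 0.0 -- no focal set lies inside {2}
pl = plausibility({2}, m)      # 0.5 -- the ambiguous mass might sit on 2
```

The gap between `bel` and `pl` is exactly the non-probabilistic uncertainty that the extended expected-utility theory has to handle.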
Gould, A R
1977-01-01
The effects of temperature on the cell cycle of Haplopappus gracilis suspension cultures were analysed by the fraction of labelled mitoses method. S phase in these cultures shows a different temperature optimum as compared to optima derived for G2 and mitosis. G1 phase has a much lower Q10 than the other cell cycle phases and shows no temperature optimum between 22 and 34 °C. These results are discussed in relation to a transition probability model of the cell cycle proposed by Smith and Martin (Proc. Natl. Acad. Sci. USA 70, 1263-1267, 1973), in which each cell has a time-independent probability of initiating the transition into another round of DNA replication and division. The implications of such a model for cell cycle analysis are discussed and a tentative model for a probabilistic transition trigger is advanced.
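The Smith-Martin transition-probability model is easy to simulate: an exponentially distributed A-state (constant exit probability per unit time) plus a deterministic B-phase. The rate and duration below are assumptions for illustration, not values fitted to Haplopappus:

```python
import random
import statistics

random.seed(7)

# Smith & Martin (1973): each cell leaves the "A-state" with a constant,
# time-independent probability per unit time (rate P_EXIT), then traverses
# S, G2 and mitosis in a fixed time B. Both numbers are assumed.
P_EXIT = 0.25   # exit rate, per hour
B = 10.0        # deterministic phase duration, hours

def cycle_time():
    return random.expovariate(P_EXIT) + B

times = [cycle_time() for _ in range(100_000)]
# Hallmark of the model: an exponential tail, so the mean cycle time exceeds
# the minimum possible time B by exactly 1/P_EXIT on average.
mean_a_state = statistics.mean(times) - B
```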
Quantum probability measures and tomographic probability densities
Amosov, GG; Man'ko, [No Value
2004-01-01
Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the
Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model
Kuznetsov, A. V.; Makaryants, G. M.
2018-01-01
Many studies have addressed gas turbine engine identification via dynamic neural network models. Errors between the model and the real object should be minimized during the identification process, yet questions about processing the training data sets of neural networks are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal were used to create the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created from these training data sets, and each network was tested against all four test data sets. As a result, 16 transient processes from the four neural networks and four test data sets were compared against the corresponding solutions of the thermodynamic model. Errors were compared across all neural networks for each test data set, yielding the error value range of each test data set. These error ranges turned out to be small, so the influence of the data set type on identification accuracy is low.
The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis
DEFF Research Database (Denmark)
Debrabant, Birgit
2017-01-01
. This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set......MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underly these methods do not enable a clear cut formulation of the competitive null hypothesis...
Wang, X; Xu, Y H; Du, Z Y; Qian, Y J; Xu, Z H; Chen, R; Shi, M H
2018-02-23
Objective: This study aims to analyze the relationship among the clinical features, radiologic characteristics and pathological diagnoses of patients with solitary pulmonary nodules (SPN), and to establish a prediction model for the probability of malignancy. Methods: Clinical data of 372 patients with solitary pulmonary nodules who underwent surgical resection with a definite postoperative pathological diagnosis were retrospectively analyzed. In these cases, we collected clinical and radiologic features including gender, age, smoking history, history of tumor, family history of cancer, location of the lesion, ground-glass opacity, maximum diameter, calcification, vessel convergence sign, vacuole sign, pleural indentation, spiculation and lobulation. The cases were divided into a modeling group (268 cases) and a validation group (104 cases). A new prediction model was established by logistic regression analysis of the data from the modeling group. The data of the validation group were then used to validate the efficiency of the new model, which was compared with three classical models (Mayo model, VA model and LiYun model). With the probability values calculated for each model on the validation group, SPSS 22.0 was used to draw receiver operating characteristic curves and assess the predictive value of the new model. Results: 112 benign SPNs and 156 malignant SPNs were included in the modeling group. Multivariable logistic regression analysis showed that gender, age, history of tumor, ground-glass opacity, maximum diameter and spiculation were independent predictors of malignancy in patients with SPN (P<0.05). The prediction model for the probability of malignancy was established as follows: p=e(x)/(1+e(x)), x=-4.8029-0.743×gender+0.057×age+1.306×history of tumor+1.305×ground-glass opacity+0.051×maximum diameter+1.043×spiculation. When the data of the validation group were entered into the four mathematical prediction models, the area under the curve of our model was 0.742, which is greater
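Plugging values into the reported equation gives the predicted probability directly. The coding of the predictors (binary variables as 1/0, gender 1 taken as male, diameter in mm) is an assumption here, since the abstract does not spell it out:

```python
import math

def malignancy_probability(gender, age, tumor_history, ggo, diameter_mm, spiculation):
    """Evaluate the abstract's logistic model. Predictor coding is assumed
    (binary predictors 1/0; gender 1 taken as male; diameter in mm)."""
    x = (-4.8029 - 0.743 * gender + 0.057 * age + 1.306 * tumor_history
         + 1.305 * ggo + 0.051 * diameter_mm + 1.043 * spiculation)
    return math.exp(x) / (1.0 + math.exp(x))

# A 65-year-old man with a 20 mm ground-glass nodule showing spiculation:
p = malignancy_probability(gender=1, age=65, tumor_history=0,
                           ggo=1, diameter_mm=20, spiculation=1)   # ~0.82
```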
Identification of probabilities.
Vitányi, Paul M B; Chater, Nick
2017-02-01
Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.
Heterogeneity in Wage Setting Behavior in a New-Keynesian Model
Eijffinger, S.C.W.; Grajales Olarte, A.; Uras, R.B.
2015-01-01
In this paper we estimate a New-Keynesian DSGE model with heterogeneity in price and wage setting behavior. In a recent study, Coibion and Gorodnichenko (2011) develop a DSGE model, in which firms follow four different types of price setting schemes: sticky prices, sticky information, rule of thumb,
Directory of Open Access Journals (Sweden)
Pieter W. G. Bots
2011-06-01
Full Text Available When hydrological models are used in support of water management decisions, stakeholders often contest these models because they perceive certain aspects to be inadequately addressed. A strongly contested model may be abandoned completely, even when stakeholders could potentially agree on the validity of part of the information it can produce. The development of a new model is costly, and the results may be contested again. We consider how existing hydrological models can be used in a policy process so as to benefit from both hydrological knowledge and the perspectives and local knowledge of stakeholders. We define a code of conduct as a set of "rules of the game" that we base on a case study of developing a water management plan for a Natura 2000 site in the Netherlands. We propose general rules for agenda management and information sharing, and more specific rules for model use and option development. These rules structure the interactions among actors, help them to explicitly acknowledge uncertainties, and prevent expertise from being neglected or overlooked. We designed the rules to favor openness, protection of core stakeholder values, the use of relevant substantive knowledge, and the momentum of the process. We expect that these rules, although developed on the basis of a water-management issue, can also be applied to support the use of existing computer models in other policy domains. As rules will shape actions only when they are constantly affirmed by actors, we expect that the rules will become less useful in an "unruly" social environment where stakeholders constantly challenge the proceedings.
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments......Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior...... modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation...
An analysis of a joint shear model for jointed media with orthogonal joint sets
International Nuclear Information System (INIS)
Koteras, J.R.
1991-10-01
This report describes a joint shear model used in conjunction with a computational model for jointed media with orthogonal joint sets. The joint shear model allows nonlinear behavior for both joint sets. Because nonlinear behavior is allowed for both joint sets, a great many cases must be considered to fully describe the joint shear behavior of the jointed medium. An extensive set of equations is required to describe the joint shear stress and slip displacements that can occur for all the various cases. This report examines possible methods for simplifying this set of equations so that the model can be implemented efficiently from a computational standpoint. The shear model must be examined carefully to obtain a computationally efficient implementation that does not lead to numerical problems. The application to fractures in rock is discussed. 5 refs., 4 figs
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
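The inflation effect is easy to reproduce by simulation. A minimal sketch with a normal loss (a location-scale family, as in the paper's class) and a plug-in 99% threshold estimated from a small sample; the sample size and nominal level are arbitrary choices:

```python
import random
import statistics

random.seed(11)

NOMINAL = 0.01    # target (nominal) failure probability
N_DATA = 30       # sample size available to estimate the loss distribution
Z_99 = 2.326      # standard-normal 99% quantile used for the plug-in threshold

def one_trial():
    # Estimate location and scale from a small sample, set the control
    # threshold at the estimated 99% point, then draw a fresh loss.
    sample = [random.gauss(0.0, 1.0) for _ in range(N_DATA)]
    threshold = statistics.mean(sample) + Z_99 * statistics.stdev(sample)
    return random.gauss(0.0, 1.0) > threshold

TRIALS = 100_000
freq = sum(one_trial() for _ in range(TRIALS)) / TRIALS
# freq comes out noticeably above the 1% nominal level: parameter error
# inflates the realized failure frequency, as the paper argues. By the
# location-scale argument, the inflation is the same whatever the true
# mean and standard deviation are.
```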
Two-phase electro-hydrodynamic flow modeling by a conservative level set model.
Lin, Yuan
2013-03-01
The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow only exist in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights about EHDs multiphase flows. During the last decade, many numerical results have been reported to provide novel and useful tools of studying the multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, which is beyond the region of theoretic prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match better with published experimental data than the theoretical prediction for large deformations. Therefore the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
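The form of such a model can be sketched in a few lines. The covariates and coefficients below are purely illustrative and are not those estimated by the authors:

```python
import math

def regen_probability(basal_area_m2ha, site_index_m, coef=(-1.2, -0.05, 0.03)):
    """Logistic model for the probability that regeneration reaches a
    specified density; coefficients b0, b1, b2 are hypothetical."""
    b0, b1, b2 = coef
    eta = b0 + b1 * basal_area_m2ha + b2 * site_index_m
    return 1.0 / (1.0 + math.exp(-eta))

p_open = regen_probability(0.0, 40.0)    # open stand: eta = 0, so p = 0.5
p_dense = regen_probability(30.0, 40.0)  # dense overstory lowers the probability
```

In estimation, the 0/1 dependent variable indicates whether observed regeneration met the specified density, and the fitted curve then returns a probability for any combination of stand covariates.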
Aldars-García, Laila; Ramos, Antonio J; Sanchis, Vicente; Marín, Sonia
2015-10-01
Human exposure to aflatoxins in foods is of great concern. The aim of this work was to use predictive mycology as a strategy to mitigate the aflatoxin burden in pistachio nuts postharvest. The probability of growth and aflatoxin B1 (AFB1) production of aflatoxigenic Aspergillus flavus, isolated from pistachio nuts, under static and non-isothermal conditions was studied. Four theoretical temperature scenarios, including temperature levels observed in pistachio nuts during shipping and storage, were used. Two types of inoculum were included: a cocktail of 25 A. flavus isolates and a single isolate inoculum. Initial water activity was adjusted to 0.87. Logistic models, with temperature and time as explanatory variables, were fitted to the probability of growth and AFB1 production under a constant temperature. Subsequently, they were used to predict probabilities under non-isothermal scenarios, with levels of concordance from 90 to 100% in most of the cases. Furthermore, the presence of AFB1 in pistachio nuts could be correctly predicted in 70-81% of the cases from a growth model developed in pistachio nuts, and in 67-81% of the cases from an AFB1 model developed in pistachio agar. The information obtained in the present work could be used by producers and processors to predict the time for AFB1 production by A. flavus on pistachio nuts during transport and storage. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Oldřich Coufal
2014-01-01
Objectives. The aim of the study was to develop a clinical prediction model for assessing the probability of having invasive cancer in the definitive surgical resection specimen in patients with a biopsy diagnosis of ductal carcinoma in situ (DCIS) of the breast, to facilitate decision making regarding axillary surgery. Methods. In 349 women with DCIS, predictors of invasion in the definitive resection specimen were identified. A model to predict the probability of invasion was developed and subsequently simplified to divide patients into two risk categories. The model’s performance was validated on another patient population. Results. Multivariate logistic regression revealed four independent predictors of invasion: (i) suspicious (micro)invasion in the biopsy specimen; (ii) visibility of the lesion on ultrasonography; (iii) size of the lesion on mammography >30 mm; (iv) clinical palpability of the lesion. The actual frequency of invasion in the high-risk patient group in the test and validation population was 52.6% and 48.3%, respectively; in the low-risk group it was 16.8% and 7.1%, respectively. Conclusion. The model proved to have good performance. In patients with a low probability of invasion, an axillary procedure can be omitted without a substantial risk of additional surgery.
Liu, Qingyi; Jiang, Mingyan; Bai, Peirui; Yang, Guang
2016-03-01
In this paper, a level set model that does not require manually generating an initial contour or setting controlling parameters is proposed for medical image segmentation. The contribution of this paper is threefold. First, we propose a novel adaptive mean shift clustering method based on global image information to guide the evolution of the level set. With simple threshold processing, the results of mean shift clustering can automatically and rapidly generate an initial contour for the level set evolution. Second, we devise several new functions to estimate the controlling parameters of the level set evolution based on the clustering results and image characteristics. Third, the reaction diffusion method is adopted to supersede the distance regularization term of the RSF level set model, which improves the accuracy and speed of segmentation effectively with less manual intervention. Experimental results demonstrate the performance and efficiency of the proposed model for medical image segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
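The clustering-then-threshold initialization can be illustrated with a plain (non-adaptive) mean shift on scalar pixel intensities. A fixed top-hat window here stands in for the paper's adaptive bandwidth, and thresholding the converged modes yields the object/background split that seeds the contour:

```python
import numpy as np

def mean_shift_modes(values, bandwidth=0.15, iters=30):
    """Flat-kernel mean shift on scalar intensities: each point repeatedly
    moves to the mean of the original data lying within `bandwidth` of it."""
    data = np.asarray(values, dtype=float)
    x = data.copy()
    for _ in range(iters):
        in_window = np.abs(data[None, :] - x[:, None]) <= bandwidth
        x = (in_window * data).sum(axis=1) / in_window.sum(axis=1)
    return x

# intensities from a dark background (~0.2) and a bright object (~0.8)
vals = [0.18, 0.20, 0.22, 0.78, 0.80, 0.82]
modes = mean_shift_modes(vals)
labels = modes > 0.5  # simple threshold on the modes seeds the initial contour
```

Each intensity collapses onto its cluster mode (0.2 or 0.8), so the threshold cleanly separates object from background without any hand-drawn contour.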
A suitable model plant for control of the set fuel cell-DC/DC converter
Energy Technology Data Exchange (ETDEWEB)
Andujar, J.M.; Segura, F.; Vasallo, M.J. [Departamento de Ingenieria Electronica, Sistemas Informaticos y Automatica, E.P.S. La Rabida, Universidad de Huelva, Ctra. Huelva - Palos de la Frontera, S/N, 21819 La Rabida - Palos de la Frontera Huelva (Spain)
2008-04-15
In this work, a state and transfer function model of the set made up of a proton exchange membrane (PEM) fuel cell and a DC/DC converter is developed. The set is modelled as a plant controlled by the converter duty cycle. In addition to allowing the plant operating point to be set at any point on its characteristic curve (two interesting points are the maximum efficiency and maximum power points), this approach also allows the connection of the fuel cell to other energy generation and storage devices, given that, as they all usually share a single DC bus, thorough control of the interconnected devices is required. First, the state and transfer function models of the fuel cell and the converter are obtained. Then, both models are related in order to obtain the model of the fuel cell + DC/DC converter set (the plant). The results of the theoretical developments are validated by simulation on a real fuel cell model. (author)
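As a toy stand-in for such a plant model, the averaged (continuous-conduction) state equations of an ideal boost converter fed from a stiff source can be stepped numerically. All parameter values are illustrative, and the constant source voltage replaces the PEM fuel cell's nonlinear polarization curve from the paper:

```python
def boost_average_model(v_in=12.0, duty=0.5, L=1e-3, C=1e-3, R=10.0,
                        dt=1e-5, t_end=0.2):
    """Euler integration of the averaged boost-converter state equations:
        L di/dt = v_in - (1 - d) * v
        C dv/dt = (1 - d) * i - v / R
    Returns the (inductor current, output voltage) operating point reached."""
    i = v = 0.0
    for _ in range(int(t_end / dt)):
        di = (v_in - (1.0 - duty) * v) / L
        dv = ((1.0 - duty) * i - v / R) / C
        i += dt * di
        v += dt * dv
    return i, v

i_ss, v_ss = boost_average_model()
# ideal steady state: v = v_in / (1 - d) = 24 V and i = v^2 / (R * v_in) = 4.8 A,
# so sweeping the duty cycle d moves the operating point along the source curve
```

Linearizing these equations around an operating point is what yields the duty-cycle-to-output transfer function used for controller design.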
Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.
2014-01-01
Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
Thiessen, Erik D
2017-01-05
Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik
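As a concrete instance of the sequential statistic mentioned above, forward transitional probability can be computed directly from a syllable stream. The mini-stream below is invented for illustration, in the style of Saffran et al.'s stimuli:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability TP(x -> y) = freq(xy) / freq(x),
    the sequential statistic used in word-segmentation accounts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# a stream built from the made-up 'words' bi-da and ku-pa
stream = ["bi", "da", "ku", "pa", "bi", "da", "bi", "da", "ku", "pa"]
tp = transitional_probabilities(stream)
# within-word transitions (bi->da) have TP = 1.0; across word boundaries
# (da->ku, da->bi) the TP drops, which is the cue used to segment words
```

Dips in TP mark candidate word boundaries; the memory-based framework described in the abstract reproduces this sensitivity without computing the statistic explicitly.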
Rank Set Sampling in Improving the Estimates of Simple Regression Model
Directory of Open Access Journals (Sweden)
M Iqbal Jeelani
2015-04-01
In this paper, rank set sampling (RSS) is introduced with a view to increasing the efficiency of estimates of the simple regression model. The regression model is considered with respect to samples taken from sampling techniques like simple random sampling (SRS), systematic sampling (SYS), and rank set sampling (RSS). It is found that the R2 and Adj R2 obtained from the regression model based on a rank set sample are higher than under the other two sampling schemes. Similarly, the root mean square error, p-values, and coefficients of variation are much lower in the rank set based regression model; also, under a validation technique (jackknifing), there is consistency in the measures of R2, Adj R2, and RMSE in the case of RSS as compared to SRS and SYS. Results are supported with an empirical study involving a real data set on Pinus wallichiana taken from the Langate block of Kupwara district.
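A minimal simulation (ours, not the paper's) shows where the efficiency gain of RSS comes from: with perfect ranking, the mean of one RSS cycle is unbiased and less variable than the mean of a simple random sample with the same number of measured units:

```python
import random
import statistics

def rss_sample(population, m, rng):
    """One ranked-set-sampling cycle with set size m: draw m sets of m units,
    rank each set (perfect ranking: by the value itself), and measure only
    the i-th ranked unit of the i-th set."""
    picks = []
    for i in range(m):
        ranked = sorted(rng.sample(population, m))
        picks.append(ranked[i])
    return picks

rng = random.Random(42)
pop = [rng.gauss(50.0, 10.0) for _ in range(10_000)]
m, reps = 4, 3000
srs_means = [statistics.fmean(rng.sample(pop, m)) for _ in range(reps)]
rss_means = [statistics.fmean(rss_sample(pop, m, rng)) for _ in range(reps)]
# both estimators center on the population mean, but the RSS means scatter
# much less, which is what drives the higher R2 of RSS-based regressions
```

The same reduction in sampling variability carries over to regression estimates built on RSS data, which is the effect the abstract reports.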
Energy Technology Data Exchange (ETDEWEB)
Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division; Colby, Jonathan [Verdant Power, Inc., New York, NY (United States); Adonizio, Mary Ann [Verdant Power, Inc., New York, NY (United States); Tomichek, Christine [Kleinschmidt Associates, Pittsfield, ME (United States); Scherelis, Constantin C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division
2016-07-31
One of the most important biological questions facing the marine and hydrokinetic (MHK) energy industry is whether fish and marine mammals that encounter MHK devices are likely to be struck by moving components. For hydrokinetic (HK) devices, i.e., those that generate energy from flowing water, this concern is greatest for large organisms because their increased length increases the probability that they will be struck as they pass through the area of blade sweep and because their increased mass means that the force absorbed if struck is greater and potentially more damaging (Amaral et al. 2015). Key to answering this question is understanding whether aquatic organisms change their swimming behavior as they encounter a device in a way that decreases their likelihood of being struck and possibly injured by the device. Whether near-field or far-field behavior results in general avoidance of or attraction to HK devices is a significant factor in the possible risk of physical contact with rotating turbine blades (Cada and Bevelhimer 2011).
Jenkins, P; Poulos, P G; Cole, M B; Vandeven, M H; Legan, J D
2000-02-01
Models to predict days to growth and probability of growth of Zygosaccharomyces bailii in high-acid foods were developed, and the equations are presented here. The models were constructed from measurements of growth of Z. bailii using automated turbidimetry over a 29-day period at various pH, NaCl, fructose, and acetic acid levels. Statistical analyses were carried out using Statistical Analysis Systems LIFEREG procedures, and the data were fitted to log-logistic models. Model 1 predicts days to growth based on two factors, combined molar concentration of salt plus sugar and undissociated acetic acid. This model allows a growth/no-growth boundary to be visualized. The boundary is comparable with that established by G. Tuynenburg Muys (Process Biochem. 6:25-28, 1971), which still forms the basis of industry assumptions about the stability of acidic foods. Model 2 predicts days to growth based on the four independent factors of salt, sugar, acetic acid, and pH levels and is, therefore, much more useful for product development. Validation data derived from challenge studies in retail products from the U.S. market are presented for Model 2, showing that the model gives reliable, fail-safe predictions and is suitable for use in predicting growth responses of Z. bailii in high-acid foods. Model 3 predicts probability of growth of Z. bailii in 29 days. This model is most useful for spoilage risk assessment. All three models showed good agreement between predictions and observed values for the underlying data.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may differ based on the nature of the experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorated as the ratio of experimental errors increased. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors, but removing those compounds on the basis of the cross-validation procedure is not a reasonable means of improving model predictivity, due to overfitting.
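The error-simulation protocol can be mimicked in a few lines: flip the "activities" of a fraction of compounds and watch cross-validated performance deteriorate. The toy nearest-centroid classifier and two-descriptor data below are ours, standing in for the paper's QSAR models and curated endpoints:

```python
import random

def nearest_centroid_cv(X, y, folds=5):
    """Five-fold cross-validated accuracy of a nearest-centroid classifier."""
    n, correct = len(X), 0
    for f in range(folds):
        test = set(range(f, n, folds))
        train = [i for i in range(n) if i not in test]
        cents = {}
        for c in (0, 1):
            rows = [X[i] for i in train if y[i] == c]
            cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
        for i in test:
            d = {c: sum((a - b) ** 2 for a, b in zip(X[i], cents[c]))
                 for c in cents}
            correct += int(min(d, key=d.get) == y[i])
    return correct / n

def with_label_errors(y, ratio, rng):
    """Simulate experimental error by flipping a fraction of the labels."""
    y = y[:]
    for i in rng.sample(range(len(y)), int(ratio * len(y))):
        y[i] = 1 - y[i]
    return y

rng = random.Random(7)
y_true = [i % 2 for i in range(400)]
X = [[rng.gauss(2 * c - 1, 1.0), rng.gauss(2 * c - 1, 1.0)] for c in y_true]

acc_clean = nearest_centroid_cv(X, y_true)                        # no errors
acc_noisy = nearest_centroid_cv(X, with_label_errors(y_true, 0.3, rng))
```

With 30% of the labels randomized, measured cross-validation accuracy drops sharply even though the underlying structure-activity relationship is unchanged, mirroring the degradation the study reports.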
International Nuclear Information System (INIS)
Kase, Yuki; Matsufuji, Naruhiro; Furusawa, Yoshiya; Kanai, Tatsuaki
2007-01-01
In treatment planning for heavy-ion radiotherapy, it is necessary to estimate the biological effect of the heavy-ion beams: the physical dose should be associated with the relative biological effectiveness (RBE) at each point. Carbon-ion radiotherapy is currently carried out at the National Institute of Radiological Sciences (NIRS) in Japan and the Gesellschaft fuer Schwerionenforschung mbH (GSI) in Germany. Each facility takes its own approach to the calculation of the RBE value: at NIRS, the classical linear-quadratic (LQ) model has been used, while the local effect model (LEM) has been incorporated into the treatment planning system at GSI. The first aim of this study is to explain the RBE model of NIRS by the microdosimetric kinetic model (MKM). In addition, similarities and differences between the MKM and the LEM were investigated. (author)
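The NIRS-side calculation rests on the classical LQ survival curve, S = exp(-(αD + βD²)). A minimal sketch (with made-up α and β values, not clinical parameters) computes an RBE as the ratio of reference-radiation dose to ion dose at equal survival:

```python
import math

def lq_dose_for_survival(alpha, beta, survival):
    """Dose giving the target surviving fraction under the LQ model
    S = exp(-(alpha*D + beta*D**2)); positive root of the quadratic."""
    effect = -math.log(survival)  # alpha*D + beta*D^2 = effect
    return (-alpha + math.sqrt(alpha ** 2 + 4.0 * beta * effect)) / (2.0 * beta)

def rbe(alpha_ref, beta_ref, alpha_ion, beta_ion, survival=0.1):
    """RBE at a survival level = dose ratio (reference / ion) for equal effect."""
    d_ref = lq_dose_for_survival(alpha_ref, beta_ref, survival)
    d_ion = lq_dose_for_survival(alpha_ion, beta_ion, survival)
    return d_ref / d_ion

# illustrative parameters only: photons vs a higher-alpha carbon beam
r = rbe(alpha_ref=0.19, beta_ref=0.05, alpha_ion=0.60, beta_ion=0.05)
```

Models such as the MKM and LEM differ precisely in how they predict the ion's α (and β) from microdosimetric or track-structure quantities before this dose-ratio step.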
Probability mapping of contaminants
International Nuclear Information System (INIS)
Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.
1994-01-01
Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds)
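The post-processing step described above reduces to counting, cell by cell, how many equally likely realizations exceed the threshold. The numpy sketch below is illustrative: plain correlated-noise "realizations" stand in for conditional geostatistical simulations, and the threshold is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
# 500 equally likely realizations of contamination over a 20 x 20 grid of
# remediation-unit-sized parcels: noise around an east-west trend, standing
# in for conditional geostatistical simulations honoring the sample data
trend = np.tile(np.linspace(50.0, 150.0, 20), (20, 1))
sims = trend[None, :, :] + rng.normal(0.0, 30.0, size=(500, 20, 20))

threshold = 100.0                            # hypothetical clean-up level
p_exceed = (sims > threshold).mean(axis=0)   # probability-of-exceedance map
```

Cells where `p_exceed` is high are confidently above the clean-up level, cells near zero are confidently below it, and intermediate values flag where additional characterization buys the most risk reduction.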
Valero, Antonio; Pasquali, Frédérique; De Cesare, Alessandra; Manfreda, Gerardo
2014-08-01
Current sampling plans assume a random distribution of microorganisms in food. However, food-borne pathogens are estimated to be heterogeneously distributed in powdered foods. This spatial distribution, together with very low levels of contamination, raises concerns about the efficiency of current sampling plans for the detection of food-borne pathogens like Cronobacter and Salmonella in powdered foods such as powdered infant formula or powdered eggs. An alternative approach, based on a Poisson distribution of the contaminated part of the lot (Habraken approach), was used in order to evaluate the probability of falsely accepting a contaminated lot of powdered food when different sampling strategies were simulated, considering variables such as lot size, sample size, microbial concentration in the contaminated part of the lot, and proportion of the lot contaminated. The simulated results suggest that a sample size of 100 g or more corresponds to the lowest number of samples to be tested, in comparison with sample sizes of 10 or 1 g. Moreover, the number of samples to be tested decreases greatly if the microbial concentration is 1 CFU/g instead of 0.1 CFU/g, or if the proportion of contamination is 0.05 instead of 0.01. Mean contaminations higher than 1 CFU/g or proportions higher than 0.05 did not affect the number of samples. The Habraken approach represents a useful tool for risk management in order to design a fit-for-purpose sampling plan for the detection of low levels of food-borne pathogens in heterogeneously contaminated powdered food. However, it must be noted that, although effective in detecting pathogens, these sampling plans are difficult to apply because of the huge number of samples that needs to be tested. Sampling does not seem an effective measure to control pathogens in powdered food. Copyright © 2014 Elsevier B.V. All rights reserved.
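A Habraken-style acceptance calculation fits in a few lines: a sample is negative either because it misses the contaminated fraction of the lot or because, within it, it contains zero cells under a Poisson count. The numbers below are ours, chosen to illustrate the reported trends (sample mass, concentration, and contaminated proportion all drive the number of samples):

```python
import math

def p_accept(n_samples, sample_g, conc_cfu_g, contaminated_fraction):
    """Probability that a contaminated lot is (falsely) accepted: every sample
    either falls in the clean part of the lot, or falls in the contaminated
    part yet contains zero cells (Poisson, mean = concentration x mass)."""
    p_negative = (1.0 - contaminated_fraction) \
        + contaminated_fraction * math.exp(-conc_cfu_g * sample_g)
    return p_negative ** n_samples

def samples_needed(sample_g, conc_cfu_g, frac, max_accept=0.05):
    """Smallest number of samples keeping the acceptance probability <= 5%."""
    n = 1
    while p_accept(n, sample_g, conc_cfu_g, frac) > max_accept:
        n += 1
    return n

n_100g = samples_needed(100.0, 0.1, 0.01)  # large samples
n_1g = samples_needed(1.0, 0.1, 0.01)      # small samples need far more tests
n_hot = samples_needed(100.0, 0.1, 0.05)   # more widespread contamination
```

Even in the favorable 100 g case, hundreds of samples are required, which is the abstract's point that sampling alone is a poor control measure for low-level heterogeneous contamination.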
International Nuclear Information System (INIS)
Wu, Xiaojing; Zhang, Yidi; Pei, Zengyang; Chen, Si; Yang, Xu; Chen, Yin; Lin, Degui; Ma, Runlin Z
2012-01-01
Angiopoietin-2 (Ang-2) plays critical roles in vascular morphogenesis, and its upregulation is frequently associated with various tumors. Previous studies showed that certain selenium compounds possess anti-tumor effects. However, the underlying mechanism has not been elucidated in detail. Moreover, results of research on the anti-tumor effects of selenium compounds remain controversial. We investigated levels of Ang-2 and vascular endothelial growth factor (VEGF) in estrogen-independent bone-metastatic mammary cancer (MDA-MB-231) cells in response to treatment with methylseleninic acid (MSeA), and further examined the effects of MSeA oral administration on xenograft mammary tumors of athymic nude mice by RT-PCR, Western blotting, radioimmunoassay, and immunohistochemistry. Treatment of MDA-MB-231 cells with MSeA caused significant reduction of Ang-2 mRNA transcripts and of secretion of Ang-2 proteins by the cells. The level of VEGF protein was accordingly decreased following the treatment. Compared with the controls, oral administration of MSeA (3 mg/kg/day for 18 days) to the nude mice carrying MDA-MB-231-induced tumors resulted in significant reduction in xenograft tumor volume and weight, significant decrease in microvascular density, and promotion of vascular normalization by increasing pericyte coverage. As expected, the level of VEGF was also decreased in MSeA-treated tumors. Our results indicate that MSeA exerts its anti-tumor effects, at least in part, by inhibiting the Ang-2/Tie2 pathway, probably via inhibiting VEGF
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
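A brute-force check on such computations is Monte Carlo simulation of the time-changed paths. The sketch below is an illustration only: it uses a gamma subordinator as the time change (the paper's model and parameters may differ) and estimates the survival probability, i.e. one minus the first-passage probability up to the horizon:

```python
import numpy as np

def survival_probability(barrier, horizon, theta=1.0,
                         n_paths=50_000, n_steps=200, seed=11):
    """Monte Carlo estimate of P(no crossing of `barrier` before `horizon`)
    for X(t) = B(T(t)), with T a gamma subordinator of unit mean rate and
    variance rate `theta` (an illustrative choice of time change)."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        d_time = rng.gamma(dt / theta, theta, size=n_paths)  # E=dt, Var=dt*theta
        x += rng.normal(size=n_paths) * np.sqrt(d_time)      # B increment over d_time
        alive &= x > barrier            # barrier lies below the start at 0
    return float(alive.mean())

s_far = survival_probability(-2.0, 1.0)   # distant barrier: high survival
s_near = survival_probability(-1.0, 1.0)  # nearby barrier: lower survival
```

Discrete monitoring biases the crossing probability downward, which is exactly why fast deterministic schemes such as the Volterra-equation approach are preferable when the density itself is needed; Monte Carlo remains useful as an independent sanity check.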
Thomas, A M; Cook, L J; Dean, J M; Olson, L M
2014-01-01
To compare results from high probability matched sets versus imputed matched sets across differing levels of linkage information. A series of linkages with varying amounts of available information were performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where true matches were known. Distributions of high probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. High probability and imputed matched sets were not significantly different from the true match population on occupant age, MVC county, and MVC hour in high information settings (p > 0.999). In low information settings, high probability matched sets were significantly different from the true match population on occupant age and MVC county, whereas imputed matched sets were not (p > 0.493). High information settings saw no significant differences in inference of simulated log hospital charges and hospitalization status between the two methods. High probability and imputed matched sets were both significantly different from the true outcomes in low information settings; however, imputed matched sets were more robust. The level of information available to a linkage is an important consideration. High probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.
Mann, Michael L; Batllori, Enric; Moritz, Max A; Waller, Eric K; Berck, Peter; Flint, Alan L; Flint, Lorraine E; Dolfi, Emmalee
2016-01-01
The costly interactions between humans and wildfires throughout California demonstrate the need to understand the relationships between them, especially in the face of a changing climate and expanding human communities. Although a number of statistical and process-based wildfire models exist for California, there is enormous uncertainty about the location and number of future fires, with previously published estimates of increases ranging from nine to fifty-three percent by the end of the century. Our goal is to assess the role of climate and anthropogenic influences on the state's fire regimes from 1975 to 2050. We develop an empirical model that integrates estimates of biophysical indicators relevant to plant communities and anthropogenic influences at each forecast time step. Historically, we find that anthropogenic influences account for up to fifty percent of explanatory power in the model. We also find that the total area burned is likely to increase, with burned area expected to increase by 2.2 and 5.0 percent by 2050 under climatic bookends (PCM and GFDL climate models, respectively). Our two climate models show considerable agreement, but due to potential shifts in rainfall patterns, substantial uncertainty remains for the semiarid inland deserts and coastal areas of the south. Given the strength of human-related variables in some regions, however, it is clear that comprehensive projections of future fire activity should include both anthropogenic and biophysical influences. Previous findings of substantially increased numbers of fires and burned area for California may be tied to omitted variable bias from the exclusion of human influences. The omission of anthropogenic variables in our model would overstate the importance of climatic ones by at least 24%. As such, the failure to include anthropogenic effects in many models likely overstates the response of wildfire to climatic change.
The GRENE-TEA model intercomparison project (GTMIP) Stage 1 forcing data set
Sueyoshi, T.; Saito, K.; Miyazaki, S.; Mori, J.; Ise, T.; Arakida, H.; Suzuki, R.; Sato, A.; Iijima, Y.; Yabuki, H.; Ikawa, H.; Ohta, T.; Kotani, A.; Hajima, T.; Sato, H.; Yamazaki, T.; Sugimoto, A.
2016-01-01
Here, the authors describe the construction of a forcing data set for land surface models (including both physical and biogeochemical models; LSMs) with eight meteorological variables for the 35-year period from 1979 to 2013. The data set is intended for use in a model intercomparison study, called GTMIP, which is a part of the Japanese-funded Arctic Climate Change Research Project. In order to prepare a set of site-fitted forcing data for LSMs with realistic yet continuous entries (i.e. without missing data), four observational sites across the pan-Arctic region (Fairbanks, Tiksi, Yakutsk, and Kevo) were selected to construct a blended data set using both global reanalysis and observational data. Marked improvements were found in the diurnal cycles of surface air temperature and humidity, wind speed, and precipitation. The data sets and participation in GTMIP are open to the scientific community (doi:10.17592/001.2015093001).
Simulation of random set models for unions of discs and the use of power tessellations
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katerina
The power tessellation (or power diagram or Laguerre diagram) turns out to be particularly useful in connection with a certain class of stochastic models for a disc process used for generating a random set model. We discuss how to simulate these models and calculate various characteristics of power tessellations...
Philosophical theories of probability
Gillies, Donald
2000-01-01
The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.
Probability machines: consistent probability estimation using nonparametric learning machines.
Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A
2012-01-01
Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
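The probability-machine idea in the abstract above is easy to sketch: any learner that is consistent for nonparametric regression, applied directly to 0/1 labels, estimates the conditional probability P(Y=1|x). A minimal pure-Python nearest-neighbour illustration (synthetic data; the sample size and the choice k=200 are arbitrary assumptions, not the authors' R implementation):

```python
import random

def knn_probability(train, x, k):
    """Estimate P(Y=1 | x) as the mean 0/1 label among the k nearest
    training points -- the nearest-neighbour probability machine idea."""
    neighbours = sorted(train, key=lambda xy: abs(xy[0] - x))[:k]
    return sum(y for _, y in neighbours) / k

random.seed(0)
# Synthetic data with known truth: P(Y=1 | x) = x for x in [0, 1]
train = [(x, int(random.random() < x)) for x in
         (random.random() for _ in range(5000))]

est = knn_probability(train, 0.8, k=200)  # should land near the true 0.8
```

A regression random forest drops into the same role unchanged: train it on the 0/1 labels and read its prediction at x as the probability estimate.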
Raymond, G. M.; Bassingthwaighte, J. B.
2016-01-01
This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a “consilience” of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
Raymond, G M; Bassingthwaighte, J B
This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
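The pooling argument in the abstract above, that one model fit to all data sets constrains the parameters better than separate fits, can be illustrated with a toy Michaelis-Menten fit. The concentrations and parameter values below are synthetic stand-ins, not the salicylate data, and a coarse grid search stands in for a proper optimizer:

```python
# Least-squares fit of a Michaelis-Menten rate v = Vmax*C / (Km + C)
# to pooled synthetic "low/medium/high" data sets.
def mm(c, vmax, km):
    return vmax * c / (km + c)

true_vmax, true_km = 100.0, 18.0
concentrations = [2, 5, 10, 20, 50, 100, 200, 400]  # pooled ranges
data = [(c, mm(c, true_vmax, true_km)) for c in concentrations]

def sse(vmax, km):
    return sum((v - mm(c, vmax, km)) ** 2 for c, v in data)

# Coarse grid search over the two free parameters
best = min(((sse(vm, km), vm, km)
            for vm in range(50, 151)
            for km in (k / 2 for k in range(2, 101))),
           key=lambda t: t[0])
_, vmax_hat, km_hat = best
```

With noiseless pooled data the grid recovers Vmax = 100 and Km = 18 exactly; refitting only a narrow concentration range (plus noise) loosens the Km estimate, which is the abstract's point.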
Mourik, van S.; Braak, ter C.J.F.; Stigter, J.D.; Molenaar, J.
2014-01-01
Multi-parameter models in systems biology are typically ‘sloppy’: some parameters or combinations of parameters may be hard to estimate from data, whereas others are not. One might expect that parameter uncertainty automatically leads to uncertain predictions, but this is not the case. We illustrate
Algebraic Specifications, Higher-order Types and Set-theoretic Models
DEFF Research Database (Denmark)
Kirchner, Hélène; Mosses, Peter David
2001-01-01
, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard...
Optimal Interest-Rate Setting in a Dynamic IS/AS Model
DEFF Research Database (Denmark)
Jensen, Henrik
2011-01-01
This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old...
What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models
Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay
2017-01-01
Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS
A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase min-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...
Gersch, Alan M.; A’Hearn, Michael F.; Feaga, Lori M.
2018-04-01
We have applied our asymmetric spherical adaptation of Coupled Escape Probability to the modeling of optically thick cometary comae. Expanding on our previously published work, here we present models including asymmetric comae. Near-nucleus observations from the Deep Impact mission have been modeled, including observed coma morphology features. We present results for two primary volatile species of interest, H2O and CO2, for comet 9P/Tempel 1. Production rates calculated using our best-fit models are notably greater than those derived from the Deep Impact data based on the assumption of optically thin conditions, both for H2O and CO2 but more so for CO2, and fall between the Deep Impact values and the global pre-impact production rates measured at other observatories and published by Schleicher et al. (2006), Mumma et al. (2005), and Mäkinen et al. (2007).
Usefulness of the WHO C-Model to optimize the cesarean delivery rate in a tertiary hospital setting.
Abdel-Aleem, Hany; Darwish, Atef; Abdelaleem, Ahmad A; Mansur, Manal
2017-04-01
To assess use of the C-Model in a tertiary hospital setting in terms of its validity and utility for optimizing the cesarean delivery (CD) rate. A prospective observational study included women admitted for delivery at a university teaching hospital in Assiut, Egypt, in 2015. The women were asked about the demographic and obstetric information needed to calculate the probability of CD using the WHO C-Model. A receiver operating characteristic (ROC) curve comparing the predicted and observed CD rates was constructed. In addition, the mean predicted CD rates were compared with the mean observed CD rates in the 10 groups of the Robson classification. In total, 1000 women were recruited; 38.6% had a previous CD and 13.5% had complications during the current pregnancy. The final mode of delivery was vaginal delivery in 38.7% and CD in 61.3%; the predicted CD rate for this cohort was 45.0%. The area under the ROC curve was 0.928 (95% confidence interval 0.912-0.945). Comparison of the predicted and observed CD rates in the 10 Robson groups showed an overuse of CD ranging from 2% to 50%. The WHO C-Model is valid and can be used in hospital settings to optimize CD rates. © 2017 International Federation of Gynecology and Obstetrics.
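The study's headline statistic, an area under the ROC curve of 0.928, has a direct probabilistic reading: the chance that a randomly chosen cesarean case received a higher predicted CD probability than a randomly chosen vaginal delivery. A self-contained rank-based (Mann-Whitney) sketch with made-up scores, not the study's data:

```python
def auc(scores, labels):
    """Mann-Whitney AUC: fraction of positive/negative pairs ranked
    correctly, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 0.8125 for this toy data
```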
International Nuclear Information System (INIS)
Wood, Thomas W.; Bredt, Ofelia P.; Heasler, Patrick G.; Reichmuth, Barbara A.; Milazzo, Matthew D.
2005-01-01
This paper develops simple frameworks for cost minimization of detector systems by trading off the costs of failed detection against the social costs of false alarms. A workable system must have a high degree of certainty in detecting real threats and yet impose low social costs. The models developed here use standard measures of detector performance and derive target detection probabilities and false-alarm tolerance specifications as functions of detector performance, threat traffic densities, and estimated costs
Energy Technology Data Exchange (ETDEWEB)
Wood, Thomas W.; Bredt, Ofelia P.; Heasler, Patrick G.; Reichmuth, Barbara A.; Milazzo, Matthew D.
2005-04-28
This paper develops simple frameworks for cost minimization of detector systems by trading off the costs of failed detection against the social costs of false alarms. A workable system must have a high degree of certainty in detecting real threats and yet impose low social costs. The models developed here use standard measures of detector performance and derive target detection probabilities and false-alarm tolerance specifications as functions of detector performance, threat traffic densities, and estimated costs.
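The trade-off described in this record can be sketched as a one-line expected-cost function minimized along a detector's ROC curve. All numbers below (threat rate, traffic, costs, and the square-root ROC shape) are illustrative assumptions, not values from the paper:

```python
import math

def expected_cost(pd, pfa, p_threat, traffic, c_miss, c_fa):
    """Cost of missed detections plus social cost of false alarms."""
    return p_threat * (1 - pd) * c_miss + traffic * pfa * c_fa

# Toy detector: Pd = sqrt(Pfa) along its ROC curve
roc = [(pfa, math.sqrt(pfa)) for pfa in (i / 1000 for i in range(1, 1001))]

best_pfa, best_pd = min(
    roc,
    key=lambda pt: expected_cost(pt[1], pt[0], p_threat=1e-3,
                                 traffic=1e4, c_miss=1e9, c_fa=500))
# The optimum balances the two cost terms rather than maximizing Pd
```

Raising the false-alarm cost c_fa pushes the chosen operating point toward a lower Pfa (and lower Pd), which is exactly the tolerance-specification trade the paper derives.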
Measure, integral and probability
Capiński, Marek
2004-01-01
Measure, Integral and Probability is a gentle introduction that makes measure and integration theory accessible to the average third-year undergraduate student. The ideas are developed at an easy pace in a form that is suitable for self-study, with an emphasis on clear explanations and concrete examples rather than abstract theory. For this second edition, the text has been thoroughly revised and expanded. New features include: · a substantial new chapter, featuring a constructive proof of the Radon-Nikodym theorem, an analysis of the structure of Lebesgue-Stieltjes measures, the Hahn-Jordan decomposition, and a brief introduction to martingales · key aspects of financial modelling, including the Black-Scholes formula, discussed briefly from a measure-theoretical perspective to help the reader understand the underlying mathematical framework. In addition, further exercises and examples are provided to encourage the reader to become directly involved with the material.
Description of a practice model for pharmacist medication review in a general practice setting
DEFF Research Database (Denmark)
Brandt, Mette; Hallas, Jesper; Hansen, Trine Graabæk
2014-01-01
BACKGROUND: Practical descriptions of procedures used for pharmacists' medication reviews are sparse. OBJECTIVE: To describe a model for medication review by pharmacists tailored to a general practice setting. METHODS: A stepwise model is described. The model is based on data from the medical chart...... no indication (n=47, 23%). Most interventions were aimed at cardiovascular drugs. CONCLUSION: We have provided a detailed description of a practical approach to pharmacists' medication review in a GP setting. The model was tested and found to be usable, and to deliver a medication review with high acceptance...
DEFF Research Database (Denmark)
Hinrichsen, H.H.; Schmidt, J.O.; Petereit, C.
2005-01-01
Temporal mismatch between the occurrence of larvae and their prey potentially affects the spatial overlap and thus the contact rates between predator and prey. This might have important consequences for growth and survival. We performed a case study investigating the influence of circulation ... predator-prey overlap, dependent on the hatching time of cod larvae. By performing model runs for the years 1979-1998, we investigated the intra- and interannual variability of potential spatial overlap between predator and prey. Assuming uniform prey distributions, we generally found the overlap to have decreased since...
Models for treating depression in specialty medical settings: a narrative review.
Breland, Jessica Y; Mignogna, Joseph; Kiefer, Lea; Marsh, Laura
2015-01-01
This review answered two questions: (a) what types of specialty medical settings are implementing models for treating depression, and (b) do models for treating depression in specialty medical settings effectively treat depression symptoms? We searched Medline/Pubmed to identify articles, published between January 1990 and May 2013, reporting on models for treating depression in specialty medical settings. Included studies had to have adult participants with comorbid medical conditions recruited from outpatient, nonstandard primary care settings. Studies also had to report specific, validated depression measures. Search methods identified nine studies (six randomized controlled trials, one nonrandomized controlled trial and two uncontrolled trials), all representing integrated care for depression, in three specialty settings (oncology, infectious disease, neurology). Most studies (N=7) reported greater reductions in depression among patients receiving integrated care compared to usual care, particularly in oncology clinics. Integrated care for depression in specialty medical settings can improve depression outcomes. Additional research is needed to understand the effectiveness of incorporating behavioral and/or psychological treatments into existing methods. When developing or selecting a model for treating depression in specialty medical settings, clinicians and researchers will benefit from choosing specific components and measures most relevant to their target populations. Published by Elsevier Inc.
Energy Technology Data Exchange (ETDEWEB)
Galiano, G.; Grau, A.
1994-07-01
An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X-ray processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high. (Author)
Probability measures on metric spaces
Parthasarathy, K R
2005-01-01
In this book, the author gives a cohesive account of the theory of probability measures on complete metric spaces (which is viewed as an alternative approach to the general theory of stochastic processes). After a general description of the basics of topology on the set of measures, the author discusses regularity, tightness, and perfectness of measures, properties of sampling distributions, and metrizability and compactness theorems. Next, he describes arithmetic properties of probability measures on metric groups and locally compact abelian groups. Covered in detail are notions such as decom
Knowledge typology for imprecise probabilities.
Energy Technology Data Exchange (ETDEWEB)
Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)
2002-01-01
When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.
Annambhotla, Pallavi D; Gurbaxani, Brian M; Kuehnert, Matthew J; Basavaraju, Sridhar V
2017-04-01
In 2013, guidelines were released for reducing the risk of viral bloodborne pathogen transmission through organ transplantation. Eleven criteria were described that result in a donor being designated at increased infectious risk. Human immunodeficiency virus (HIV) and hepatitis C virus (HCV) transmission risk from an increased-risk donor (IRD), despite negative nucleic acid testing (NAT), likely varies based on behavior type and timing. We developed a Monte Carlo risk model to quantify the probability of HIV among IRDs. The model included NAT performance, viral load dynamics, and per-act risk of acquiring HIV by each behavior. The model also quantifies the probability of HCV among IRDs by non-medical intravenous drug use (IVDU). The highest risk is among donors with a history of unprotected, receptive anal male-to-male intercourse with a partner of unknown HIV status (MSM), followed by sex with an HIV-infected partner, IVDU, and sex with a commercial sex worker. With NAT screening, the estimated risk of undetected HIV remains small even at 1 day following a risk behavior. The estimated risk for HCV transmission through IVDU is likewise small and decreases more quickly with time owing to the faster viral growth dynamics of HCV compared with HIV. These findings may allow for improved organ allocation, utilization, and recipient informed consent. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
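The model above combines per-act acquisition risk with NAT window dynamics. A deliberately stripped-down Monte Carlo sketch of that combination (the per-act risk, the 7-day linear detection ramp, and the 3-day interval are illustrative assumptions, not the study's inputs):

```python
import random

random.seed(42)

PER_ACT_RISK = 0.0138      # assumed per-act acquisition probability
RAMP_DAYS = 7              # assumed days until NAT detection is certain
DAYS_SINCE_ACT = 3         # act occurred 3 days before donation

def undetected_risk(n):
    """Fraction of simulated donors who acquired infection from one
    risk act and still test NAT-negative at donation."""
    p_detect = min(1.0, DAYS_SINCE_ACT / RAMP_DAYS)  # viral-load ramp
    missed = 0
    for _ in range(n):
        if random.random() < PER_ACT_RISK and random.random() >= p_detect:
            missed += 1
    return missed / n

risk = undetected_risk(200_000)
```

Because HCV viral load grows faster, its detection ramp is shorter, so the same calculation with a smaller RAMP_DAYS yields a risk that falls off more quickly with time since the act, as the abstract notes.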
Probably not future prediction using probability and statistical inference
Dworsky, Lawrence N
2008-01-01
An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...
Curreri, Peter A.
2010-01-01
Two contemporary issues foretell a shift from our historical Earth-based industrial economy and habitation to a solar system based society. The first is the limits to Earth's carrying capacity, that is, the maximum number of people that the Earth can support before a catastrophic impact to the health of the planet and human species occurs. The simple example of carrying capacity is that of a bacterial colony in a Petri dish with a limited amount of nutrient. The colony experiences exponential population growth until the carrying capacity is reached, after which catastrophic depopulation often results. Estimates of the Earth's carrying capacity vary between 14 and 40 billion people. Although at current population growth rates we may have over a century before we reach Earth's carrying limit, our influence on climate and resources on the planetary scale is becoming scientifically established. The second issue is the exponential growth of knowledge and technological power. The exponential growth of technology interacts with the exponential growth of population in a manner that is unique to a highly intelligent species. Thus, the predicted consequences (world famines etc.) of the limits to growth have been largely avoided due to technological advances. However, at the mid twentieth century a critical coincidence occurred in these two trends: humanity obtained the technological ability to extinguish life on the planetary scale (by nuclear, chemical, or biological means) and attained the ability to expand human life beyond Earth. This paper examines an optimized O'Neill/Glaser model (O'Neill 1975; Curreri 2007; Detweiler and Curreri 2008) for the economic human population of space. Critical to this model is the utilization of extraterrestrial resources, solar power, and space-based labor. A simple statistical analysis is then performed which predicts the robustness of a single-planet technological society versus that of a multiple-world (independent habitats) society.
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
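The Monte Carlo baseline the authors compare against can be illustrated on a much simpler surrogate: propagate an uncertain Manning roughness coefficient through the steady uniform-flow formula v = (1/n) R^(2/3) S^(1/2) and read off ensemble statistics. This is a sketch of the ensemble idea only, not the paper's Saint-Venant/FPE solver; R, S, and the roughness range are assumed values:

```python
import random
import statistics

random.seed(1)
R, S = 2.0, 0.001                 # hydraulic radius (m) and slope, assumed
velocities = []
for _ in range(50_000):
    n = random.uniform(0.025, 0.035)          # uncertain Manning roughness
    velocities.append((1.0 / n) * R ** (2 / 3) * S ** 0.5)

mean_v = statistics.mean(velocities)          # ensemble average (m/s)
std_v = statistics.pstdev(velocities)         # ensemble spread
```

The FPE approach replaces the 50,000 runs above with a single solve for the probability density of the flow variables, which is where its reported time advantage over MC comes from.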
A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation
Directory of Open Access Journals (Sweden)
Liming Tang
2014-01-01
The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can get smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson’s work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can get smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson’s model. Some experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model has a better performance for images contaminated by different noise levels.
Lozada Aguilar, Miguel Ángel; Khrennikov, Andrei; Oleschko, Klaudia
2018-04-28
As was recently shown by the authors, quantum probability theory can be used for the modelling of the process of decision-making (e.g. probabilistic risk analysis) for macroscopic geophysical structures such as hydrocarbon reservoirs. This approach can be considered as a geophysical realization of Hilbert's programme on axiomatization of statistical models in physics (the famous sixth Hilbert problem). In this conceptual paper, we continue development of this approach to decision-making under uncertainty which is generated by complexity, variability, heterogeneity, anisotropy, as well as the restrictions to accessibility of subsurface structures. The belief state of a geological expert about the potential of exploring a hydrocarbon reservoir is continuously updated by outputs of measurements, and selection of mathematical models and scales of numerical simulation. These outputs can be treated as signals from the information environment E. The dynamics of the belief state can be modelled with the aid of the theory of open quantum systems: a quantum state (representing uncertainty in beliefs) is dynamically modified through coupling with E; stabilization to a steady state determines a decision strategy. In this paper, the process of decision-making about hydrocarbon reservoirs (e.g. 'explore or not?'; 'open new well or not?'; 'contaminated by water or not?'; 'double or triple porosity medium?') is modelled by using the Gorini-Kossakowski-Sudarshan-Lindblad equation. In our model, this equation describes the evolution of experts' predictions about a geophysical structure. We proceed with the information approach to quantum theory and the subjective interpretation of quantum probabilities (due to quantum Bayesianism). This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).
"Economic microscope": The agent-based model set as an instrument in an economic system research
Berg, D. B.; Zvereva, O. M.; Akenov, Serik
2017-07-01
To create a valid model of a social or economic system one must consider a lot of parameters, conditions, and restrictions. Such systems, and consequently the corresponding models, prove to be very complicated. The problem of engineering such system models cannot be solved with mathematical methods alone. A solution can be found in computer simulation. Simulation does not reject mathematical methods; mathematical expressions can become the foundation of a computer model. These materials discuss a set of agent-based computer models. All models in the set simulate communications among productive agents, but every model is geared towards a specific goal and thus has its own algorithm and its own peculiarities. It is shown that computer simulation can discover new features of the agents' behavior that cannot be obtained by analytically solving mathematical equations, and thus plays the role of a kind of economic microscope.
Predicting Cumulative Incidence Probability by Direct Binomial Regression
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B
2017-05-01
Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for >10,000 inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of >4,000 patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < ...). Order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
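The use case at the end of this abstract, finding related clinical orders, can be made concrete without the full LDA machinery. The sketch below substitutes a plain co-occurrence count for the topic model (a much weaker but transparent stand-in, named plainly as such) and uses invented order names:

```python
from collections import Counter, defaultdict

# Admissions as "bags of orders", mirroring the words-in-documents analogy
admissions = [
    ["cbc", "bmp", "troponin", "ecg"],
    ["cbc", "bmp", "troponin", "aspirin"],
    ["cbc", "chest_xray", "blood_culture", "antibiotics"],
    ["cbc", "bmp", "ecg", "aspirin"],
    ["chest_xray", "blood_culture", "antibiotics", "lactate"],
]

cooccur = defaultdict(Counter)
for orders in admissions:
    for a in orders:
        for b in orders:
            if a != b:
                cooccur[a][b] += 1

def suggest(placed, k=2):
    """Rank not-yet-placed orders by co-occurrence with placed ones."""
    scores = Counter()
    for p in placed:
        scores.update(cooccur[p])
    for p in placed:
        scores.pop(p, None)
    return [order for order, _ in scores.most_common(k)]
```

Calling `suggest(["troponin", "ecg"])` surfaces the labs these toy admissions order alongside a cardiac workup; LDA generalizes this by routing the scoring through latent topics instead of raw pair counts.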
Simulation of random set models for unions of discs and the use of power tessellations
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, Katerina
2009-01-01
The power tessellation (or power diagram or Laguerre diagram) turns out to be particularly useful in connection to a flexible class of random set models specified by an underlying process of interacting discs. We discuss how to simulate these models and calculate various geometric characteristics...
A Model for Setting Performance Objectives for Salmonella in the Broiler Supply Chain
Tromp, S.O.; Franz, E.; Rijgersberg, H.; Asselt, van E.D.; Fels-Klerx, van der H.J.
2010-01-01
A stochastic model for setting performance objectives for Salmonella in the broiler supply chain was developed. The goal of this study was to develop a model by which performance objectives for Salmonella prevalence at various points in the production chain can be determined, based on a preset final
Westen, van G.J.P.; Swier, R.F.; Wegner, J.K.; IJzerman, A.P.; Vlijmen, van H.; Bender, A.
2013-01-01
Background While a large body of work exists on comparing and benchmarking of descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 different protein descriptor sets have been compared with respect to their behavior
Gonzales-Barron, Ursula; Redmond, Grainne; Butler, Francis
2010-08-01
Prevalence and counts of Salmonella Typhimurium in fresh pork sausage packs at the point of retail were modeled by using Irish and United Kingdom retail surveys' data. A methodology for modeling a second-order distribution for the initial Salmonella concentration (lambda0) in pork sausage at retail was presented considering the uncertainty originated from the most probable-number (MPN) serial dilutions. A conditional probability of observing the tube counts given true Salmonella concentration in a contaminated pack was built from the MPN triplets of every sausage tested. A posterior distribution was then modeled under the assumption that the counts from each of the portions of sausage mix stuffed into casings (and subsequently packed) are Poisson distributed. In order to model the variability of lambda0 among contaminated sausage packs, MPN uncertainties were propagated to a predefined lognormal distribution. Because the sausage samples from the Irish survey were frozen prior to MPN analysis (which is expected to cause reduction in viable cells), the resulting distribution for lambda0 appeared greatly underestimated (mean: 0.514 CFU/g; 95% confidence interval [CI]: 0.02 to 2.74 CFU/g). The lambda0 distribution produced with the United Kingdom survey data (mean: 69.7 CFU/g; 95% CI: 15 to 200 CFU/g) was, however, more conservative, and is to be used along with the fitted distribution for prevalence of Salmonella Typhimurium in pork sausage packs in Ireland (gamma[37.997, 0.0013]; mean: 0.046; 95% CI: 0.032 to 0.064) as the main inputs of a stochastic consumer-phase exposure assessment model.
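The conditional probability of MPN tube counts given a true concentration, which the abstract builds for each sample, follows from the standard assumption that a tube inoculated with volume v is positive with probability 1 − exp(−λv). A minimal sketch, with illustrative dilution volumes and a grid search in place of the paper's Bayesian posterior:

```python
from math import comb, exp

def mpn_likelihood(lam, positives, volumes, tubes=3):
    """P(observed positive-tube counts | true concentration lam),
    assuming each tube of volume v is positive with prob 1 - exp(-lam*v)."""
    like = 1.0
    for k, v in zip(positives, volumes):
        p = 1.0 - exp(-lam * v)
        like *= comb(tubes, k) * p**k * (1 - p)**(tubes - k)
    return like

# MPN triplet (3, 1, 0) positives at 0.1, 0.01 and 0.001 g per tube:
grid = [i / 10 for i in range(1, 601)]               # candidate lam, CFU/g
best = max(grid, key=lambda l: mpn_likelihood(l, (3, 1, 0), (0.1, 0.01, 0.001)))
print(best)                                          # grid maximum-likelihood value
```

In the paper this likelihood feeds a posterior for λ0 rather than a point estimate; the grid maximization here is only to show that the likelihood peaks near the tabulated MPN value.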
The quantum probability calculus
International Nuclear Information System (INIS)
Jauch, J.M.
1976-01-01
The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)
Energy Technology Data Exchange (ETDEWEB)
Ertaş, Mehmet, E-mail: mehmetertas@erciyes.edu.tr; Keskin, Mustafa
2015-03-01
By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume–Emery–Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters; a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes and exhibit a dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point as well as reentrant behavior, depending strongly on the values of the system parameters. We compare and discuss these dynamic phase diagrams with those obtained within the Glauber-type stochastic dynamics based on the mean-field theory. - Highlights: • Dynamic magnetic behavior of the Blume–Emery–Griffiths system is investigated by using the path probability method. • The time variations of average magnetizations are studied to find the phases. • The temperature dependence of the dynamic magnetizations is investigated to obtain the dynamic phase transition points. • The dynamic phase diagrams are compared with those obtained within the Glauber-type stochastic dynamics based on the mean-field theory.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
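The gain from a combined diagnostic score can be sketched with the Mann-Whitney form of the AUC; the two simulated biomarkers and the linear combination below stand in for the paper's Bayesian multivariate model and are not its actual fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
d = rng.binomial(1, 0.3, n)                  # true disease status
b1 = rng.normal(loc=1.0 * d, scale=1.0)      # biomarker 1
b2 = rng.normal(loc=0.8 * d, scale=1.0)      # biomarker 2

def auc(score, label):
    """Mann-Whitney AUC: P(score of a diseased subject > score of a healthy one)."""
    pos, neg = score[label == 1], score[label == 0]
    return float((pos[:, None] > neg[None, :]).mean())

# With conditionally independent Gaussian markers, this linear score is
# monotone in the predictive probability P(D=1 | b1, b2):
combined = 1.0 * b1 + 0.8 * b2
print(round(auc(b1, d), 3), round(auc(combined, d), 3))
```

The second AUC (the "cAUC" analogue) exceeds the single-marker AUC, which is the comparison the paper formalizes with and without a perfect reference standard.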
Thompson, Bryony A; Goldgar, David E; Paterson, Carol; Clendenning, Mark; Walters, Rhiannon; Arnold, Sven; Parsons, Michael T; Walsh, Michael D; Gallinger, Steven; Haile, Robert W; Hopper, John L; Jenkins, Mark A; Lemarchand, Loic; Lindor, Noralane M; Newcomb, Polly A; Thibodeau, Stephen N; Young, Joanne P; Buchanan, Daniel D; Tavtigian, Sean V; Spurdle, Amanda B
2013-01-01
Mismatch repair (MMR) gene sequence variants of uncertain clinical significance are often identified in suspected Lynch syndrome families, and this constitutes a challenge for both researchers and clinicians. Multifactorial likelihood model approaches provide a quantitative measure of MMR variant pathogenicity, but first require input of likelihood ratios (LRs) for different MMR variation-associated characteristics from appropriate, well-characterized reference datasets. Microsatellite instability (MSI) and somatic BRAF tumor data for unselected colorectal cancer probands of known pathogenic variant status were used to derive LRs for tumor characteristics using the Colon Cancer Family Registry (CFR) resource. These tumor LRs were combined with variant segregation within families, and estimates of prior probability of pathogenicity based on sequence conservation and position, to analyze 44 unclassified variants identified initially in Australasian Colon CFR families. In addition, in vitro splicing analyses were conducted on the subset of variants based on bioinformatic splicing predictions. The LR in favor of pathogenicity was estimated to be ~12-fold for a colorectal tumor with a BRAF mutation-negative MSI-H phenotype. For 31 of the 44 variants, the posterior probabilities of pathogenicity were such that altered clinical management would be indicated. Our findings provide a working multifactorial likelihood model for classification that carefully considers mode of ascertainment for gene testing. © 2012 Wiley Periodicals, Inc.
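The multifactorial arithmetic is just Bayes' rule on the odds scale: prior odds times the product of likelihood ratios. The ~12-fold tumor LR comes from the abstract; the prior and the segregation LR below are hypothetical.

```python
def posterior(prior, *lrs):
    """Posterior probability of pathogenicity: prior odds times the
    product of independent likelihood ratios, mapped back to a probability."""
    odds = prior / (1 - prior)
    for lr in lrs:
        odds *= lr
    return odds / (1 + odds)

prior = 0.1           # hypothetical prior from sequence conservation/position
lr_tumor = 12.0       # ~12-fold LR for a BRAF-negative MSI-H tumor (abstract)
lr_segregation = 3.0  # hypothetical co-segregation LR
print(round(posterior(prior, lr_tumor, lr_segregation), 3))   # → 0.8
```

A posterior crossing a clinical threshold (e.g., > 0.95 or < 0.05 in standard classification schemes) is what would change management for the 31 reclassified variants.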
Directory of Open Access Journals (Sweden)
Laktineh Imad
2010-04-01
Full Text Available This course constitutes a brief introduction to probability applications in high energy physics. First the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented. The central limit theorem and its consequences are analysed. Finally some numerical methods used to produce different kinds of probability distribution are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
Ash, Robert B; Lukacs, E
1972-01-01
Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var
Constructing set-valued fundamental diagrams from jamiton solutions in second order traffic models
Seibold, Benjamin
2013-09-01
Fundamental diagrams of vehicular traffic flow are generally multivalued in the congested flow regime. We show that such set-valued fundamental diagrams can be constructed systematically from simple second order macroscopic traffic models, such as the classical Payne-Whitham model or the inhomogeneous Aw-Rascle-Zhang model. These second order models possess nonlinear traveling wave solutions, called jamitons, and the multivalued parts in the fundamental diagram correspond precisely to jamiton-dominated solutions. This study shows that transitions from function-valued to set-valued parts in a fundamental diagram arise naturally in well-known second order models. As a particular consequence, these models intrinsically reproduce traffic phases. © American Institute of Mathematical Sciences.
Directory of Open Access Journals (Sweden)
Yadira Chinique de Armas
Full Text Available The general lack of well-preserved juvenile skeletal remains from Caribbean archaeological sites has, in the past, prevented evaluations of juvenile dietary changes. Canímar Abajo (Cuba, with a large number of well-preserved juvenile and adult skeletal remains, provided a unique opportunity to fully assess juvenile paleodiets from an ancient Caribbean population. Ages for the start and the end of weaning and possible food sources used for weaning were inferred by combining the results of two Bayesian probability models that help to reduce some of the uncertainties inherent to bone collagen isotope based paleodiet reconstructions. Bone collagen (31 juveniles, 18 adult females was used for carbon and nitrogen isotope analyses. The isotope results were assessed using two Bayesian probability models: Weaning Ages Reconstruction with Nitrogen isotopes and Stable Isotope Analyses in R. Breast milk seems to have been the most important protein source until two years of age with some supplementary food such as tropical fruits and root cultigens likely introduced earlier. After two, juvenile diets were likely continuously supplemented by starch rich foods such as root cultigens and legumes. By the age of three, the model results suggest that the weaning process was completed. Additional indications suggest that animal marine/riverine protein and maize, while part of the Canímar Abajo female diets, were likely not used to supplement juvenile diets. The combined use of both models here provided a more complete assessment of the weaning process for an ancient Caribbean population, indicating not only the start and end ages of weaning but also the relative importance of different food sources for different age juveniles.
Group theoretical construction of two-dimensional models with infinite sets of conservation laws
International Nuclear Information System (INIS)
D'Auria, R.; Regge, T.; Sciuto, S.
1980-01-01
We explicitly construct some classes of field theoretical 2-dimensional models associated with symmetric spaces G/H according to a general scheme proposed in an earlier paper. We treat the SO(n + 1)/SO(n) and SU(n + 1)/U(n) case, giving their relationship with the O(n) sigma-models and the CP(n) models. Moreover, we present a new class of models associated to the SU(n)/SO(n) case. All these models are shown to possess an infinite set of local conservation laws. (orig.)
The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method
DEFF Research Database (Denmark)
Rezanova, Natalia Jurjevna; Ryan, David
The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Using data from the train driver schedule of the Danish passenger railway operator DSB S-tog A/S, a solution method to the Train Driver Recovery Problem (TDRP) is developed. The TDRP...... is formulated as a set partitioning problem. The LP relaxation of the set partitioning formulation of the TDRP possesses strong integer properties. The proposed model is therefore solved via the LP relaxation and Branch & Price. Starting with a small set of drivers and train tasks assigned to the drivers within...
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2013-01-01
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...
Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed
2016-12-01
The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
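The combination itself is a probability mixture: each feature-set classifier contributes its class posteriors scaled by its weight, and the weights quantify the strength of each feature set. A minimal sketch with invented posteriors and weights:

```python
import numpy as np

# Class posteriors over 3 classes from 3 classifiers (one per feature set);
# the numbers are invented for illustration.
P = np.array([
    [0.6, 0.3, 0.1],   # classifier trained on feature set A
    [0.2, 0.5, 0.3],   # feature set B
    [0.5, 0.4, 0.1],   # feature set C
])
w = np.array([0.5, 0.2, 0.3])   # combination weights (strength of each set)

combined = w @ P                # mixture of posteriors, still sums to 1
print(combined, int(np.argmax(combined)))
```

In the paper the rows would be RNN posteriors over word classes and the weights would be estimated from data; reading off w then gives the feature-set importance ranking.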
Selecting an interprofessional education model for a tertiary health care setting.
Menard, Prudy; Varpio, Lara
2014-07-01
The World Health Organization describes interprofessional education (IPE) and collaboration as necessary components of all health professionals' education - in curriculum and in practice. However, no standard framework exists to guide healthcare settings in developing or selecting an IPE model that meets the learning needs of licensed practitioners in practice and that suits the unique needs of their setting. Initially, a broad review of the grey literature (organizational websites, government documents and published books) and healthcare databases was undertaken for existing IPE models. Subsequently, database searches of published papers using Scopus, Scholars Portal and Medline were undertaken. Through this search process five IPE models were identified in the literature. This paper attempts to: briefly outline the five different models of IPE that are presently offered in the literature; and illustrate how a healthcare setting can select the IPE model suited to its context using Reeves' seven key trends in developing IPE. In presenting these results, the paper contributes to the interprofessional literature by offering an overview of possible IPE models that can be used to inform the implementation or modification of interprofessional practices in a tertiary healthcare setting.
Rocchi, Paolo
2014-01-01
The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.
A Level Set Discontinuous Galerkin Method for Free Surface Flows - and Water-Wave Modeling
DEFF Research Database (Denmark)
Grooss, Jesper
2005-01-01
We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by a level set technique. We describe the discontinuous Galerkin method in general, and its application to the flow equations...... equations in time are discussed. We investigate the theory of differential algebraic equations, and connect the theory to current methods for solving the unsteady fluid flow equations. We explore the use of a semi-implicit spectral deferred correction method having potential to achieve high temporal order....... The deferred correction method is applied on the fluid flow equations and shows good results in periodic domains. We describe the design of a level set method for the free surface modeling. The level set utilizes the high order accurate discontinuous Galerkin method fully and represents smooth surfaces very...
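The level-set representation mentioned here can be illustrated in a few lines: the free surface is the zero contour of a scalar field φ (here a signed distance function), so the fluid region is simply φ < 0 and surface motion becomes advection of φ. The circle and grid are arbitrary test values, not from the thesis.

```python
import numpy as np

x = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5   # signed distance to a radius-0.5 circle

inside = phi < 0                   # fluid region; the surface is phi == 0
print(round(float(inside.mean()), 3))   # area fraction, close to pi/16
```

Because the interface is implicit, topology changes (merging drops, breaking waves) need no special handling, which is the main reason level sets pair well with high-order discontinuous Galerkin discretizations.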
Rixen, M.; Ferreira-Coelho, E.; Signell, R.
2008-01-01
Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
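The essence of a hyper-ensemble is a locally fitted linear combination of member-model forecasts, with a constant term providing the bias removal the abstract mentions. A sketch on synthetic data (not Adriatic drifter values), using ordinary least squares as the combination rule:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(size=50)                   # observed drift component
models = np.column_stack([
    truth + rng.normal(0.4, 0.3, 50),         # biased but skilful model
    0.5 * truth + rng.normal(0.0, 0.3, 50),   # damped model
    rng.normal(0.0, 1.0, 50),                 # uninformative model
])
A = np.column_stack([models, np.ones(50)])    # constant column = bias removal
w, *_ = np.linalg.lstsq(A, truth, rcond=None)

pred = A @ w
rmse_he = float(np.sqrt(np.mean((pred - truth) ** 2)))
rmse_best = min(float(np.sqrt(np.mean((models[:, j] - truth) ** 2)))
                for j in range(3))
print(rmse_he < rmse_best)                    # hyper-ensemble beats best member
```

In practice the weights are refitted locally in space and time from recent drifter observations, which is what gives the method its edge over any single deterministic model.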
Huang, Ying Che; Chang, Kuang Yi; Lin, Shih Pin; Chen, Kung; Chan, Kwok Hon; Chang, Polun
2013-08-01
As studies have pointed out, severity scores are imperfect at predicting an individual's clinical chance of survival. The clinical condition and pathophysiological status of these patients in the Intensive Care Unit might differ from or be more complicated than most predictive models account for. In addition, as the pathophysiological status changes over time, the likelihood of survival varies from day to day; in fact, it decreases over time, and a single prediction value cannot capture this. Clearly, alternative models and refinements are warranted. In this study, we used discrete-time-event models with the changes of clinical variables, including blood cell counts, to predict the daily probability of mortality in individual patients from day 3 to day 28 post Intensive Care Unit admission. Both models we built exhibited good discrimination in the training (overall area under ROC curve: 0.80 and 0.79, respectively) and validation cohorts (overall area under ROC curve: 0.78 and 0.76, respectively) for predicting daily ICU mortality. The paper describes the methodology, the development process and the content of the models, and discusses their potential to serve as the foundation of a new bedside advisory or alarm system. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
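The discrete-time mechanism can be shown in miniature: each ICU day d has a hazard h_d = P(death on day d | alive at day d) driven by that day's clinical variables, and survival to day t is the product of (1 − h_d). The logistic coefficients and daily counts below are invented, not the paper's fitted models.

```python
from math import exp

def hazard(intercept, coef, x):
    """Logistic per-day hazard given that day's covariate value."""
    return 1 / (1 + exp(-(intercept + coef * x)))

wbc = [11, 14, 18, 16, 12]            # hypothetical daily white-cell counts
surv, s = [], 1.0
for x in wbc:
    s *= 1 - hazard(-6.0, 0.2, x)     # survive this day given alive so far
    surv.append(s)
print([round(v, 3) for v in surv])    # cumulative survival, day by day
```

Fitting such a model amounts to logistic regression on person-day records, which is why it can ingest daily-updated laboratory values where a single admission-time severity score cannot.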
DEFF Research Database (Denmark)
Watling, David Paul; Rasmussen, Thomas Kjær; Prato, Carlo Giacomo
2015-01-01
the advantages of the two principles, namely the definition of unused routes in DUE and of mis-perception in SUE, such that the resulting choice sets of used routes are equilibrated. Two model families are formulated to address this issue: the first is a general version of SUE permitting bounded and discrete...... error distributions; the second is a Restricted SUE model with an additional constraint that must be satisfied for unused paths. The overall advantage of these model families consists in their ability to combine the unused routes with the use of random utility models for used routes, without the need...... to pre-specify the choice set. We present model specifications within these families, show illustrative examples, evaluate their relative merits, and identify key directions for further research....
Neural model for learning-to-learn of novel task sets in the motor domain.
Pitti, Alexandre; Braud, Raphaël; Mahé, Sylvain; Quoy, Mathias; Gaussier, Philippe
2013-01-01
During development, infants learn to differentiate their motor behaviors relative to various contexts by exploring and identifying the correct structures of causes and effects that they can perform; these structures of actions are called task sets or internal models. The ability to detect the structure of new actions, to learn them and to select on the fly the proper one given the current task set is one great leap in infants' cognition. This behavior is an important component of the child's capacity for learning-to-learn, a mechanism akin to intrinsic motivation that is argued to drive cognitive development. Accordingly, we propose to model a dual system based on (1) the learning of new task sets and on (2) their evaluation relative to their uncertainty and prediction error. The architecture is designed as a two-level-based neural system for context-dependent behavior (the first system) and task exploration and exploitation (the second system). In our model, the task sets are learned separately by reinforcement learning in the first network after their evaluation and selection in the second one. We run two different experimental setups to show the sensorimotor mapping and switching between tasks, a first one in a neural simulation for modeling cognitive tasks and a second one with an arm-robot for motor task learning and switching. We show that the interplay of several intrinsic mechanisms drives the rapid formation of the neural populations with respect to novel task sets.
Directory of Open Access Journals (Sweden)
Ilam Pratitis
2015-11-01
Full Text Available This study aims to determine the effect of applying the advance organizer learning model with a SETS vision on mastery of chemistry concepts in a high school in Semarang, on buffer solution material. The design used in this research is the non-equivalent control group design. Sampling was conducted with a purposive sampling technique; class XI Science 6 was obtained as the experimental class and class XI Science 5 as the control class. The data collection methods used were documentation, testing, observation, and questionnaires. The results showed that the average cognitive achievement of the experimental class was 84, while that of the control class was 82. Data analysis showed that applying the advance organizer learning model with a SETS vision increased mastery of chemical concepts by 4%, with a correlation coefficient of 0.2. Based on the results, it can be concluded that the learning model with an advance organizer and a SETS vision had a positive effect on mastery of chemistry concepts for buffer solution material. The advice given is that this learning model should also be applied to other chemistry materials, adapted as needed, so that its effect on learning outcomes in the form of chemistry concept mastery can be further increased. Keywords: Advance Organizer, Buffer Solution, Concept Mastery, SETS
Flood hazard probability mapping method
Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart
2015-04-01
In Sweden, spatially explicit approaches have been applied in various disciplines such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale, whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. The aim of the present study was to develop methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying flood probability for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features on the landscape. By using PCD data, realistic representations of high probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
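The weighted-index step reduces to a per-cell weighted sum of normalized catchment descriptors. The layers, weights, and threshold below are toy values, not the study's fitted ones:

```python
import numpy as np

# Normalized descriptor layers on a tiny 2x2 grid (1 = most flood-prone):
flatness = np.array([[0.9, 0.2], [0.7, 0.4]])
wetness  = np.array([[0.8, 0.1], [0.6, 0.5]])
clay     = np.array([[0.7, 0.3], [0.2, 0.6]])
w = np.array([0.5, 0.3, 0.2])        # hypothetical factor weights

index = w[0] * flatness + w[1] * wetness + w[2] * clay
high_hazard = index > 0.6            # arbitrary cutoff for "high probability"
print(index.round(2))
print(high_hazard)
```

In the study the weights would come from the statistical analysis of each PCD's significance for flood generation, rather than being chosen by hand.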
Lombardi, A; Faginas-Lago, N; Pacifici, L; Grossi, G
2015-07-21
Carbon dioxide molecules can store and release tens of kcal/mol upon collisions, and such an energy transfer strongly influences the energy disposal and the chemical processes in gases under the extreme conditions typical of plasmas and hypersonic flows. Moreover, the energy transfer involving CO2 characterizes the global dynamics of the Earth-atmosphere system and the energy balance of other planetary atmospheres. Contemporary developments in kinetic modeling of gaseous mixtures are connected to progress in the description of the energy transfer, and, in particular, the attempts to include non-equilibrium effects require to consider state-specific energy exchanges. A systematic study of the state-to-state vibrational energy transfer in CO2 + CO2 collisions is the focus of the present work, aided by a theoretical and computational tool based on quasiclassical trajectory simulations and an accurate full-dimension model of the intermolecular interactions. In this model, the accuracy of the description of the intermolecular forces (that determine the probability of energy transfer in molecular collisions) is enhanced by explicit account of the specific effects of the distortion of the CO2 structure due to vibrations. Results show that these effects are important for the energy transfer probabilities. Moreover, the role of rotational and vibrational degrees of freedom is found to be dominant in the energy exchange, while the average contribution of translations, under the temperature and energy conditions considered, is negligible. Remarkable is the fact that the intramolecular energy transfer only involves stretching and bending, unless one of the colliding molecules has an initial symmetric stretching quantum number greater than a threshold value estimated to be equal to 7.
What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models
Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman
2018-01-01
Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
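The refraction dependence is easy to see in the standard hour-angle formula: the Sun "sets" when its centre reaches altitude h0 = −(refraction + semidiameter), conventionally 34′ + 16′. Holding the geometry fixed and varying only the assumed refraction shows the timing shift; the latitude and declination below are arbitrary test values, not tied to the paper's observing sites.

```python
from math import acos, cos, degrees, radians, sin

def sunset_hour_angle(lat_deg, dec_deg, refraction_arcmin=34.0):
    """Hour angle (degrees) at which the Sun's centre reaches the
    'set' altitude h0 = -(refraction + 16' semidiameter)."""
    h0 = -radians((refraction_arcmin + 16.0) / 60.0)
    lat, dec = radians(lat_deg), radians(dec_deg)
    cos_h = (sin(h0) - sin(lat) * sin(dec)) / (cos(lat) * cos(dec))
    return degrees(acos(cos_h))

# 1 degree of hour angle = 4 minutes of time:
std = sunset_hour_angle(45.0, 20.0)                           # standard 34'
warm = sunset_hour_angle(45.0, 20.0, refraction_arcmin=40.0)  # extra refraction
print(round((warm - std) * 4.0, 2), "minutes later")
```

Even a few arcminutes of extra refraction shifts mid-latitude sunset by under a minute, but the same formula shows the shift growing rapidly as cos(lat)·cos(dec) shrinks, which is why high-latitude predictions are so sensitive.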
DECOFF Probabilities of Failed Operations
DEFF Research Database (Denmark)
Gintautas, Tomas
2015-01-01
A statistical procedure for estimating Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. The influence of the safety factor is also investigated. The DECOFF statistical method is benchmarked against the standard Alpha...
Fillenwarth, Brian Albert
As large countries such as China industrialize and concerns about global warming continue to grow, there is an increasing need for more environmentally friendly building materials. One promising material, known as a geopolymer, can be used as a portland cement replacement and in this capacity emits around 67% less carbon dioxide. In addition to potentially reducing carbon emissions, geopolymers can be synthesized from many industrial waste products, such as fly ash. Although the benefits of geopolymers are substantial, a few difficulties with designing geopolymer mixes have hindered widespread commercialization of the material. One such difficulty is the high variability of the materials used for their synthesis. Another is that the interrelationships between mix design variables, and how these interrelationships impact set behavior and compressive strength, are not well understood. A third complicating factor is that the role of calcium in these systems is not well understood. To overcome these barriers, this study developed predictive optimization models through genetic programming, using experimentally collected set times and compressive strengths of several geopolymer paste mixes. The developed set behavior models were shown to predict the correct set behavior from the mix design over 85% of the time. The strength optimization model was shown to be capable of predicting the compressive strength of a geopolymer paste from its mix design to within about 1 ksi of the actual strength. In addition, the optimization models give valuable insight into the key factors influencing strength development, as well as the key factors responsible for flash set and long set behaviors in geopolymer pastes. A method for designing geopolymer paste mixes was developed from the generated optimization models. This design method provides an invaluable tool for use in future geopolymer research as well as
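Genetic programming of the kind used here evolves candidate expression trees against measured data. The following is a minimal, generic symbolic-regression sketch of that technique: the dataset, variable names (`w`, `a`), and target function are invented stand-ins, not the study's mix-design data or its actual models.

```python
import random, operator

random.seed(42)  # deterministic sketch

# Hypothetical stand-in data: (water/binder ratio, activator dose) -> strength.
DATA = [((w, a), 5.0 + 3.0 * a - 4.0 * w * w)
        for w in (0.3, 0.4, 0.5) for a in (0.5, 1.0, 1.5)]

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
VARS = ['w', 'a']

def random_tree(depth=3):
    """Random expression tree: nested (op, left, right) tuples with
    variable names or numeric constants at the leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS + [round(random.uniform(-5, 5), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, tuple):
        return OPS[tree[0]](evaluate(tree[1], env), evaluate(tree[2], env))
    return env.get(tree, tree)  # variable lookup, or the constant itself

def fitness(tree):  # mean squared error on the data; lower is better
    return sum((evaluate(tree, {'w': w, 'a': a}) - y) ** 2
               for (w, a), y in DATA) / len(DATA)

def mutate(tree, depth=2):
    """Replace a random subtree with a fresh random one."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(depth)
    op, left, right = tree
    return (op, mutate(left, depth), mutate(right, depth))

def evolve(pop_size=200, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 4]          # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
```

The evolved tree `best` is a human-readable formula, which is what makes GP attractive when the goal is insight into which mix variables drive strength, not just a black-box prediction.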
Billingsley, Patrick
2012-01-01
Praise for the Third Edition: "It is, as far as I'm concerned, among the best books in math ever written... if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory. Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
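As a concrete illustration of the hidden Markov machinery mentioned above, the forward algorithm computes the likelihood of an observation sequence in time linear in its length. This is a textbook sketch with made-up two-state numbers, not code from the chapter:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm with per-step scaling for stability.
    pi: initial state probs (n,), A: transitions (n, n),
    B: emission probs (n, m), obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy two-state model (hypothetical numbers):
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],    # state 0 mostly emits symbol 0
              [0.2, 0.8]])   # state 1 mostly emits symbol 1
ll = forward_loglik([0, 0, 1, 1], pi, A, B)
```

The scaling trick (renormalizing `alpha` and accumulating the log of each step's total) is what makes the recursion usable on genomic-length sequences, where unscaled probabilities underflow.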
Directory of Open Access Journals (Sweden)
Gastón S. Milanesi
2016-11-01
probabilities of financial distress. Exotic barrier options offer an alternative approach to predicting financial distress, and their structure fits the firm value-volatility relationship better. The paper proposes a "naive" barrier option model, because it simplifies the estimation of the unobservable variables, like the firm's asset value and risk. First, simple call and barrier option models are developed in order to value the firm's capital and estimate the financial distress probability. Using a hypothetical case, a sensitivity exercise over period and volatility is proposed. A similar exercise is applied to estimate the capital value and financial distress probability for two Argentinian firms with different degrees of leverage, confirming the consistency of the relationship between volatility, value, and financial distress probability in the proposed model. Finally, the main conclusions are presented.
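The structural logic behind such models can be illustrated with textbook formulas: in the plain call-option (Merton) view, distress is assessed only at the debt's maturity, while the barrier (first-passage) view counts any earlier touch of a distress threshold. The sketch below uses standard closed forms with hypothetical numbers; it is not the paper's "naive" model and ignores its estimation of unobservable inputs.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def merton_distress_prob(V0, D, mu, sigma, T):
    """Probability that firm value ends below the debt face value D at
    horizon T (default checked at maturity only)."""
    d2 = (log(V0 / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return N(-d2)

def barrier_distress_prob(V0, L, mu, sigma, T):
    """First-passage probability that firm value touches barrier L (< V0)
    at any time before T -- the barrier-option view of distress."""
    m = mu - 0.5 * sigma**2
    b = log(L / V0)
    s = sigma * sqrt(T)
    return N((b - m * T) / s) + exp(2 * m * b / sigma**2) * N((b + m * T) / s)

# Illustrative (hypothetical) inputs: asset value 100, threshold 70,
# drift 5%, asset volatility 35%, two-year horizon.
p_maturity = merton_distress_prob(V0=100, D=70, mu=0.05, sigma=0.35, T=2)
p_barrier = barrier_distress_prob(V0=100, L=70, mu=0.05, sigma=0.35, T=2)
```

Because every path ending below the threshold must first touch it, the barrier probability always dominates the maturity-only probability, which is the sense in which the barrier structure fits the value-volatility-distress relationship more tightly.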