WorldWideScience

Sample records for calculating age-conditional probabilities

  1. Multiregion, multigroup collision probability method with white boundary condition for light water reactor thermalization calculations

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2005-01-01

A multiregion, multigroup collision probability method with white boundary condition is developed for thermalization calculations of light water moderated reactors. Hydrogen scattering is treated by Nelkin's kernel, while scattering from other nuclei is assumed to obey the free-gas scattering kernel. The isotropic return (white) boundary condition is applied directly by using the appropriate collision probabilities. Comparisons with alternative numerical methods show the validity of the present formulation. Comparisons with some experimental results indicate that the present formulation is capable of calculating disadvantage factors that are closer to the experimental results than those of alternative methods.

  2. Calculation of the Incremental Conditional Core Damage Probability on the Extension of Allowed Outage Time

    International Nuclear Information System (INIS)

    Kang, Dae Il; Han, Sang Hoon

    2006-01-01

RG 1.177 requires that the conditional risk (incremental conditional core damage probability and incremental conditional large early release probability: ICCDP and ICLERP), given that a specific component is out of service (OOS), be quantified for a permanent change of the allowed outage time (AOT) of a safety system. An AOT is the length of time that a particular component or system is permitted to be OOS while the plant is operating. The ICCDP is defined as: ICCDP = [(conditional CDF with the subject equipment OOS) − (baseline CDF with nominal expected equipment unavailabilities)] × [duration of the single AOT under consideration]. Any event rendering the component OOS can start the time clock for the limiting condition of operation of a nuclear power plant. Thus, the largest ICCDP among the ICCDPs estimated from any occurrence of the basic events in the component fault tree should be selected for determining whether the AOT can be extended or not. If the component is under preventive maintenance, the conditional risk can be calculated straightforwardly, without changing the CCF probability. The main concern is the estimation of the CCF probabilities, because there is a possibility that other, similar components fail due to the same root causes. The risk, given that the subject equipment is in a failed state, is quantified by setting the identified event of the subject equipment to TRUE. The CCF probabilities are also changed according to the identified failure cause. In previous studies, however, the ICCDP was quantified with the possibility of a simultaneous occurrence of two CCF events taken into consideration. Based on the above, we derived formulas for the CCF probabilities for the cases where a specific component is in a failed state, and we present sample calculation results of the ICCDP for the low pressure safety injection system (LPSIS) of Ulchin Unit 3.
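
    The ICCDP arithmetic defined above fits in a few lines. Below is a minimal sketch; the CDF values and the 72-hour AOT are invented for illustration, and the 5E-7 figure is only the commonly cited RG 1.177 acceptance guideline, not a value from the paper.

    ```python
    # Illustrative ICCDP arithmetic per the definition above. All numbers are
    # invented for demonstration; they are not taken from the paper.

    def iccdp(conditional_cdf, baseline_cdf, aot_hours):
        """ICCDP = (conditional CDF with equipment OOS - baseline CDF) * AOT.

        CDFs are core damage frequencies per hour here, so the result is a
        dimensionless probability increment over the outage.
        """
        return (conditional_cdf - baseline_cdf) * aot_hours

    # Example: CDF rises from 2.0e-9/h to 5.0e-8/h with the component OOS,
    # for a proposed 72-hour allowed outage time.
    print(iccdp(conditional_cdf=5.0e-8, baseline_cdf=2.0e-9, aot_hours=72.0))
    # -> ~3.5e-6, above the 5e-7 per-AOT guideline commonly cited from RG 1.177
    ```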

  3. Calculating the Probability of Returning a Loan with Binary Probability Models

    Directory of Open Access Journals (Sweden)

    Julian Vasilev

    2014-12-01

The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability. In this article, several influencing factors are established using statistical and econometric models. The main approach is concerned with applying probit and logit models in loan management institutions. A new aspect of credit risk analysis is given. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is proved that the month of signing a contract, the year of signing a contract, the gender and the age of the loan owner do not affect the probability of returning a loan. It is proved that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the given sum, decreases with the proximity of the customer, increases for people born in the beginning of the year and decreases for people born at the end of the year.
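
    A minimal sketch of the kind of binary model the article applies, on synthetic data. The predictor names and coefficients are stand-ins chosen to mimic the abstract's findings; none of this is the authors' dataset or their estimates.

    ```python
    # Logit and probit fits of a synthetic loan-return dataset. Effect signs
    # mimic the abstract's findings (larger sum and greater remoteness raise
    # the return probability; later birth month lowers it).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    sum_given = rng.uniform(1e3, 5e4, n)      # contract sum
    remoteness = rng.uniform(0.0, 300.0, n)   # distance of the borrower, km
    birth_month = rng.integers(1, 13, n)      # 1 = January ... 12 = December

    # Latent propensity consistent with the abstract's reported effects.
    z = 0.00004 * sum_given + 0.004 * remoteness - 0.1 * birth_month - 0.5
    returned = (z + rng.normal(0.0, 1.0, n) > 0).astype(int)

    X = sm.add_constant(np.column_stack([sum_given, remoteness, birth_month]))
    print(sm.Logit(returned, X).fit(disp=0).params)   # signs mirror simulation
    print(sm.Probit(returned, X).fit(disp=0).params)
    ```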

  4. Thermal disadvantage factor calculation by the multiregion collision probability method

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2004-01-01

A multi-region collision probability formulation that is capable of applying the white boundary condition directly is presented and applied to thermal neutron transport problems. The disadvantage factors computed are compared with their counterparts calculated by S_N methods with both direct and indirect application of the white boundary condition. The results of the ABH method and of the collision probability method with indirect application of the white boundary condition are also considered, and comparisons with benchmark Monte Carlo results are carried out. The studies show that the proposed formulation is capable of calculating the thermal disadvantage factor with sufficient accuracy without resorting to the fictitious scattering outer shell approximation associated with the indirect application of the white boundary condition in collision probability solutions.

  5. An Alternative Version of Conditional Probabilities and Bayes' Rule: An Application of Probability Logic

    Science.gov (United States)

    Satake, Eiki; Amato, Philip P.

    2008-01-01

    This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…

  6. 47 CFR 1.1623 - Probability calculation.

    Science.gov (United States)

    2010-10-01

Title 47 (Telecommunication), Vol. 1, revised 2010-10-01. FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures. § 1.1623 Probability calculation. (a) All calculations shall be...

  7. Probability in reasoning: a developmental test on conditionals.

    Science.gov (United States)

    Barrouillet, Pierre; Gauffroy, Caroline

    2015-04-01

Probabilistic theories have been claimed to constitute a new paradigm for the psychology of reasoning. A key assumption of these theories is captured by what they call the Equation: the hypothesis that the meaning of the conditional is probabilistic in nature and that the probability of "If p then q" is the conditional probability, in such a way that P(if p then q) = P(q|p). Using the probabilistic truth-table task, in which participants are required to evaluate the probability of "If p then q" sentences, the present study explored the pervasiveness of the Equation across ages (from early adolescence to adulthood), types of conditionals (basic, causal, and inducements) and contents. The results reveal that the Equation is a late developmental achievement, endorsed only by a narrow majority of educated adults for certain types of conditionals, depending on the content they involve. Age-related changes in evaluating the probability of all the conditionals studied closely mirror the development of truth-value judgements observed in previous studies with traditional truth-table tasks. We argue that our modified mental model theory can account for this development, and hence for the findings related to the probability task, which consequently do not support the probabilistic approach to human reasoning over alternative theories. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Probability calculations for three-part mineral resource assessments

    Science.gov (United States)

    Ellefsen, Karl J.

    2017-06-27

Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely of the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.

  9. Irreversibility and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

The mathematical entropy - unlike physical entropy - is simply a measure of uniformity for probability distributions in general. So understood, conditional entropies have the same logical structure as conditional probabilities. If, as is sometimes supposed, conditional probabilities are time-reversible, then so are conditional entropies and, paradoxically, both then share this symmetry with physical equations of motion. The paradox is, of course, that probabilities yield a direction to time both in statistical mechanics and quantum mechanics, while the equations of motion do not. The supposed time-reversibility of both conditionals seems also to involve a form of retrocausality that is related to, but possibly not the same as, that described by Costa de Beauregard. The retrocausality is paradoxically at odds with the generally presumed irreversibility of the quantum mechanical measurement process. Further paradox emerges if the supposed time-reversibility of the conditionals is linked with the idea that the thermodynamic entropy is the same thing as 'missing information', since this confounds the thermodynamic and mathematical entropies. However, it is shown that irreversibility is a formal consequence of conditional entropies and, hence, of conditional probabilities also. 8 refs. (Author)

  10. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures

    Science.gov (United States)

    Sloma, Michael F.; Mathews, David H.

    2016-01-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924

  11. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
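
    The quantity CUMBIN computes is a short sum. A Python sketch of the cumulative binomial for a k-out-of-n reliability check follows; the NASA program itself is in C, and this is not its code.

    ```python
    # Cumulative binomial probability, as used for k-out-of-n system reliability.
    from math import comb

    def cum_binom(n, k, p):
        """P(at least k successes in n independent trials, success prob. p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Example: a 2-out-of-3 system whose channels each work with probability 0.95.
    print(cum_binom(n=3, k=2, p=0.95))  # -> 0.99275
    ```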

  12. Calculation of the quantities of radiation risk in Japanese population

    International Nuclear Information System (INIS)

    Nakamura, Yuji

    1993-01-01

The purpose of this study was to reevaluate various indicators of radiation risk using the additive projection and multiplicative projection models proposed by the ICRP. The total death probability rate (1985) and the probability rate of cancer death (1983 to 1987) were used as the database. The following indicators were calculated: total conditional death probability rate and conditional death probability rate; normalized death age probability density and unconditional death probability rate; attributable lifetime probability of cancer death; and other risk indicators, including mean loss of life expectancy, reduction of life expectancy, mean annually committed probability of attributable cancer deaths, annual extra probability of cancer death, probability density of the age of death, maximum relative death probability rate (age at maximum relative rate), and probabilistic aging. In terms of the calculation of these risk indicators for overall cancer death, there was no great difference between the Japanese population and the ICRP values. When calculating according to the site of cancer, however, calculations of indicators for cancer mortality (or cancer cure rate) in the Japanese population might be different from the ICRP's calculations. (N.K.)

PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    Science.gov (United States)

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  14. Method to Calculate Accurate Top Event Probability in a Seismic PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong Univ., Seoul (Korea, Republic of)

    2014-05-15

ACUBE (Advanced Cutset Upper Bound Estimator) calculates the top event probability and importance measures from cutsets by dividing the cutsets into major and minor groups depending on the cutset probability: the cutsets that have higher probability are included in the major group and the others in the minor group, and the major cutsets are converted into a Binary Decision Diagram (BDD). ACUBE works by dividing the cutsets into these two groups (higher and lower cutset probability), calculating the top event probability and importance measures in each group, and combining the two results. ACUBE calculates the top event probability and importance measures of the higher cutset probability group exactly, while it calculates these measures for the lower cutset probability group with an approximation such as MCUB. The ACUBE algorithm is useful for decreasing the conservatism that is caused by approximating the top event probability and importance measure calculations with given cutsets. By applying the ACUBE algorithm to the seismic PSA cutsets, the accuracy of the top event probability and importance measures can be significantly improved. This study shows that careful attention should be paid, and an appropriate method provided, in order to avoid a significant overestimation of the top event probability. Due to the strengths of ACUBE explained in this study, ACUBE has become a vital tool for calculating a more accurate CDF from the seismic PSA cutsets than the conventional probability calculation method.
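
    A sketch of the split the abstract describes, with brute-force inclusion-exclusion standing in for the BDD step and an independence assumption for combining the two groups (ACUBE's actual combination is more careful); the cutsets and probabilities are invented.

    ```python
    # Major cutsets handled exactly, minor cutsets via the MCUB approximation.
    from itertools import combinations

    def exact_union(cutsets, p):
        """Exact P(union of cutsets) by inclusion-exclusion (small sets only)."""
        total = 0.0
        for r in range(1, len(cutsets) + 1):
            for combo in combinations(cutsets, r):
                events = set().union(*combo)
                term = 1.0
                for e in events:
                    term *= p[e]
                total += (-1) ** (r + 1) * term
        return total

    def mcub(cutsets, p):
        """Minimal cut upper bound: 1 - prod over cutsets of (1 - P(cutset))."""
        prod = 1.0
        for cs in cutsets:
            q = 1.0
            for e in cs:
                q *= p[e]
            prod *= 1.0 - q
        return 1.0 - prod

    p = {"A": 0.1, "B": 0.1, "C": 1e-4, "D": 1e-4}
    major = [{"A"}, {"B"}]            # high-probability cutsets: treated exactly
    minor = [{"C", "D"}, {"A", "C"}]  # low-probability cutsets: MCUB approximation

    # Combining the groups as independent is itself an approximation; ACUBE's
    # combination of the exact (BDD) part and the MCUB part is more careful.
    top = 1.0 - (1.0 - exact_union(major, p)) * (1.0 - mcub(minor, p))
    print(top)  # ~0.19001
    ```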

  15. Conditional Probability Modulates Visual Search Efficiency

    Directory of Open Access Journals (Sweden)

Bryan Cort

    2013-10-01

We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors, and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability – the likelihood of a particular color given a particular combination of two cues – varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.

  16. Conditional Probabilities in the Excursion Set Theory. Generic Barriers and non-Gaussian Initial Conditions

    CERN Document Server

    De Simone, Andrea; Riotto, Antonio

    2011-01-01

    The excursion set theory, where density perturbations evolve stochastically with the smoothing scale, provides a method for computing the dark matter halo mass function. The computation of the mass function is mapped into the so-called first-passage time problem in the presence of a moving barrier. The excursion set theory is also a powerful formalism to study other properties of dark matter halos such as halo bias, accretion rate, formation time, merging rate and the formation history of halos. This is achieved by computing conditional probabilities with non-trivial initial conditions, and the conditional two-barrier first-crossing rate. In this paper we use the recently-developed path integral formulation of the excursion set theory to calculate analytically these conditional probabilities in the presence of a generic moving barrier, including the one describing the ellipsoidal collapse, and for both Gaussian and non-Gaussian initial conditions. The non-Markovianity of the random walks induced by non-Gaussi...

  17. Towards a Categorical Account of Conditional Probability

    Directory of Open Access Journals (Sweden)

    Robert Furber

    2015-11-01

This paper presents a categorical account of conditional probability, covering both the classical and the quantum case. Classical conditional probabilities are expressed as a certain "triangle-fill-in" condition, connecting marginal and joint probabilities, in the Kleisli category of the distribution monad. The conditional probabilities are induced by a map together with a predicate (the condition). The latter is a predicate in the logic of effect modules on this Kleisli category. This same approach can be transferred to the category of C*-algebras (with positive unital maps), whose predicate logic is also expressed in terms of effect modules. Conditional probabilities can again be expressed via a triangle-fill-in property. In the literature, there are several proposals for what quantum conditional probability should be, and also there are extra difficulties not present in the classical case. At this stage, we only describe quantum systems with classical parametrization.

  18. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    Science.gov (United States)

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…

  19. Calculating the albedo characteristics by the method of transmission probabilities

    International Nuclear Information System (INIS)

    Lukhvich, A.A.; Rakhno, I.L.; Rubin, I.E.

    1983-01-01

The possibility of using the method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones is studied. The transmission probabilities method is a numerical method for solving the transport equation in integral form. All calculations have been conducted in a one-group approximation for planes and rods with different optical thicknesses and capture-to-scattering ratios. The above calculations for plane and cylindrical geometries have shown that the numerical method of transmission probabilities can be used for calculating the albedo characteristics of homogeneous and heterogeneous zones with high accuracy. In this case the computer time consumption is minimal, even for cylindrical geometry, if the interpolation calculation of the characteristics is used for the neutrons of the first path.

  20. Stage line diagram: an age-conditional reference diagram for tracking development.

    Science.gov (United States)

    van Buuren, Stef; Ooms, Jeroen C L

    2009-05-15

This paper presents a method for calculating stage line diagrams, a novel type of reference diagram useful for tracking developmental processes over time. Potential fields of application include dentistry (tooth eruption), oncology (tumor grading, cancer staging), virology (HIV infection and disease staging), psychology (stages of cognitive development), human development (pubertal stages) and chronic diseases (stages of dementia). Transition probabilities between successive stages are modeled as smoothly varying functions of age. Age-conditional references are calculated from the modeled probabilities by the mid-P value. It is possible to eliminate the influence of age by calculating standard deviation scores (SDS). The method is applied to empirical data to produce reference charts on secondary sexual maturation. The mean of the empirical SDS in the reference population is close to zero, whereas the variance depends on age. The stage line diagram provides quick insight into both the status (in SDS) and the tempo (in SDS/year) of development of an individual child. Other measures (e.g. height SDS, body mass index SDS) from the same child can be added to the chart. Diagrams for sexual maturation are available as a web application at http://vps.stefvanbuuren.nl/puberty. The stage line diagram expresses status and tempo of discrete changes on a continuous scale. Wider application of these measures opens up new analytic possibilities. (c) 2009 John Wiley & Sons, Ltd.
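
    The mid-P reference value described above has a simple form. A sketch with invented stage probabilities, under the natural reading that the SDS is the standard normal quantile of the mid-P value.

    ```python
    # Mid-P age-conditional reference: given the modeled probabilities of the
    # successive stages at one age, a child observed in stage s gets the value
    # P(stage < s) + 0.5 * P(stage = s). Stage probabilities are invented.
    from statistics import NormalDist

    def mid_p(stage_probs, s):
        return sum(stage_probs[:s]) + 0.5 * stage_probs[s]

    stage_probs = [0.10, 0.25, 0.40, 0.20, 0.05]  # P(stage 0..4) at this age
    value = mid_p(stage_probs, s=2)               # child observed in stage 2
    sds = NormalDist().inv_cdf(value)             # convert to an SDS (z-score)
    print(value, sds)                             # 0.55, ~0.126
    ```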

  1. Calculation of transition probabilities using the multiconfiguration Dirac-Fock method

    International Nuclear Information System (INIS)

    Kim, Yong Ki; Desclaux, Jean Paul; Indelicato, Paul

    1998-01-01

    The performance of the multiconfiguration Dirac-Fock (MCDF) method in calculating transition probabilities of atoms is reviewed. In general, the MCDF wave functions will lead to transition probabilities accurate to ∼ 10% or better for strong, electric-dipole allowed transitions for small atoms. However, it is more difficult to get reliable transition probabilities for weak transitions. Also, some MCDF wave functions for a specific J quantum number may not reduce to the appropriate L and S quantum numbers in the nonrelativistic limit. Transition probabilities calculated from such MCDF wave functions for nonrelativistically forbidden transitions are unreliable. Remedies for such cases are discussed

  2. Conditional probability of the tornado missile impact given a tornado occurrence

    International Nuclear Information System (INIS)

    Goodman, J.; Koch, J.E.

    1982-01-01

Using an approach based on statistical mechanics, an expression for the probability of the first missile strike is developed. The expression depends on two generic parameters (injection probability eta(F) and height distribution psi(Z,F)), which are developed in this study, and one plant specific parameter (number of potential missiles N/sub p/). The expression for the joint probability of simultaneous impact of multiple targets is also developed. This expression is applicable to the calculation of the probability of common cause failure due to tornado missiles. It is shown that the probability of the first missile strike can be determined using a uniform missile distribution model. It is also shown that the conditional probability of the second strike, given the first, is underestimated by the uniform model. The probability of the second strike is greatly increased if the missiles are in clusters large enough to cover both targets.

  3. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
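
    The convolution step is easy to illustrate. This sketch assumes independent links, which is precisely the assumption the study replaces with link-to-link conditional distributions; the PMFs are invented.

    ```python
    # Travel-time distribution of a two-link route as the convolution of the
    # two (discretized, independent) link travel-time PMFs.
    import numpy as np

    # PMFs over travel times 0..4 minutes for two successive links.
    link1 = np.array([0.0, 0.2, 0.5, 0.2, 0.1])
    link2 = np.array([0.0, 0.1, 0.6, 0.2, 0.1])

    route = np.convolve(link1, link2)   # PMF over route times 0..8 minutes
    print(route.sum())                  # -> 1.0 (still a probability mass)
    print(route.argmax())               # most likely route time: 4 minutes
    ```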

  4. Students' Understanding of Conditional Probability on Entering University

    Science.gov (United States)

    Reaburn, Robyn

    2013-01-01

    An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…

  5. Collective probabilities algorithm for surface hopping calculations

    International Nuclear Information System (INIS)

    Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto

    2003-01-01

General equations are derived that the transition probabilities of the hopping algorithms in surface hopping calculations must obey to assure equality between the average quantum and classical populations. These equations are solved for two particular cases. In the first, it is assumed that the probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations over all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is then a unique solution of the general equations assuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm that takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoided crossings, which shows the accuracy and computational efficiency of the proposed collective probabilities algorithm, the limitations of the FS algorithm, and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method.

  6. [Conditional probability analysis between tinnitus and comorbidities in patients attending the National Rehabilitation Institute-LGII in the period 2012-2013].

    Science.gov (United States)

    Gómez Toledo, Verónica; Gutiérrez Farfán, Ileana; Verduzco-Mendoza, Antonio; Arch-Tirado, Emilio

Tinnitus is defined as the conscious perception of a sensation of sound that occurs in the absence of an external stimulus. This audiological symptom affects 7% to 19% of the adult population. The aim of this study is to describe the associated comorbidities present in patients with tinnitus using joint and conditional probability analysis. Patients of both genders, diagnosed with unilateral or bilateral tinnitus, aged between 20 and 45 years, who had a full computerised medical record, were selected. Study groups were formed on the basis of the following clinical aspects: 1) audiological findings; 2) vestibular findings; 3) comorbidities such as temporomandibular dysfunction, tubal dysfunction and otosclerosis; and 4) triggering factors of tinnitus such as noise exposure, respiratory tract infection, and use of ototoxic substances and/or drugs. Of the patients with tinnitus, 27 (65%) reported hearing loss, 11 (26.19%) temporomandibular dysfunction, and 11 (26.19%) vestibular disorders. In the joint probability analysis, it was found that the probability that a patient with tinnitus has hearing loss was 27/42 ≈ 0.65, and 20/42 ≈ 0.47 for the bilateral type; the result was P(A ∩ B) = 30%. Bayes' theorem, P(Ai|B) = P(Ai ∩ B)/P(B), was used, and various probabilities were calculated. For patients with temporomandibular dysfunction and vestibular disorders, a posterior probability of P(Ai|B) = 31.44% was calculated. The joint and conditional probability approach should be considered as a tool for the study of different pathologies. Copyright © 2016 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.
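
    The arithmetic reported above can be reconstructed from the counts (42 patients in total). The final Bayes step uses a hypothetical P(B), back-solved so the posterior matches the reported 31.44%, since the abstract does not report P(B) itself.

    ```python
    # Joint and conditional probabilities from the reported counts.
    n_total = 42
    p_hearing_loss = 27 / n_total   # ~0.64 (reported as 0.65)
    p_bilateral = 20 / n_total      # ~0.48 (reported as 0.47)
    print(round(p_hearing_loss, 2), round(p_bilateral, 2))

    def bayes(p_a_and_b, p_b):
        """Bayes' theorem in the form used above: P(A|B) = P(A and B) / P(B)."""
        return p_a_and_b / p_b

    # With P(A and B) = 0.30 as reported and a hypothetical P(B) = 0.954,
    # the posterior is ~0.3144, matching the abstract's 31.44%.
    print(bayes(0.30, 0.954))
    ```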

  7. Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures

    Science.gov (United States)

    Prodromou, Theodosia

    2016-01-01

    In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…

  8. On calculating the probability of a set of orthologous sequences

    Directory of Open Access Journals (Sweden)

    Junfeng Liu

    2009-02-01

Junfeng Liu¹,², Liang Chen³, Hongyu Zhao⁴, Dirk F Moore¹,², Yong Lin¹,², Weichung Joe Shih¹,². ¹Biometrics Division, The Cancer Institute of New Jersey, New Brunswick, NJ, USA; ²Department of Biostatistics, School of Public Health, University of Medicine and Dentistry of New Jersey, Piscataway, NJ, USA; ³Department of Biological Sciences, University of Southern California, Los Angeles, CA, USA; ⁴Department of Epidemiology and Public Health, Yale University School of Medicine, New Haven, CT, USA. Abstract: Probabilistic DNA sequence models have been intensively applied to genome research. Within the evolutionary biology framework, this article investigates the feasibility of rigorously estimating the probability of a set of orthologous DNA sequences which evolve from a common progenitor. We propose Monte Carlo integration algorithms to sample the unknown ancestral and/or root sequences a posteriori, conditional on a reference sequence, and apply pairwise Needleman–Wunsch alignment between the sampled and non-reference species sequences to estimate the probability. We test our algorithms on both simulated and real sequences and compare the calculated probabilities from Monte Carlo integration to those induced by single multiple alignment. Keywords: evolution, Jukes–Cantor model, Monte Carlo integration, Needleman–Wunsch alignment, orthologous

  9. The Probability Approach to English If-Conditional Sentences

    Science.gov (United States)

    Wu, Mei

    2012-01-01

Users of the Probability Approach choose the right one from four basic types of conditional sentences--factual, predictive, hypothetical and counterfactual conditionals--by judging how likely it is (i.e. the probability) that the event in the result clause will take place when the condition in the if-clause is met. Thirty-three students from the experimental…

  10. The risk of major nuclear accident: calculation and perception of probabilities

    International Nuclear Information System (INIS)

    Leveque, Francois

    2013-01-01

Even before the Fukushima accident, eight major accidents had already occurred in nuclear power plants: a number higher than that expected by experts, and rather close to that corresponding to people's perception of the risk. The author discusses how to understand these differences and reconcile observations, the objective probability of accidents and subjective assessments of risk; why experts have been over-optimistic; whether public opinion is irrational regarding nuclear risk; and how to measure risk and its perception. He thus addresses and discusses the following issues: risk calculation (cost, calculated frequency of a major accident, bias between the number of observed accidents and model predictions); perceived probabilities and aversion to disasters (perception biases of probability, perception biases unfavourable to nuclear power); and the Bayes contribution and its application (the Bayes-Laplace law, statistics, the choice of an a priori probability, prediction of the next event, the probability of a core melt tomorrow).

  11. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
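
    For reference, Jeffrey's updating principle mentioned above is a one-line computation: P'(A) = sum over i of P(A|Bᵢ)·qᵢ, for a partition {Bᵢ} whose probabilities are reassigned to qᵢ by uncertain evidence. A sketch with invented numbers.

    ```python
    # Jeffrey conditionalization: update P(A) when evidence reassigns the
    # probabilities of a partition {B_i} to new values q_i.
    p_a_given_b = {"B1": 0.9, "B2": 0.4, "B3": 0.1}  # P(A | B_i), fixed
    q = {"B1": 0.2, "B2": 0.5, "B3": 0.3}            # new P(B_i) after evidence

    p_a_updated = sum(p_a_given_b[b] * q[b] for b in q)
    print(p_a_updated)  # 0.9*0.2 + 0.4*0.5 + 0.1*0.3 = 0.41
    ```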

  12. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    Science.gov (United States)

    Inoue, N.

    2017-12-01

The conditional probability of surface rupture is affected by various factors, such as shallow material properties, the process of the earthquake, ground motions and so on. Toda (2013) pointed out differences in the conditional probability of surface rupture between strike-slip and reverse faults, arising from the fault dip and the width of the seismogenic layer. This study evaluated the conditional probability of surface rupture by the following procedure. Fault geometry was determined from a randomly generated magnitude based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was randomly assigned within the seismogenic layer. Logistic analysis was performed on two data sets: the surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement indicated a higher probability for reverse faults than for strike-slip faults, and this result coincides with previous similar studies (i.e. Kagawa et al., 2004; Kataoka and Kusakabe, 2005). On the contrary, the probability estimated from the depth of the source fault indicated a higher probability for thrust faults than for strike-slip and reverse faults, and this trend is similar to the conditional probability of PFDHA results (Youngs et al., 2003; Moss and Ross, 2011). The probability from the combined simulated results of thrust and reverse faults is also low. The worldwide compiled reverse fault data include low-dip-angle earthquakes. On the other hand, in the case of Japanese reverse faults, there is a possibility that the conditional probability of reverse faults, with fewer low-dip-angle earthquakes, is low and similar to that of strike-slip faults (i.e. Takao et al., 2013). In the future, numerical simulation by considering failure condition of surface by the source

  13. Conditional probabilities in Ponzano-Regge minisuperspace

    International Nuclear Information System (INIS)

    Petryk, Roman; Schleich, Kristin

    2003-01-01

    We examine the Hartle-Hawking no-boundary initial state for the Ponzano-Regge formulation of gravity in three dimensions. We consider the behavior of conditional probabilities and expectation values for geometrical quantities in this initial state for a simple minisuperspace model consisting of a two-parameter set of anisotropic geometries on a 2-sphere boundary. We find dependence on the cutoff used in the construction of Ponzano-Regge amplitudes for expectation values of edge lengths. However, these expectation values are cutoff independent when computed in certain, but not all, conditional probability distributions. Conditions that yield cutoff independent expectation values are those that constrain the boundary geometry to a finite range of edge lengths. We argue that such conditions have a correspondence to fixing a range of local time, as classically associated with the area of a surface for spatially closed cosmologies. Thus these results may hint at how classical spacetime emerges from quantum amplitudes

  14. Determination of the failure probability in the weld region of AP-600 vessel for transient condition

    International Nuclear Information System (INIS)

    Wahyono, I.P.

    1997-01-01

The failure probability in the weld region of the AP-600 vessel was determined for a transient condition scenario. The transient considered is an increase in heat removal from the primary cooling system due to a sudden opening of safety valves or steam relief valves on the secondary cooling system or the steam generator. Temperature and pressure in the vessel were taken as the basis of the deterministic calculation of the stress intensity factor. The film coefficient of convective heat transfer is calculated as a function of transient time and water parameters. Pressure, material temperature, flaw depth and transient time are the variables for the stress intensity factor. The failure probability was assessed by using the above information together with the Octavia II and Marshall flaw and probability distributions. The failure probability was calculated by probabilistic fracture mechanics simulation applied to the weld region. Failure of the vessel is assumed to be failure of the weld material containing one crack whose applied stress intensity factor is higher than the critical stress intensity factor. The VISA II code (Vessel Integrity Simulation Analysis II) was used for the deterministic calculation and the simulation. The failure probability of the material is 1E-5 for the Octavia II distribution and 4E-6 for the Marshall distribution for each transient event postulated. Failure occurred 1.7 minutes into the transient, at a pressure of 12.53 ksi.

  15. Monte Carlo methods to calculate impact probabilities

    Science.gov (United States)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  16. Retrocausality and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

Costa de Beauregard has proposed that physical causality be identified with conditional probability. The proposal is shown to be vulnerable on two accounts. The first, though mathematically trivial, seems to be decisive so far as the current formulation of the proposal is concerned. The second lies in a physical inconsistency which seems to have its source in a Copenhagen-like disavowal of realism in quantum mechanics. 6 refs. (Author)

  17. Conditional probability on MV-algebras

    Czech Academy of Sciences Publication Activity Database

    Kroupa, Tomáš

    2005-01-01

Vol. 149, No. 2 (2005), pp. 369-381. ISSN 0165-0114. R&D Projects: GA AV ČR IAA2075302. Institutional research plan: CEZ:AV0Z10750506. Keywords: conditional probability * tribe * MV-algebra. Subject RIV: BA - General Mathematics. Impact factor: 1.039, year: 2005

  18. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

The kernel density was determined based on sampling points obtained from a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was calculated by importance sampling, performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find the parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
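
    A sketch of the importance-sampling estimator the method builds on, with an analytic limit state standing in for the Kriging metamodel and a shifted normal standing in for the kernel-density importance function; none of this is the paper's implementation.

    ```python
    # Importance-sampling estimate of a failure probability P(g(X) < 0).
    import numpy as np
    from scipy.stats import norm

    def g(x):                  # limit state: failure when g(x) < 0,
        return 4.0 - x         # i.e. P_f = P(X > 4) for X ~ N(0, 1)

    f = norm(0, 1)             # nominal density of X
    h = norm(4, 1)             # importance density centered at the limit state

    rng = np.random.default_rng(1)
    x = h.rvs(size=100_000, random_state=rng)
    w = f.pdf(x) / h.pdf(x)    # likelihood ratios reweight the h-samples
    p_f = np.mean((g(x) < 0) * w)
    print(p_f, 1 - f.cdf(4))   # estimate vs exact ~3.17e-5
    ```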

  19. No shortcut solution to the problem of Y-STR match probability calculation.

    Science.gov (United States)

    Caliebe, Amke; Jochens, Arne; Willuweit, Sascha; Roewer, Lutz; Krawczak, Michael

    2015-03-01

    Match probability calculation is deemed much more intricate for lineage genetic markers, including Y-chromosomal short tandem repeats (Y-STRs), than for autosomal markers. This is because, owing to the lack of recombination, strong interdependence between markers is likely, which implies that haplotype frequency estimates cannot simply be obtained through the multiplication of allele frequency estimates. As yet, however, the practical relevance of this problem has not been studied in much detail using real data. In fact, such scrutiny appears well warranted because the high mutation rates of Y-STRs and the possibility of backward mutation should have worked against the statistical association of Y-STRs. We examined haplotype data of 21 markers included in the PowerPlex(®)Y23 set (PPY23, Promega Corporation, Madison, WI) originating from six different populations (four European and two Asian). Assessing the conditional entropies of the markers, given different subsets of markers from the same panel, we demonstrate that the PowerPlex(®)Y23 set cannot be decomposed into smaller marker subsets that would be (conditionally) independent. Nevertheless, in all six populations, >94% of the joint entropy of the 21 markers is explained by the seven most rapidly mutating markers. Although this result might render a reduction in marker number a sensible option for practical casework, the partial haplotypes would still be almost as diverse as the full haplotypes. Therefore, match probability calculation remains difficult and calls for the improvement of currently available methods of haplotype frequency estimation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
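
    The conditional-entropy bookkeeping used above reduces to H(Y|X) = H(X,Y) − H(X). A toy two-marker sketch (the haplotypes are invented, not PPY23 data):

    ```python
    # Conditional entropy of marker Y given marker X from haplotype counts.
    from collections import Counter
    from math import log2

    haplotypes = [("17", "13"), ("17", "13"), ("18", "14"), ("19", "14")]

    def entropy(items):
        counts = Counter(items)
        n = len(items)
        return -sum(c / n * log2(c / n) for c in counts.values())

    h_x = entropy([x for x, _ in haplotypes])
    h_xy = entropy(haplotypes)
    print(h_xy - h_x)  # H(Y|X) = 0 bits here: Y is fully determined by X,
                       # so adding Y would not increase haplotype diversity
    ```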

  20. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    Science.gov (United States)

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  1. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

We present the application of a simple classical statistical mechanics-based model to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. We calculate, for these anisotropic systems, the magnetization curves, energy landscapes and probability distributions for different sets of relevant parameters and magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. -- Highlights: ► We calculate magnetization curves and probability angular distribution of the magnetization. ► The magnetization curves are consistent with probability results for the studied systems. ► Monoclinic and hexagonal systems behave differently due to their different anisotropies

  2. The Probable Ages of Asteroid Families

    Science.gov (United States)

    Harris, A. W.

    1993-01-01

There has been considerable debate recently over the ages of the Hirayama families, and in particular whether some of the families are very young. It is a straightforward task to estimate the characteristic time of a collision between a body of a given diameter, d_o, by another body of diameter greater than or equal to d_1. What is less straightforward is to estimate the critical diameter ratio, d_1/d_o, above which catastrophic disruption occurs, from which one could infer the probable ages of the Hirayama families, knowing the diameter of the parent body, d_o. One can gain some insight into the probable value of d_1/d_o, and into the likely ages of existing families, from the plot below. I have computed the characteristic time between collisions in the asteroid belt with a size ratio greater than or equal to d_1/d_o, for 4 sizes of target asteroids, d_o. The solid curves to the lower right are the characteristic times for a single object...

  3. ELIPGRID-PC: A PC program for calculating hot spot probabilities

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1994-10-01

ELIPGRID-PC, a new personal computer program, has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability of hit versus cost data for graphing with spreadsheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program.

  4. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and the analytical results obtained by G.I. Bell when the c² approximation is used, and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution, which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation, and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology, where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time-dependent distributions, and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
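
    The discrete inversion described above can be sketched in a few lines: evaluate a probability generating function on the unit circle and recover the distribution with an inverse DFT. A geometric distribution stands in for the neutron-number generating function; the aliased tail beyond N is negligible here.

    ```python
    # Recover P(n) from a probability generating function G(z) = sum P(n) z^n
    # by sampling G on the unit circle and applying an inverse DFT.
    import numpy as np

    p = 0.3

    def G(z):
        """PGF of a geometric distribution on n = 0, 1, 2, ..."""
        return p / (1.0 - (1.0 - p) * z)

    N = 64                                      # sample points = max n recovered
    z = np.exp(-2j * np.pi * np.arange(N) / N)  # G sampled on the unit circle
    probs = np.fft.ifft(G(z)).real              # inverse DFT recovers P(0..N-1)
    print(probs[:4])                            # ~[0.3, 0.21, 0.147, 0.1029]
    ```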

  5. Age, Loss Minimization, and the Role of Probability for Decision-Making.

    Science.gov (United States)

    Best, Ryan; Freund, Alexandra M

    2018-04-05

    Older adults are stereotypically considered to be risk averse compared to younger age groups, although meta-analyses on age and the influence of gain/loss framing on risky choices have not found empirical evidence for age differences in risk-taking. The current study extends the investigation of age differences in risk preference by including analyses on the effect of the probability of a risky option on choices in gain versus loss situations. Participants (n = 130 adults aged 19-80 years) chose between a certain option and a risky option of varying probability in gain- and loss-framed gambles with actual monetary outcomes. Only younger adults displayed an overall framing effect. Younger and older adults responded differently to probability fluctuations depending on the framing condition. Older adults were more likely to choose the risky option as the likelihood of avoiding a larger loss increased and as the likelihood of a larger gain decreased. Younger adults responded with the opposite pattern: they were more likely to choose the risky option as the likelihood of a larger gain increased and as the likelihood of avoiding a (slightly) larger loss decreased. Results suggest that older adults are more willing to select a risky option when it increases the likelihood that larger losses be avoided, whereas younger adults are more willing to select a risky option when it allows for slightly larger gains. This finding supports expectations based on theoretical accounts of goal orientation shifting away from securing gains in younger adulthood towards maintenance and avoiding losses in older adulthood. Findings are also discussed in respect to the affective enhancement perspective and socioemotional selectivity theory. © 2018 S. Karger AG, Basel.

  6. A semi-mechanistic approach to calculate the probability of fuel defects

    International Nuclear Information System (INIS)

    Tayal, M.; Millen, E.; Sejnoha, R.

    1992-10-01

In this paper the authors describe the status of a semi-mechanistic approach to the calculation of the probability of fuel defects. This approach expresses the defect probability in terms of fundamental parameters such as local stresses, local strains, and fission product concentration. The calculations of defect probability continue to reflect the influences of the conventional parameters like power ramp, burnup and CANLUB. In addition, the new approach provides a mechanism to account for the impacts of additional factors involving detailed fuel design and reactor operation, for example pellet density, pellet shape and size, sheath diameter and thickness, pellet/sheath clearance, and coolant temperature and pressure. The approach has been validated against a previous empirical correlation. An illustrative example shows how the defect thresholds are influenced by changes in the internal design of the element and in the coolant pressure. (Author) (7 figs., tab., 12 refs.)

  7. 'PRIZE': A program for calculating collision probabilities in R-Z geometry

    International Nuclear Information System (INIS)

    Pitcher, H.H.W.

    1964-10-01

    PRIZE is an IBM7090 program which computes collision probabilities for systems with axial symmetry and outputs them on cards in suitable format for the PIP1 program. Its method of working, data requirements, output, running time and accuracy are described. The program has been used to compute non-escape (self-collision) probabilities of finite circular cylinders, and a table is given by which non-escape probabilities of slabs, finite and infinite circular cylinders, infinite square cylinders, cubes, spheres and hemispheres may quickly be calculated to 1/2% or better. (author)

  8. Fostering Positive Attitude in Probability Learning Using Graphing Calculator

    Science.gov (United States)

    Tan, Choo-Kim; Harji, Madhubala Bava; Lau, Siong-Hoe

    2011-01-01

    Although a plethora of research evidence highlights positive and significant outcomes of the incorporation of the Graphing Calculator (GC) in mathematics education, its use in the teaching and learning process appears to be limited. The obvious need to revisit the teaching and learning of Probability has resulted in this study, i.e. to incorporate…

  9. Frequency Calculation For Loss Coolant Accident In The Nuclear Reactor

    International Nuclear Information System (INIS)

    Sony, DT

    1996-01-01

    The use of a LOCA as an initiating event rests on engineering judgement, because it is a rare condition, so the LOCA frequency must be determined by probabilistic and statistical methods. With these methods the frequency can be estimated from factors such as pipe size, welds, age, learning curve, and quality. The LOCA frequency has been calculated for a simplified piping system model, with estimates based in particular on the size and weld factors. From this calculation, the LOCA frequency is 9.82×10⁻⁶/year.

  10. Age replacement policy based on imperfect repair with random probability

    International Nuclear Information System (INIS)

    Lim, J.H.; Qu, Jian; Zuo, Ming J.

    2016-01-01

    In most of the literature on age replacement policies, failures before the planned replacement age can be either minimally repaired or perfectly repaired, depending on the type of failure, the cost of repairs, and so on. In this paper, we propose an age replacement policy based on imperfect repair with random probability. The proposed policy incorporates the case in which a failure can be either minimally repaired or perfectly repaired with random probabilities. The mathematical formulas of the expected cost rate per unit time are derived for both the infinite-horizon case and the one-replacement-cycle case. For each case, we show that the optimal replacement age exists and is finite. - Highlights: • We propose a new age replacement policy with random probability of perfect repair. • We develop the expected cost per unit time. • We discuss the optimal age for replacement minimizing the expected cost rate.
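
    For orientation, a sketch of the classical age-replacement cost-rate calculation that this policy generalizes (the random-probability extension would modify the cycle cost and cycle length terms). The Weibull lifetime and the failure/replacement costs are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar
    from scipy.stats import weibull_min

    life = weibull_min(c=2.5, scale=100.0)  # assumed lifetime distribution
    c_f, c_p = 50.0, 10.0                   # failure cost > planned-replacement cost

    def cost_rate(T):
        # Expected cost per unit time over one renewal cycle of length min(X, T):
        # C(T) = [c_f*F(T) + c_p*(1 - F(T))] / E[min(X, T)].
        mean_cycle = quad(lambda t: life.sf(t), 0.0, T)[0]
        return (c_f * life.cdf(T) + c_p * life.sf(T)) / mean_cycle

    res = minimize_scalar(cost_rate, bounds=(1.0, 300.0), method="bounded")
    print(f"optimal replacement age ~ {res.x:.1f}, cost rate ~ {res.fun:.4f}")
    ```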

  11. Statistical learning of action: the role of conditional probability.

    Science.gov (United States)

    Meyer, Meredith; Baldwin, Dare

    2011-12-01

    Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults, namely those more successful at identifying actions that had been seen more frequently than comparison sequences, were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.

  12. Cognitive-psychology expertise and the calculation of the probability of a wrongful conviction.

    Science.gov (United States)

    Rouder, Jeffrey N; Wixted, John T; Christenfeld, Nicholas J S

    2018-05-08

    Cognitive psychologists are familiar with how their expertise in understanding human perception, memory, and decision-making is applicable to the justice system. They may be less familiar with how their expertise in statistical decision-making and their comfort working in noisy real-world environments is just as applicable. Here we show how this expertise in ideal-observer models may be leveraged to calculate the probability of guilt of Gary Leiterman, a man convicted of murder on the basis of DNA evidence. We show by common probability theory that Leiterman is likely a victim of a tragic contamination event rather than a murderer. Making any calculation of the probability of guilt necessarily relies on subjective assumptions. The conclusion about Leiterman's innocence is not overly sensitive to the assumptions: the probability of innocence remains high for a wide range of reasonable assumptions. We note that cognitive psychologists may be well suited to make these calculations because as working scientists they may be comfortable with the role a reasonable degree of subjectivity plays in analysis.

  13. Bayesian probability analysis: a prospective demonstration of its clinical utility in diagnosing coronary disease

    International Nuclear Information System (INIS)

    Detrano, R.; Yiannikas, J.; Salcedo, E.E.; Rincon, G.; Go, R.T.; Williams, G.; Leatherman, J.

    1984-01-01

    One hundred fifty-four patients referred for coronary arteriography were prospectively studied with stress electrocardiography, stress thallium scintigraphy, cine fluoroscopy (for coronary calcifications), and coronary angiography. Pretest probabilities of coronary disease were determined based on age, sex, and type of chest pain. These and pooled literature values for the conditional probabilities of test results based on disease state were used in Bayes theorem to calculate posttest probabilities of disease. The results of the three noninvasive tests were compared for statistical independence, a necessary condition for their simultaneous use in Bayes theorem. The test results were found to demonstrate pairwise independence in patients with and those without disease. Some dependencies that were observed between the test results and the clinical variables of age and sex were not sufficient to invalidate application of the theorem. Sixty-eight of the study patients had at least one major coronary artery obstruction of greater than 50%. When these patients were divided into low-, intermediate-, and high-probability subgroups according to their pretest probabilities, noninvasive test results analyzed by Bayesian probability analysis appropriately advanced 17 of them by at least one probability subgroup while only seven were moved backward. Of the 76 patients without disease, 34 were appropriately moved into a lower probability subgroup while 10 were incorrectly moved up. We conclude that posttest probabilities calculated from Bayes theorem more accurately classified patients with and without disease than did pretest probabilities, thus demonstrating the utility of the theorem in this application
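
    The Bayesian updating described here can be sketched as follows. The sensitivities and specificities are illustrative placeholders rather than the pooled literature values used in the study, and conditional independence of the tests is assumed (the condition the study verified).

    ```python
    def posttest_probability(pretest, results):
        """Update P(disease) with conditionally independent test results.

        results: list of (sensitivity, specificity, positive) tuples.
        """
        odds = pretest / (1.0 - pretest)
        for sens, spec, positive in results:
            # Likelihood ratio of the observed result, diseased vs non-diseased.
            lr = sens / (1.0 - spec) if positive else (1.0 - sens) / spec
            odds *= lr
        return odds / (1.0 + odds)

    # Illustrative numbers only: stress ECG and thallium positive, fluoroscopy negative.
    tests = [(0.70, 0.80, True), (0.85, 0.90, True), (0.60, 0.85, False)]
    print(round(posttest_probability(0.30, tests), 3))
    ```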

  14. Calculating failure probabilities for TRISO-coated fuel particles using an integral formulation

    International Nuclear Information System (INIS)

    Miller, Gregory K.; Maki, John T.; Knudson, Darrell L.; Petti, David A.

    2010-01-01

    The fundamental design for a gas-cooled reactor relies on the safe behavior of the coated particle fuel. The coating layers surrounding the fuel kernels in these spherical particles, termed the TRISO coating, act as a pressure vessel that retains fission products. The quality of the fuel is reflected in the number of particle failures that occur during reactor operation, where failed particles become a source for fission products that can then diffuse through the fuel element. The failure probability for any batch of particles, which has traditionally been calculated using the Monte Carlo method, depends on statistical variations in design parameters and on variations in the strengths of coating layers among particles in the batch. An alternative approach to calculating failure probabilities is developed herein that uses direct numerical integration of a failure probability integral. Because this is a multiple integral where the statistically varying parameters become integration variables, a fast numerical integration approach is also developed. In sample cases analyzed involving multiple failure mechanisms, results from the integration methods agree closely with Monte Carlo results. Additionally, the fast integration approach, particularly, is shown to significantly improve efficiency of failure probability calculations. These integration methods have been implemented in the PARFUME fuel performance code along with the Monte Carlo method, where each serves to verify accuracy of the others.
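
    A generic stress-strength illustration of the two approaches (not the PARFUME implementation): the failure probability integral P_f = ∫ f_stress(s) F_strength(s) ds is evaluated by direct numerical integration and checked against a Monte Carlo estimate, with assumed normal distributions.

    ```python
    import numpy as np
    from scipy import integrate, stats

    stress = stats.norm(loc=200.0, scale=30.0)    # assumed stress, e.g. MPa
    strength = stats.norm(loc=350.0, scale=50.0)  # assumed coating strength

    # Direct numerical integration of the failure probability integral.
    pf_int, _ = integrate.quad(lambda s: stress.pdf(s) * strength.cdf(s), 0.0, 600.0)

    # Monte Carlo reference estimate.
    rng = np.random.default_rng(0)
    n = 1_000_000
    pf_mc = np.mean(strength.rvs(n, random_state=rng) < stress.rvs(n, random_state=rng))

    print(pf_int, pf_mc)  # the two estimates should agree closely
    ```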

  15. SILENE and TDT: A code for collision probability calculations in XY geometries

    International Nuclear Information System (INIS)

    Sanchez, R.; Stankovski, Z.

    1993-01-01

    Collision probability methods are routinely used for cell and assembly multigroup transport calculations in core design tasks. Collision probability methods use a specialized tracking routine to compute neutron trajectories within a given geometric object. These trajectories are then used to generate the appropriate collision matrices in as many groups as required. Traditional tracking routines are based on "global" geometric descriptions (such as regular meshes) and are not able to cope with the geometric detail required in actual core calculations. Therefore, users have to modify their geometry in order to match the geometric model accepted by the tracking routine, thus introducing a modeling error whose evaluation requires the use of a "reference" method. Recently, an effort has been made to develop more flexible tracking routines, either by directly adopting Monte Carlo tracking techniques or by coding of complicated geometries. Among these, the SILENE and TDT package is being developed at the Commissariat à l'Energie Atomique to provide routine as well as reference calculations in arbitrarily shaped XY geometries. This package combines a direct graphical acquisition system (SILENE) together with a node-based collision probability code for XY geometries (TDT)

  16. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    Science.gov (United States)

    Youn, Ahrim; Wang, Shuang

    2018-01-01

    Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.

  17. Foundations of probability

    International Nuclear Information System (INIS)

    Fraassen, B.C. van

    1979-01-01

    The interpretation of probabilities in physical theories, whether quantum or classical, is considered. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are, formally speaking, classical (Kolmogoroff) probabilities; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of an observable Q calculated with probability measures P(μ, Q) for states μ, hence of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)

  18. Toward a generalized probability theory: conditional probabilities

    International Nuclear Information System (INIS)

    Cassinelli, G.

    1979-01-01

    The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

  19. Calculation of the exit probability of a particle from a cylinder of matter

    International Nuclear Information System (INIS)

    Ertaud, A.; Mercier, C.

    1949-02-01

    In the elementary calculation of the ε coefficient and of the slowing down length inside a nuclear pile made of a network of cylindrical rods, it is necessary to know the exit probability of a neutron initially located inside a cylinder filled with a given substance. This probability is the ratio between the number of output neutrons and the number of neutrons produced inside the surface of the cylinder. This report presents the solution of this probabilistic problem (an integral calculation) for both the cylindrical and the spherical case. (J.S.)

  20. Probability distribution of magnetization in the one-dimensional Ising model: effects of boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Antal, T [Physics Department, Simon Fraser University, Burnaby, BC V5A 1S6 (Canada); Droz, M [Departement de Physique Theorique, Universite de Geneve, CH 1211 Geneva 4 (Switzerland); Racz, Z [Institute for Theoretical Physics, Eoetvoes University, 1117 Budapest, Pazmany setany 1/a (Hungary)

    2004-02-06

    Finite-size scaling functions are investigated both for the mean-square magnetization fluctuations and for the probability distribution of the magnetization in the one-dimensional Ising model. The scaling functions are evaluated in the limit of the temperature going to zero (T → 0) and the size of the system going to infinity (N → ∞), while N[1 - tanh(J/k_B T)] is kept finite (J being the nearest neighbour coupling). Exact calculations using various boundary conditions (periodic, antiperiodic, free, block) demonstrate explicitly how the scaling functions depend on the boundary conditions. We also show that the block (small part of a large system) magnetization distribution results are identical to those obtained for free boundary conditions.

  1. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  2. Neutron transport assembly calculation with non-zero net current boundary condition

    International Nuclear Information System (INIS)

    Jo, Chang Keun

    1993-02-01

    Fuel assembly calculation for the homogenized group constants is one of the most important parts in the reactor core analysis. The homogenized group constants of one or a quarter assembly are usually generated for the nodal calculation of the reactor core. In the current nodal calculation, one or a quarter of the fuel assembly corresponds to a unit node. The homogenized group constant calculation for a fuel assembly proceeds through cell spectrum calculations, group condensation and cell homogenization calculations, two dimensional fuel assembly calculation, and then depletion calculations of fuel rods. To obtain the assembly wise homogenized group constants, the two dimensional transport calculation is usually performed. Most codes for the assembly wise homogenized group constants employ a zero net current boundary condition; CASMO-3, a code in wide use, is one example. The zero net current boundary condition is plausible and valid in an infinite reactor composed of the same kind of assemblies. However, the reactor is finite and the core is constructed from different kinds of assemblies. Hence, the assumption of the zero net current boundary condition is not valid in the actual reactor. The objective of this study is to develop a homogenization methodology that can treat any actual boundary condition, i.e., a non-zero net current boundary condition. In order to treat the non-zero net current boundary condition, we modify CASMO-3. For the two-dimensional treatment in CASMO-3, a multigroup integral transport routine based on the method of transmission probability is used. The code performs assembly calculation with zero net current boundary condition. CASMO-3 is modified to consider the inhomogeneous source at the assembly boundary surface due to the non-zero net current. The modified version of CASMO-3 is called CASMO-3M. CASMO-3M is applied to several benchmark problems. In order to obtain the inhomogeneous source, the global calculation is performed. The local calculation

  3. Smoothing and projecting age-specific probabilities of death by TOPALS

    Directory of Open Access Journals (Sweden)

    Joop de Beer

    2012-10-01

    BACKGROUND TOPALS is a new relational model for smoothing and projecting age schedules. The model is operationally simple, flexible, and transparent. OBJECTIVE This article demonstrates how TOPALS can be used for both smoothing and projecting age-specific mortality for 26 European countries and compares the results of TOPALS with those of other smoothing and projection methods. METHODS TOPALS uses a linear spline to describe the ratios between the age-specific death probabilities of a given country and a standard age schedule. For smoothing purposes I use the average of death probabilities over 15 Western European countries as standard, whereas for projection purposes I use an age schedule of 'best practice' mortality. A partial adjustment model projects how quickly the death probabilities move in the direction of the best-practice level of mortality. RESULTS On average, TOPALS performs better than the Heligman-Pollard model and the Brass relational method in smoothing mortality age schedules. TOPALS can produce projections that are similar to those of the Lee-Carter method, but can easily be used to produce alternative scenarios as well. This article presents three projections of life expectancy at birth for the year 2060 for 26 European countries. The Baseline scenario assumes a continuation of the past trend in each country, the Convergence scenario assumes that there is a common trend across European countries, and the Acceleration scenario assumes that the future decline of death probabilities will exceed that in the past. The Baseline scenario projects that average European life expectancy at birth will increase to 80 years for men and 87 years for women in 2060, whereas the Acceleration scenario projects an increase to 90 and 93 years respectively. CONCLUSIONS TOPALS is a useful new tool for demographers for both smoothing age schedules and making scenarios.
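
    A rough sketch of the TOPALS idea under simplifying assumptions (single-year ages, ordinary least squares, hypothetical data): the ratio of observed to standard death probabilities is fitted with a linear spline and multiplied back onto the standard.

    ```python
    import numpy as np

    def topals_smooth(qx_obs, qx_std, knots):
        """Fit a linear spline to the ratio qx_obs/qx_std at the knot ages,
        then multiply the fitted ratio back onto the standard schedule."""
        ages = np.arange(len(qx_obs))
        # One "hat" basis function per knot gives a piecewise-linear fit.
        basis = np.column_stack([np.interp(ages, knots, np.eye(len(knots))[i])
                                 for i in range(len(knots))])
        coef, *_ = np.linalg.lstsq(basis, qx_obs / qx_std, rcond=None)
        return qx_std * (basis @ coef)

    ages = np.arange(90)
    qx_std = 0.0001 * np.exp(0.085 * ages)               # Gompertz-like standard
    rng = np.random.default_rng(1)
    qx_obs = qx_std * rng.lognormal(0.0, 0.15, size=90)  # noisy observed schedule
    smooth = topals_smooth(qx_obs, qx_std, knots=[0, 15, 30, 50, 70, 89])
    ```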

  4. Calculation of Fire Severity Factors and Fire Non-Suppression Probabilities For A DOE Facility Fire PRA

    International Nuclear Information System (INIS)

    Elicson, Tom; Harwood, Bentley; Lucek, Heather; Bouchard, Jim

    2011-01-01

    Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: Development of time-dependent fire heat release rate profiles (required as input to CFAST), Calculation of fire severity factors based on CFAST detailed fire modeling, and Calculation of fire non-suppression probabilities.
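
    The time-dependent non-suppression model of NUREG/CR-6850 is commonly written as P_ns = exp(-λ·(t_damage − t_detection)), with λ a manual-suppression rate estimated from fire-event data; a sketch with hypothetical inputs:

    ```python
    import numpy as np

    def p_non_suppression(t_damage, t_detect, suppression_rate):
        """Exponential manual-suppression model, clipped at zero available time."""
        available = np.maximum(t_damage - t_detect, 0.0)
        return np.exp(-suppression_rate * available)

    # Hypothetical component damage times from CFAST/LHS runs (minutes),
    # detection at 2 min, suppression rate per minute from plant-specific data.
    t_damage = np.array([8.0, 12.0, 15.0, 20.0, 30.0])
    print(p_non_suppression(t_damage, t_detect=2.0, suppression_rate=0.1))
    ```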

  5. Research advances in probability of causation calculation of radiogenic neoplasms

    International Nuclear Information System (INIS)

    Ning Jing; Yuan Yong; Xie Xiangdong; Yang Guoshan

    2009-01-01

    Probability of causation (PC) was used to facilitate the adjudication of compensation claims for cancers diagnosed following exposure to ionizing radiation. In this article, the excess cancer risk assessment models used for PC calculation are reviewed. Cancer risk transfer models between different populations, dependence of cancer risk on dose and dose rate, modification by epidemiological risk factors and application of PC are also discussed in brief. (authors)
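
    Such adjudication programs typically compute PC from the excess relative risk (ERR) as PC = ERR/(1 + ERR); a minimal sketch with a linear dose-response model and an illustrative risk coefficient:

    ```python
    def probability_of_causation(dose_gy, err_per_gy):
        """PC = ERR / (1 + ERR) with a linear excess-relative-risk model."""
        err = err_per_gy * dose_gy
        return err / (1.0 + err)

    # Illustrative only: ERR/Gy of 0.5 at a dose of 0.2 Gy gives PC of about 9%.
    print(round(probability_of_causation(0.2, 0.5), 3))  # 0.091
    ```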

  6. Dental age estimation: the role of probability estimates at the 10 year threshold.

    Science.gov (United States)

    Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham

    2014-08-01

    The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that a subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below the threshold on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
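
    A sketch of the NORMDIST-style calculation described here, with hypothetical stage means and standard deviations: the per-tooth probabilities of being above the threshold are averaged into a single subject-level probability.

    ```python
    from statistics import NormalDist

    def p_over_threshold(stage_means, stage_sds, threshold=10.0):
        """Average, over a subject's scored teeth, of P(age > threshold),
        given the mean and SD of age at each observed development stage."""
        probs = [1.0 - NormalDist(mu, sd).cdf(threshold)
                 for mu, sd in zip(stage_means, stage_sds)]
        return sum(probs) / len(probs)

    # Hypothetical stage statistics (years) for one child's three scored teeth.
    print(round(p_over_threshold([9.8, 10.4, 10.1], [0.8, 0.9, 0.7]), 3))
    ```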

  7. 'PRIZE': A program for calculating collision probabilities in R-Z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Pitcher, H.H.W. [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1964-10-15

    PRIZE is an IBM7090 program which computes collision probabilities for systems with axial symmetry and outputs them on cards in suitable format for the PIP1 program. Its method of working, data requirements, output, running time and accuracy are described. The program has been used to compute non-escape (self-collision) probabilities of finite circular cylinders, and a table is given by which non-escape probabilities of slabs, finite and infinite circular cylinders, infinite square cylinders, cubes, spheres and hemispheres may quickly be calculated to 1/2% or better. (author)

  8. Decomposition of conditional probability for high-order symbolic Markov chains

    Science.gov (United States)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
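
    For context, the quantity being decomposed can also be estimated naively by counting context/symbol frequencies; the sketch below is this empirical estimator, not the paper's memory-function decomposition.

    ```python
    from collections import Counter, defaultdict

    def conditional_probs(seq, order):
        """Empirical P(next symbol | previous `order` symbols)."""
        counts = defaultdict(Counter)
        for i in range(order, len(seq)):
            counts[tuple(seq[i - order:i])][seq[i]] += 1
        return {ctx: {s: n / sum(c.values()) for s, n in c.items()}
                for ctx, c in counts.items()}

    seq = "abaabbabaabbaabb" * 50     # toy two-letter-alphabet sequence
    probs = conditional_probs(seq, order=2)
    print(probs[("a", "b")])          # distribution of the symbol after 'ab'
    ```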

  9. Probability Issues in without Replacement Sampling

    Science.gov (United States)

    Joarder, A. H.; Al-Sabah, W. S.

    2007-01-01

    Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…

  10. Calculation of cranial nerve complication probability for acoustic neuroma radiosurgery

    International Nuclear Information System (INIS)

    Meeks, Sanford L.; Buatti, John M.; Foote, Kelly D.; Friedman, William A.; Bova, Francis J.

    2000-01-01

    Purpose: Estimations of complications from stereotactic radiosurgery usually rely simply on dose-volume or dose-diameter isoeffect curves. Due to the sparse clinical data available, these curves have typically not considered the target location in the brain, target histology, or treatment plan conformality as parameters in the calculation. In this study, a predictive model was generated to estimate the probability of cranial neuropathies as a result of acoustic schwannoma radiosurgery. Methods and Materials: The dose-volume histogram reduction scheme was used to calculate the normal tissue complication probability (NTCP) from brainstem dose-volume histograms. The model's fitting parameters were optimized to provide the best fit to the observed complication data for acoustic neuroma patients treated with stereotactic radiosurgery at the University of Florida. The calculation was then applied to the remainder of the patients in the database. Results: The best fit to our clinical data was obtained using n = 0.04, m = 0.15, and α/β = 2.1 Gy⁻¹. Although the fitting parameter m is relatively consistent with ranges found in the literature, both the volume parameter, n, and α/β are much smaller than the values quoted in the literature. The fit to our clinical data indicates that brainstem, or possibly a specific portion of the brainstem, is more radiosensitive than the parameters in the literature indicate, and that there is very little volume effect; in other words, irradiation of a small fraction of the brainstem yields NTCPs that are nearly as high as those calculated for entire volume irradiation. These new fitting parameters are specific to acoustic neuroma radiosurgery, and the small volume effect that we observe may be an artifact of the fixed relationship of acoustic tumors to specific regions of the brainstem. Applying the model to our patient database, we calculate an average NTCP of 7.2% for patients who had no
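
    The calculation referred to is the standard Lyman model with the Kutcher dose-volume histogram reduction; a sketch using the fitted n and m from the abstract, with a placeholder TD50 and a hypothetical brainstem DVH.

    ```python
    import numpy as np
    from math import erf, sqrt

    def lkb_ntcp(doses, volumes, n, m, td50):
        """Lyman-Kutcher-Burman NTCP from a differential DVH:
        gEUD = (sum v_i * d_i**(1/n))**n, NTCP = Phi((gEUD - TD50)/(m*TD50))."""
        v = np.asarray(volumes) / np.sum(volumes)
        geud = np.sum(v * np.asarray(doses) ** (1.0 / n)) ** n
        t = (geud - td50) / (m * td50)
        return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF

    # n and m from the abstract; TD50 and the DVH (Gy, fractional volume)
    # are placeholders for illustration only.
    print(lkb_ntcp([12.0, 8.0, 4.0], [0.1, 0.3, 0.6], n=0.04, m=0.15, td50=15.0))
    ```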

  11. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  12. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  13. [Biometric bases: basic concepts of probability calculation].

    Science.gov (United States)

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model and the random variable are presented.

  14. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning, they can be used in statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction size of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and α/β value has also been simulated: considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.)
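
    The normalized total dose used in place of the total dose follows the linear-quadratic relation NTD = D·(α/β + d)/(α/β + 2), converting a schedule with fraction size d into its 2-Gy-fraction equivalent:

    ```python
    def normalized_total_dose(total_dose, dose_per_fraction, alpha_beta):
        """NTD: total dose re-expressed in 2-Gy-fraction equivalents
        under the linear-quadratic model."""
        return total_dose * (alpha_beta + dose_per_fraction) / (alpha_beta + 2.0)

    # 60 Gy in 3-Gy fractions with alpha/beta = 1.6 Gy (kidney, from the abstract):
    print(normalized_total_dose(60.0, 3.0, 1.6))  # ~76.7 Gy in 2-Gy equivalents
    ```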

  15. Calculation of probabilities of rotational transitions of two-atom molecules in the collision with heavy particles

    International Nuclear Information System (INIS)

    Vargin, A.N.; Ganina, N.A.; Konyukhov, V.K.; Selyakov, V.I.

    1975-01-01

    The problem of calculating the collisional probabilities of rotational transitions (CPRT) in molecule-molecule and molecule-atom interactions in three-dimensional space is solved in this paper using a quasiclassical approach. The calculation proceeds as follows: the particle motion trajectory is calculated by a classical method, giving the time dependence of the perturbation operator; averaging this operator over the wave functions of the initial and final states produces the CPRT. The classical calculation of the molecular trajectory is justified by the smallness of the de Broglie wavelength compared with characteristic atomic distances, and by the smallness of the transferred rotational quantum compared with the energy of translational motion of the particles. The results of the calculation depend on the chosen interaction potential of the colliding particles. It follows from the Massey criterion that the region of nonadiabaticity of the interaction may be comparable with the internuclear distances of a molecule; therefore, a short-range potential is required to describe the interaction. Analytical expressions suitable for practical calculations were obtained for one- and two-quantum rotational transitions of diatomic molecules. The CPRT was averaged over the Maxwell velocity distribution and analytical dependences on gas temperature were obtained. The results of the numerical calculation of probabilities for the HCl-HCl, HCl-He, CO-CO interactions are presented to illustrate the method

  16. Linear positivity and virtual probability

    International Nuclear Information System (INIS)

    Hartle, James B.

    2004-01-01

    We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics

  17. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)

  18. Experiencing El Niño conditions during early life reduces recruiting probabilities but not adult survival

    Science.gov (United States)

    Rodríguez, Cristina; Drummond, Hugh

    2018-01-01

    In wild long-lived animals, analysis of impacts of stressful natal conditions on adult performance has rarely embraced the entire age span, and the possibility that costs are expressed late in life has seldom been examined. Using 26 years of data from 8541 fledglings and 1310 adults of the blue-footed booby (Sula nebouxii), a marine bird that can live up to 23 years, we tested whether experiencing the warm waters and food scarcity associated with El Niño in the natal year reduces recruitment or survival over the adult lifetime. Warm water in the natal year reduced the probability of recruiting; each additional degree (°C) of water temperature meant a reduction of roughly 50% in fledglings' probability of returning to the natal colony as breeders. Warm water in the current year impacted adult survival, with greater effect at the oldest ages than during early adulthood. However, warm water in the natal year did not affect survival at any age over the adult lifespan. A previous study showed that early recruitment and widely spaced breeding allow boobies that experience warm waters in the natal year to achieve normal fledgling production over the first 10 years; our results now show that this reproductive effort incurs no survival penalty, not even late in life. This pattern is additional evidence of buffering against stressful natal conditions via life-history adjustments. PMID:29410788

  19. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  20. Development and Validation of a Calculator for Estimating the Probability of Urinary Tract Infection in Young Febrile Children.

    Science.gov (United States)

    Shaikh, Nader; Hoberman, Alejandro; Hum, Stephanie W; Alberty, Anastasia; Muniz, Gysella; Kurs-Lasky, Marcia; Landsittel, Douglas; Shope, Timothy

    2018-06-01

    Accurately estimating the probability of urinary tract infection (UTI) in febrile preverbal children is necessary to appropriately target testing and treatment. To develop and test a calculator (UTICalc) that can first estimate the probability of UTI based on clinical variables and then update that probability based on laboratory results. Review of electronic medical records of febrile children aged 2 to 23 months who were brought to the emergency department of Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania. An independent training database comprising 1686 patients brought to the emergency department between January 1, 2007, and April 30, 2013, and a validation database of 384 patients were created. Five multivariable logistic regression models for predicting risk of UTI were trained and tested. The clinical model included only clinical variables; the remaining models incorporated laboratory results. Data analysis was performed between June 18, 2013, and January 12, 2018. Documented temperature of 38°C or higher in children aged 2 months to less than 2 years. With the use of culture-confirmed UTI as the main outcome, cutoffs for high and low UTI risk were identified for each model. The resultant models were incorporated into a calculation tool, UTICalc, which was used to evaluate medical records. A total of 2070 children were included in the study. The training database comprised 1686 children, of whom 1216 (72.1%) were female and 1167 (69.2%) white. The validation database comprised 384 children, of whom 291 (75.8%) were female and 200 (52.1%) white. Compared with the American Academy of Pediatrics algorithm, the clinical model in UTICalc reduced testing by 8.1% (95% CI, 4.2%-12.0%) and decreased the number of UTIs that were missed from 3 cases to none. Compared with empirically treating all children with a leukocyte esterase test result of 1+ or higher, the dipstick model in UTICalc would have reduced the number of treatment delays by 10.6% (95% CI

  1. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SPI and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones being obtained with the IBM SPI. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors

  2. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model is presented for parameter failure of the physical process in the thermodynamic system, from which the combined response surface/importance sampling algorithm was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system are also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, achieving satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the pure response surface method. (authors)
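
    A generic sketch of the importance-sampling step (the response-surface stage is omitted and the limit state is a simple placeholder): samples are drawn from a proposal shifted toward the failure region and reweighted by the density ratio.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    g = lambda x: 4.0 - x                 # placeholder limit state: failure if x > 4
    nominal = stats.norm(0.0, 1.0)        # true input distribution
    proposal = stats.norm(4.0, 1.0)       # proposal centred near the failure region

    x = proposal.rvs(size=100_000, random_state=rng)
    w = nominal.pdf(x) / proposal.pdf(x)  # likelihood-ratio weights
    pf = np.mean((g(x) < 0) * w)
    print(pf, nominal.sf(4.0))            # both ~3.17e-5
    ```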

  3. The use of collision probabilities in calculations for light water reactors

    International Nuclear Information System (INIS)

    Janse van Rensburg, J.

    1984-01-01

    A procedure is developed to prepare representative two-group neutron data for fuel elements of pressurized water reactors. This procedure is based on the method of collision probabilities, and this theory is completely derived and implemented. The computer code, CLUPCO, which was developed for this purpose, is briefly discussed. The accuracy of the method is assessed by comparison with other established calculational methods and with experimental results. 30 figs., 29 tabs., 71 refs

  4. Conditional Probability Analysis: A Statistical Tool for Environmental Analysis.

    Science.gov (United States)

    The use and application of environmental conditional probability analysis (CPA) is relatively recent. The first presentation using CPA was made in 2002 at the New England Association of Environmental Biologists Annual Meeting in Newport, Rhode Island. CPA has been used since the...

  5. Comparison of clinical probability-adjusted D-dimer and age-adjusted D-dimer interpretation to exclude venous thromboembolism.

    Science.gov (United States)

    Takach Lapner, Sarah; Julian, Jim A; Linkins, Lori-Ann; Bates, Shannon; Kearon, Clive

    2017-10-05

    Two new strategies for interpreting D-dimer results have been proposed: i) using a progressively higher D-dimer threshold with increasing age (age-adjusted strategy) and ii) using a D-dimer threshold in patients with low clinical probability that is twice the threshold used in patients with moderate clinical probability (clinical probability-adjusted strategy). Our objective was to compare the diagnostic accuracy of age-adjusted and clinical probability-adjusted D-dimer interpretation in patients with a low or moderate clinical probability of venous thromboembolism (VTE). We performed a retrospective analysis of clinical data and blood samples from two prospective studies. We compared the negative predictive value (NPV) for VTE, and the proportion of patients with a negative D-dimer result, using two D-dimer interpretation strategies: the age-adjusted strategy, which uses a progressively higher D-dimer threshold with increasing age over 50 years (age in years × 10 µg/L FEU); and the clinical probability-adjusted strategy which uses a D-dimer threshold of 1000 µg/L FEU in patients with low clinical probability and 500 µg/L FEU in patients with moderate clinical probability. A total of 1649 outpatients with low or moderate clinical probability for a first suspected deep vein thrombosis or pulmonary embolism were included. The NPV of both the clinical probability-adjusted strategy (99.7 %) and the age-adjusted strategy (99.6 %) were similar. However, the proportion of patients with a negative result was greater with the clinical probability-adjusted strategy (56.1 % vs, 50.9 %; difference 5.2 %; 95 % CI 3.5 % to 6.8 %). These findings suggest that clinical probability-adjusted D-dimer interpretation is a better way of interpreting D-dimer results compared to age-adjusted interpretation.
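
    The two interpretation strategies reduce to simple threshold rules; a sketch transcribing the thresholds given in the abstract (units: µg/L FEU):

    ```python
    def ddimer_negative(ddimer, age, clinical_prob, strategy):
        """True if the D-dimer result is negative (VTE excluded) under the
        age-adjusted or clinical-probability-adjusted strategy."""
        if strategy == "age":
            threshold = age * 10.0 if age > 50 else 500.0
        elif strategy == "clinical":
            threshold = 1000.0 if clinical_prob == "low" else 500.0
        else:
            raise ValueError(strategy)
        return ddimer < threshold

    print(ddimer_negative(700.0, 76, "low", "age"))            # True: 700 < 760
    print(ddimer_negative(700.0, 76, "low", "clinical"))       # True: 700 < 1000
    print(ddimer_negative(700.0, 40, "moderate", "clinical"))  # False: 700 >= 500
    ```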

  6. Improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1986-01-01

    An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. Expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for integration of the one-energy group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use an effective Gaussian integration over the other. Choosing a convenient distribution of space points in the fuel and moderator condenses the transport matrix calculation and the cell reaction rate integration. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for the calculation of the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with the other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)

  7. Jet identification based on probability calculations using Bayes' theorem

    International Nuclear Information System (INIS)

    Jacobsson, C.; Joensson, L.; Lindgren, G.; Nyberg-Werther, M.

    1994-11-01

    The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. In the case of the fragmentation-based identification, a neural network was used, and a test of the dependence on the jet production process and the fragmentation model was done. Instead of working with the separation variables directly, these have been used to calculate probabilities of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)

  8. Integral transport multiregion geometrical shadowing factor for the approximate collision probability matrix calculation of infinite closely packed lattices

    International Nuclear Information System (INIS)

    Jowzani-Moghaddam, A.

    1981-01-01

    An integral transport method of calculating the geometrical shadowing factor in multiregion annular cells for infinite closely packed lattices in cylindrical geometry is developed. This analytical method has been programmed in the TPGS code. The method is based upon a consideration of the properties of the integral transport method for a nonuniform body, which together with Bonalumi's approximations allows the determination of the approximate multiregion collision probability matrix for infinite closely packed lattices with sufficient accuracy. The multiregion geometrical shadowing factors have been calculated for variations in fuel pin annular segment rings in a geometry of annular cells. These shadowing factors can then be used in the calculation of neutron transport from one annulus to another in an infinite lattice. The results of this new geometrical shadowing and collision probability matrix are compared with the Dancoff-Ginsburg correction and with the probability matrix using constant shadowing on Yankee fuel elements in an infinite lattice. In these cases the Dancoff-Ginsburg correction factor and the collision probability matrix using constant shadowing differ by at most 6.2% and 6%, respectively

  9. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    Hebert, Alain; Coste, Mireille

    2002-01-01

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented where the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to the Rowland's benchmark and to three assembly production cases, are also presented

  10. Calculating Absolute Transition Probabilities for Deformed Nuclei in the Rare-Earth Region

    Science.gov (United States)

    Stratman, Anne; Casarella, Clark; Aprahamian, Ani

    2017-09-01

    Absolute transition probabilities are the cornerstone of understanding nuclear structure physics in comparison to nuclear models. We have developed a code to calculate absolute transition probabilities from measured lifetimes, using a Python script and a Mathematica notebook. Both of these methods take pertinent quantities such as the lifetime of a given state, the energy and intensity of the emitted gamma ray, and the multipolarities of the transitions to calculate the appropriate B(E1), B(E2), B(M1) or in general, any B(σλ) values. The program allows for the inclusion of mixing ratios of different multipolarities and the electron conversion of gamma-rays to correct for their intensities, and yields results in absolute units or results normalized to Weisskopf units. The code has been tested against available data in a wide range of nuclei from the rare earth region (28 in total), including 146-154Sm, 154-160Gd, 158-164Dy, 162-170Er, 168-176Yb, and 174-182Hf. It will be available from the Notre Dame Nuclear Science Laboratory webpage for use by the community. This work was supported by the University of Notre Dame College of Science, and by the National Science Foundation, under Contract PHY-1419765.

  11. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    Science.gov (United States)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

    Seismicity in southern Kanto was activated by the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probability calculations, often used for aftershock hazard assessment, which rest on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ˜ 1. There is also good agreement with the OU law with p ˜ 0.5, which indicates that the slow decay was notably significant. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
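
    The probability arithmetic behind such forecasts can be sketched with a generic Reasenberg-Jones-style calculation combining the two laws (the parameter values below are illustrative placeholders, not the fitted values of this study):

      import math

      def expected_count(K, p, c, b, m_ref, m_th, t1, t2):
          # Expected number of events with M >= m_th in the window [t1, t2] (days),
          # from the Omori-Utsu rate K/(t+c)^p scaled by the Gutenberg-Richter law.
          scale = K * 10.0 ** (-b * (m_th - m_ref))
          if abs(p - 1.0) < 1e-9:
              integral = math.log((t2 + c) / (t1 + c))
          else:
              integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
          return scale * integral

      # Slow decay (p ~ 0.5) and b ~ 1 as reported in the abstract; other values invented.
      n = expected_count(K=0.8, p=0.5, c=0.1, b=1.0, m_ref=4.0, m_th=6.0,
                         t1=445.0, t2=445.0 + 3 * 365.0)
      print("expected M>=6 events:", round(n, 3),
            "P(at least one):", round(1.0 - math.exp(-n), 3))   # Poisson assumption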

  12. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

    In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate any such distribution. We prove that formulas from renewal theory, with particular attention to ruin probabilities, which hold for common phase-type distributions, also hold true for the infinite-dimensional case. We provide algorithms for calculating functionals of interest such as the renewal density and the ruin probability. It might be of interest to approximate a given heavy tailed distribution of some other type by a distribution from the class of infinite-dimensional phase-type distributions, and to this end we provide a calibration procedure which works for the approximation...

  13. Calculation of the tunneling time using the extended probability of the quantum histories approach

    International Nuclear Information System (INIS)

    Rewrujirek, Jiravatt; Hutem, Artit; Boonchui, Sutee

    2014-01-01

    The dwell time of quantum tunneling was derived by Steinberg (1995) [7] as a function of the relation between the transmission and reflection times τ_t and τ_r, weighted by the transmissivity and the reflectivity. In this paper, we reexamine the dwell time using the extended probability approach. The dwell time is calculated as the weighted average of three mutually exclusive events. We also consider the scattering process due to a resonance potential in the long-time limit. The results show that the dwell time can be expressed as the weighted sum of the transmission, reflection and internal probabilities.
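
    In the two-channel case this weighted-average structure can be written schematically as follows (notation assumed here, with transmissivity T and reflectivity R, T + R = 1; the extended-probability treatment adds the third, internal channel):

      \tau_d = T\,\tau_t + R\,\tau_r
      \quad\longrightarrow\quad
      \tau_d = P_t\,\tau_t + P_r\,\tau_r + P_{\mathrm{int}}\,\tau_{\mathrm{int}},
      \qquad P_t + P_r + P_{\mathrm{int}} = 1 .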

  14. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

    Monte Carlo neutronics analysis requires the capability to estimate a tally distribution, such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as the estimation of a probability density function from an observation set. We apply the neural-network-based density estimation method to an observation and sampling weight set produced by Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and the gradient of the absorption rate in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)

  15. Calculation of accurate albedo boundary conditions for three-dimensional nodal diffusion codes by the method of characteristics

    International Nuclear Information System (INIS)

    Petkov, Petko T.

    2000-01-01

    Most of the few-group three-dimensional nodal diffusion codes used for neutronics calculations of WWER reactors use albedo-type boundary conditions on the core-reflector boundary. The conventional albedos are group-to-group reflection probabilities, defined on each outer node face. The method of characteristics is used to calculate accurate albedos by the following procedure. A many-group two-dimensional heterogeneous core-reflector problem, including a sufficient part of the core and a detailed description of the adjacent reflector, is solved first. From this solution the angular flux on the core-reflector boundary is calculated in all groups for all traced neutron directions. Accurate boundary conditions can be calculated for the radial, top and bottom reflectors as well as for the absorber part of the WWER-440 reactor control assemblies. The algorithm can also be used to estimate albedos coupling outer node faces on the radial reflector in the axial direction. Numerical results for the WWER-440 reactor are presented. (Authors)

  16. Calculation for laser-produced plasmas conditions of thin middle-Z targets: Pt.I

    International Nuclear Information System (INIS)

    Peng Huimin; Zhang Guoping; Sheng Jiatian; Shao Yunfeng; Zhang Yinchun

    1988-01-01

    A one-dimensional non-LTE laser-irradiation code was used to simulate the plasma conditions produced from thin middle-Z targets irradiated at high intensities (about 10^13 W/cm^2). The following physical processes are considered: bremsstrahlung, radiative ionization, collisional ionization by electrons and their inverse processes, and Compton scattering. The Fokker-Planck approximation is used for Compton scattering; thermal flux limits are applied for electrons and ions in the calculation, and the multigroup flux-limited diffusion approximation is used for the radiative transport equations. The average-atom model is used to calculate the population probabilities of atoms. Laser absorption via inverse bremsstrahlung is considered to be the most important process in the simulation. Using laser beams with intensities of 5 x 10^13 W/cm^2 and 1 x 10^14 W/cm^2, λ_L = 0.53 μm, τ = 450 ps to irradiate a thin Se target from a single side and from both sides separately, the computed laser-produced plasma conditions agree well with experimental results

  17. Utilization of transmission probabilities in the calculation of unit-cell by the interface-current method

    International Nuclear Information System (INIS)

    Queiroz Bogado Leite, S. de.

    1989-10-01

    A widely used but physically incorrect assumption in unit-cell calculations by the method of interface currents in cylindrical or spherical geometries is that of isotropic fluxes at the surfaces of the cell annular regions when computing transmission probabilities. In this work, new interface-current relations are developed without making use of this assumption, and the effects on calculated integral parameters are shown for an idealized unit-cell example. (author) [pt]

  18. Using the probability method for multigroup calculations of reactor cells in a thermal energy range

    International Nuclear Information System (INIS)

    Rubin, I.E.; Pustoshilova, V.S.

    1984-01-01

    The possibility of using the transmission probability method, with interpolation of the transmission probabilities, for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichment in the thermal energy range are given. High accuracy is obtained with low computer time consumption. The use of the transmission probability method is particularly advantageous in programs run on computers with a significant reserve of internal memory

  19. [CALCULATION OF THE PROBABILITY OF METALS INPUT INTO AN ORGANISM WITH DRINKING POTABLE WATERS].

    Science.gov (United States)

    Tunakova, Yu A; Fayzullin, R I; Valiev, V S

    2015-01-01

    The work was performed within the framework of the State program for improving the competitiveness of Kazan (Volga) Federal University among the world's leading research and education centers, and under subsidies allocated to Kazan Federal University for public tasks in the field of scientific research. The current methodological recommendations "Guide for assessing the risk to public health under the influence of chemicals that pollute the environment" (P 2.1.10.1920-04) regulate the determination of quantitative and/or qualitative characteristics of the harmful effects on human health from exposure to environmental factors. We propose to complement the methodological approaches presented in P 2.1.10.1920-04 with an estimate of the probability of pollutants entering the body with drinking water, which is greater the more the actual concentrations of the substances exceed background concentrations. The paper proposes a method for calculating the probability that the actual concentrations of metal cations exceed background concentrations in samples of drinking water consumed by the population; the samples were collected at the end points of consumption, in houses and apartments, to account for secondary pollution in water pipelines and distribution paths. The research was performed on the example of Kazan, divided into zones. The probabilities were calculated using Bayes' theorem.
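
    The Bayesian step can be sketched generically (a minimal illustration of Bayes' theorem for exceedance data; all numbers are assumed, and the study's actual zonal procedure is more elaborate):

      def bayes_posterior(prior, p_exceed_given_polluted, p_exceed_given_clean):
          # P(zone polluted | sample exceeds background) by Bayes' theorem.
          num = prior * p_exceed_given_polluted
          return num / (num + (1.0 - prior) * p_exceed_given_clean)

      # Assumed inputs: 20% of zones suffer secondary pipeline pollution; a metal
      # concentration above background is seen in 90% of samples from polluted
      # zones but in only 10% of samples from clean ones.
      print(bayes_posterior(0.20, 0.90, 0.10))   # ~0.69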

  20. Stress Calculation of a TRISO Coated Particle Fuel by Using a Poisson's Ratio in Creep Condition

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.; Kim, W. K.

    2007-01-01

    KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) project since 2004, has been developing a performance analysis code for the TRISO coated particle fuel named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of a coated particle fuel under normal operating conditions. KAERI, on the other hand, is developing an ABAQUS-based finite element (FE) model to cover the non-linear behaviors of a coated particle fuel, such as cracking or debonding of the TRISO coating layers. Using the ABAQUS-based FE model, verification calculations were carried out for the IAEA CRP-6 benchmark problems involving creep, swelling, and pressure. However, in that model the Poisson's ratio for the elastic solution was used in the creep strain calculation. In this study, the ABAQUS-based finite element model is improved by using the Poisson's ratio in the creep condition for the calculation of the creep strain rate. As a direct input of the coefficient in a creep condition is impossible, a user subroutine for the ABAQUS solution is prepared in FORTRAN for use in the calculations of the creep strain of the coating layers in the radial and hoop directions of the spherical fuel. This paper shows the calculation results for a TRISO coated particle fuel subject to an irradiation condition assumed as in Miller's publication, in comparison with the results obtained from the old FE model used in the CRP-6 benchmark calculations

  1. Fiber Bragg Gratings, IT Techniques and Strain Gauge Validation for Strain Calculation on Aged Metal Specimens

    Directory of Open Access Journals (Sweden)

    Ander Montero

    2011-01-01

    This paper studies the feasibility of calculating strains in aged F114 steel specimens with Fiber Bragg Grating (FBG) sensors and infrared thermography (IT) techniques. Two specimens were conditioned under extreme temperature and relative humidity conditions, and comparative stress tests were made before and after aging using different adhesives. Moreover, a comparison has been made between IT techniques and conventional methods for calculating stresses in F114 steel. Implementation of Structural Health Monitoring techniques on real aircraft during their life cycle requires a study of the behaviour of FBG sensors and their wiring under real conditions before using them for a long time. To simulate aging, specimens were stored in a climate chamber at 70 °C and 90% RH for 60 days. This study is framed within the Structural Health Monitoring (SHM) and Non Destructive Evaluation (NDE) research lines, integrated into the avionics area maintained by the Aeronautical Technologies Centre (CTA) and the University of the Basque Country (UPV/EHU).

  2. Calculation of the exit probability of a particle from a cylinder of matter; Calcul de la probabilite de sortie d'une particule d'un cylindre de matiere

    Energy Technology Data Exchange (ETDEWEB)

    Ertaud, A; Mercier, C

    1949-02-01

    In the elementary calculation of the ε coefficient and of the slowing-down length inside a nuclear pile made of a lattice of cylindrical rods, it is necessary to know the exit probability of a neutron initially located inside a cylinder filled with a given substance. This probability is the ratio between the number of neutrons leaving the cylinder and the number of neutrons produced inside its surface. This report solves this probability problem (an integral calculation) both for the cylindrical case and for the spherical case. (J.S.)
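
    The same quantity is easy to check today by Monte Carlo (a first-flight sketch for a homogeneous finite cylinder: uniform sources, isotropic emission, escape without a collision; this is a numerical stand-in, not the authors' analytic integral):

      import numpy as np

      rng = np.random.default_rng(42)

      def escape_probability(sigma_t, radius, height, n=200_000):
          # Sample uniform source points inside the cylinder.
          r = radius * np.sqrt(rng.random(n))
          phi = 2.0 * np.pi * rng.random(n)
          x, y = r * np.cos(phi), r * np.sin(phi)
          z = height * (rng.random(n) - 0.5)
          # Sample isotropic flight directions.
          mu = 2.0 * rng.random(n) - 1.0
          psi = 2.0 * np.pi * rng.random(n)
          s = np.sqrt(1.0 - mu**2)
          u, v, w = s * np.cos(psi), s * np.sin(psi), mu
          # Distance to the lateral surface (positive root of the quadratic).
          a = u**2 + v**2
          b = 2.0 * (x * u + y * v)
          c = x**2 + y**2 - radius**2          # <= 0 for interior points
          t_lat = np.where(a > 1e-12,
                           (-b + np.sqrt(b**2 - 4.0 * a * c)) /
                           (2.0 * np.maximum(a, 1e-12)), np.inf)
          # Distance to the end caps.
          w_safe = np.where(w == 0.0, 1.0, w)
          t_cap = np.where(w > 0.0, (height / 2.0 - z) / w_safe,
                           np.where(w < 0.0, (-height / 2.0 - z) / w_safe, np.inf))
          # Escape = no collision along the exit path.
          return float(np.mean(np.exp(-sigma_t * np.minimum(t_lat, t_cap))))

      print(escape_probability(sigma_t=0.5, radius=1.0, height=4.0))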

  3. Probability of an Abnormal Screening PSA Result Based on Age, Race, and PSA Threshold

    Science.gov (United States)

    Espaldon, Roxanne; Kirby, Katharine A.; Fung, Kathy Z.; Hoffman, Richard M.; Powell, Adam A.; Freedland, Stephen J.; Walter, Louise C.

    2014-01-01

    Objective: To determine the distribution of screening PSA values in older men and how different PSA thresholds affect the proportion of white, black, and Latino men who would have an abnormal screening result across advancing age groups. Methods: We used linked national VA and Medicare data to determine the value of the first screening PSA test (ng/mL) of 327,284 men age 65+ who underwent PSA screening in the VA healthcare system in 2003. We calculated the proportion of men with an abnormal PSA result based on age, race, and common PSA thresholds. Results: Among men age 65+, 8.4% had a PSA >4.0 ng/mL. The percentage of men with a PSA >4.0 ng/mL increased with age and was highest in black men (13.8%) versus white (8.0%) or Latino men (10.0%). The proportion with PSA >4.0 ng/mL ranged from 5.1% of Latino men age 65-69 to 27.4% of black men age 85+. Raising the PSA threshold from >4.0 ng/mL to >10.0 ng/mL reclassified the greatest percentage of black men age 85+ (18.3% absolute change) and the lowest percentage of Latino men age 65-69 (4.8% absolute change) as being under the biopsy threshold. Conclusions: Age, race, and PSA threshold together affect the pre-test probability of an abnormal screening PSA result. Based on screening PSA distributions, raising the biopsy threshold to >10.0 ng/mL has the greatest effect on reducing the number of older black men who will face biopsy decisions after screening. PMID:24439009

  4. Impact probabilities of meteoroid streams with artificial satellites: An assessment

    International Nuclear Information System (INIS)

    Foschini, L.; Cevolani, G.

    1997-01-01

    Impact probabilities of artificial satellites with meteoroid streams were calculated using data collected with the CNR forward scatter (FS) bistatic radar over the Bologna-Lecce baseline (about 700 km). Results show that the impact probabilities are 2 times higher than previously calculated values. Nevertheless, although catastrophic impacts are still rare even under meteor storm conditions, it is expected that high meteoroid fluxes can erode satellite surfaces and weaken their external structures

  5. The limiting conditional probability distribution in a stochastic model of T cell repertoire maintenance.

    Science.gov (United States)

    Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen

    2010-04-01

    The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed, but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed.
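
    Numerically, the LCD (quasi-stationary distribution) of such a chain can be approximated by truncating the state space and taking the normalized left eigenvector, for the eigenvalue of largest real part, of the generator restricted to the transient states. A sketch with invented logistic-type rates (not the paper's model):

      import numpy as np

      def lcd_birth_death(birth, death, n_max):
          # Generator restricted to the transient states 1..n_max (0 is absorbing);
          # the loss from state 1 to state 0 appears only in the diagonal outflow.
          Q = np.zeros((n_max, n_max))
          for i in range(n_max):
              lam, mu = birth(i + 1), death(i + 1)
              if i + 1 < n_max:
                  Q[i, i + 1] = lam
              if i > 0:
                  Q[i, i - 1] = mu
              Q[i, i] = -(lam + mu)
          vals, vecs = np.linalg.eig(Q.T)
          k = np.argmax(vals.real)              # slowest-decaying mode
          q = np.abs(vecs[:, k].real)
          return q / q.sum()                    # LCD over states 1..n_max

      # Toy clone dynamics: birth slows with crowding, death is linear.
      q = lcd_birth_death(lambda n: 2.0 * n * max(0.0, 1.0 - n / 80.0),
                          lambda n: 1.0 * n, n_max=120)
      print("mode of the LCD at population size", 1 + int(np.argmax(q)))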

  6. THE CALCULATION OF FAST-NEUTRON ATTENUATION PROBABILITIES THROUGH A NINE- INCH POLYETHYLENE SLAB AND COMPARISON WITH EXPERIMENTAL DATA

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, L. G.

    1963-06-15

    Calculations of neutron penetration probabilities were performed to evaluate the Monte Carlo Multilayer Slab Penetration Procedure. A 9-in. polyethylene slab was chosen for the calculations and the results were compared with experimental data. The calculated and measured dose rates agree within 20% for all exit polar angles. The calculations indicate that incident neutrons with energies less than 2.5 MeV do not contribute significantly to the transmitted dose rate. (auth)

  7. Using Dynamic Geometry Software for Teaching Conditional Probability with Area-Proportional Venn Diagrams

    Science.gov (United States)

    Radakovic, Nenad; McDougall, Douglas

    2012-01-01

    This classroom note illustrates how dynamic visualization can be used to teach conditional probability and Bayes' theorem. There are two features of the visualization that make it an ideal pedagogical tool in probability instruction. The first feature is the use of area-proportional Venn diagrams that, along with showing qualitative relationships,…

  8. NOx emission calculations for bulk carriers by using engine power probabilities as weighting factors.

    Science.gov (United States)

    Cheng, Chih-Wen; Hua, Jian; Hwang, Daw-Shang

    2017-10-01

    An important marine pollution issue identified by the International Maritime Organization (IMO) is NOx emissions; however, the stipulated method for determining the NOx certification value does not reflect the actual high emission factors of slow-speed two-stroke diesel engines during long-term slow steaming. In this study, an accurate method is presented for calculating the NOx emission factors and the total amount of NOx emissions by using the actual power probabilities of the diesel engines in four types of bulk carriers. The proposed method is suitable for all types and purposes of diesel engines, is not restricted to any operating mode, and is highly accurate. Moreover, it is recommended that the IMO-stipulated certification value calculation method be modified accordingly to genuinely reduce the amount of NOx emissions. Achieving this level of reduction will help improve air quality, especially in coastal and port areas, and the health of local residents. As per the IMO, the NOx emission certification value of marine diesel engines having a rated power over 130 kW must be obtained using a specified weighting factor (WF)-based calculation. However, this calculation fails to represent the current actual situation. Effective emission reductions of 6.91% (at sea) and 31.9% (in ports) were achieved using a mathematical model of power probability functions. Thus, we strongly recommend amending the certification value of the NOx Technical Code 2008 (NTC 2008) by removing the WF constraints, such that the NOx emissions of diesel engines are lower than the Tier limits at any load level, to obtain genuine NOx emission reductions.
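
    The contrast between the two weightings can be reproduced with the NTC-style formula EF_w = sum(EF_i * P_i * w_i) / sum(P_i * w_i), replacing the fixed test-cycle weighting factors with observed power probabilities. All emission factors and the slow-steaming profile below are invented for illustration; the E3 factors shown are the commonly quoted ISO 8178 values:

      import numpy as np

      def weighted_ef(ef, loads, weights):
          # NTC 2008-style weighted emission factor (g/kWh).
          return np.sum(ef * loads * weights) / np.sum(loads * weights)

      loads = np.array([0.10, 0.25, 0.50, 0.75, 1.00])       # fraction of MCR
      ef    = np.array([22.0, 19.0, 16.0, 14.5, 14.0])       # g/kWh, higher at low load
      p_obs = np.array([0.30, 0.40, 0.20, 0.08, 0.02])       # long-term slow steaming
      wf_e3 = np.array([0.00, 0.15, 0.15, 0.50, 0.20])       # E3 test cycle

      print("power-probability EF:", round(weighted_ef(ef, loads, p_obs), 1), "g/kWh")
      print("E3 certification EF :", round(weighted_ef(ef, loads, wf_e3), 1), "g/kWh")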

  9. Class dependency of fuzzy relational database using relational calculus and conditional probability

    Science.gov (United States)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database to deal with a conditional probability relation using fuzzy relational calculus. Previously, there have been several studies of equivalence classes in fuzzy databases using similarity or approximate relations. It is an interesting topic to investigate fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as ’projection’, ’selection’, ’injection’ and ’natural join’. Using the fuzzy relational calculus and conditional probabilities, we introduce the notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.

  10. Calculation of the probability of overlapping one family of nuclear levels with resonances of an independent family

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1982-01-01

    Calculations of the resonance integrals of particular isotopes in a mixture of isotopes show that the overlapping of the resonances of one isotope by resonances of other isotopes affects the final values of effective cross sections. The same effect might adversely influence those nondestructive techniques which assay fissile materials on the basis of resonance effects. Of relevance for these applications is the knowledge of the probability of overlapping resonances of a family of nuclear levels (class 1) with resonances of an independent family (class 2). For the sequence of class 1 resonances we calculate the probability distribution, p(delta), to find a class 2, first-neighbor resonance at distance (in energy) delta from a class 1 resonance; integration of p(delta) over the average finite width of the resonances would give the aforementioned probability of overlapping. Because a class 1 resonance can have a class 1 or a class 2 resonance as a first neighbor, the resultant p(delta) is not given by the distribution of spacings of the composite family
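
    p(delta) is easy to estimate by simulation: generate two independent level sequences, record the distance from each class 1 level to its nearest class 2 neighbor, and integrate the histogram over the average resonance width. A sketch with Wigner-distributed spacings and invented mean spacings and width:

      import numpy as np

      rng = np.random.default_rng(0)

      def level_sequence(n, mean_d):
          # Wigner-distributed spacings with mean mean_d, accumulated into levels.
          s = mean_d * (2.0 / np.sqrt(np.pi)) * np.sqrt(-np.log(1.0 - rng.random(n)))
          return np.cumsum(s)

      def nearest_distance(e1, e2):
          # Distance from each class 1 level to the nearest class 2 level.
          idx = np.clip(np.searchsorted(e2, e1), 1, len(e2) - 1)
          return np.minimum(np.abs(e1 - e2[idx - 1]), np.abs(e1 - e2[idx]))

      d1, d2, gamma = 20.0, 8.0, 0.5       # mean spacings and width in eV (invented)
      e1 = level_sequence(100_000, d1)
      e2 = level_sequence(int(100_000 * d1 / d2) + 10, d2)
      e1 = e1[e1 < e2[-1]]                 # keep class 1 levels covered by class 2
      delta = nearest_distance(e1, e2)
      print("overlap probability ~", np.mean(delta < gamma))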

  11. User’s guide for MapMark4—An R package for the probability calculations in three-part mineral resource assessments

    Science.gov (United States)

    Ellefsen, Karl J.

    2017-06-27

    MapMark4 is a software package that implements the probability calculations in three-part mineral resource assessments. Functions within the software package are written in the R statistical programming language. These functions, their documentation, and a copy of this user’s guide are bundled together in R’s unit of shareable code, which is called a “package.” This user’s guide includes step-by-step instructions showing how the functions are used to carry out the probability calculations. The calculations are demonstrated using test data, which are included in the package.

  12. Problems involved in calculating the probability of rare occurrences

    International Nuclear Information System (INIS)

    Tittes, E.

    1986-01-01

    With regard to characteristics such as occurrence probability or occurrence rate, there are also limits which have to be observed, or else probability data, and thus the concept of determinable risk itself, will lose their practical value. The mathematical models applied for probability assessment are based on data supplied by the insurance companies, by reliability experts in the automobile industry, or by planning experts in the field of traffic or information supply. (DG) [de]

  13. Conditional probability of intense rainfall producing high ground concentrations from radioactive plumes

    International Nuclear Information System (INIS)

    Wayland, J.R.

    1977-03-01

    The overlap of the expanding plume of radioactive material from a hypothetical nuclear accident with rainstorms over dense population areas is considered. The conditional probability of the occurrence of hot spots from intense cellular rainfall is presented

  14. The effect of conditional probability of chord progression on brain response: an MEG study.

    Directory of Open Access Journals (Sweden)

    Seung-Goo Kim

    BACKGROUND: Recent electrophysiological and neuroimaging studies have explored how and where musical syntax in Western music is processed in the human brain. An inappropriate chord progression elicits an event-related potential (ERP) component called an early right anterior negativity (ERAN) or simply an early anterior negativity (EAN) in an early stage of processing the musical syntax. Though the possible underlying mechanism of the EAN is assumed to be probabilistic learning, the effect of the probability of chord progressions on the EAN response has not been previously explored explicitly. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, the empirical conditional probabilities in a Western music corpus were employed as an approximation of the frequencies in previous exposure of participants. Three types of chord progression were presented to musicians and non-musicians in order to examine the correlation between the probability of chord progression and the neuromagnetic response using magnetoencephalography (MEG). Chord progressions were found to elicit early responses that correlated negatively with the conditional probability. The observed EANm responses (the magnetic counterpart of the EAN component) were consistent with previously reported EAN responses in terms of latency and location. The effect of conditional probability interacted with the effect of musical training. In addition, the neural response also correlated with the behavioral measures in the non-musicians. CONCLUSIONS/SIGNIFICANCE: Our study is the first to reveal the correlation between the probability of chord progression and the corresponding neuromagnetic response. The current results suggest that the physiological response is a reflection of the probabilistic representations of the musical syntax. Moreover, the results indicate that the probabilistic representation is related to musical training as well as to the sensitivity of an individual.

  15. Individual quality and age but not environmental or social conditions modulate costs of reproduction in a capital breeder.

    Science.gov (United States)

    Debeffe, Lucie; Poissant, Jocelyn; McLoughlin, Philip D

    2017-08-01

    Costs associated with reproduction are widely known to play a role in the evolution of reproductive tactics, with consequences for population and eco-evolutionary dynamics. Evaluating these costs as they pertain to species in the wild remains an important goal of evolutionary ecology. Individual heterogeneity, including differences in individual quality (i.e., among-individual differences in traits associated with survival and reproduction) or state, and variation in environmental and social conditions can modulate the costs of reproduction; however, few studies have considered effects of these factors simultaneously. Taking advantage of a detailed, long-term dataset for a population of feral horses (Sable Island, Nova Scotia, Canada), we address the question of how intrinsic (quality, age), environmental (winter severity, location), and social conditions (group size, composition, sex ratio, density) influence the costs of reproduction for subsequent reproduction. Individual quality was measured using a multivariate analysis of a combination of four static and dynamic traits expected to depict heterogeneity in individual performance. Female quality and age interacted with reproductive status of the previous year to determine current reproductive effort, while no effect of social or environmental covariates was found. High-quality females showed higher probabilities of giving birth and weaning their foal regardless of their reproductive status the previous year, while those of lower quality showed lower probabilities of producing foals in successive years. Middle-aged (prime) females had the highest probability of giving birth when they had not reproduced the year before, but no such relationship with age was found among females that had reproduced the previous year, indicating that prime-aged females bear higher costs of reproduction. We show that individual quality and age were key factors modulating the costs of reproduction in a capital breeder but that

  16. Classic conditioning in aged rabbits: delay, trace, and long-delay conditioning.

    Science.gov (United States)

    Solomon, P R; Groccia-Ellison, M E

    1996-06-01

    Young (0.5 years) and aged (2+, 3+, and 4+ years) rabbits underwent acquisition of the classically conditioned nictitating membrane response in a delay (500-ms conditioned stimulus [CS], 400-ms interstimulus interval [ISI]), long-delay (1,000-ms CS, 900-ms ISI), or trace (500-ms CS, 400-ms stimulus-free period) paradigm. Collapsing across age groups, there is a general tendency for animals to acquire trace conditioning more slowly than delay conditioning. Collapsing across conditioning paradigms, there is a general tendency for aged animals to acquire more slowly than younger animals. Of greater significance, however, are the age differences in the different conditioning paradigms. In the delay and long-delay paradigms, significant conditioning deficits first appeared in the 4+-year-old group. In the trace conditioning paradigm, significant conditioning deficits became apparent in the 2+-year-old animals.

  17. Quantum-correlation breaking channels, quantum conditional probability and Perron-Frobenius theory

    Science.gov (United States)

    Chruściński, Dariusz

    2013-03-01

    Using the quantum analog of conditional probability and classical Bayes theorem we discuss some aspects of particular entanglement breaking channels: quantum-classical and classical-classical channels. Applying the quantum analog of Perron-Frobenius theorem we generalize the recent result of Korbicz et al. (2012) [8] on full and spectrum broadcasting from quantum-classical channels to arbitrary quantum channels.

  19. Probability of developing severe sepsis in patients of elderly and senile age with necrotic erysipelas

    Directory of Open Access Journals (Sweden)

    Shapkin Yu.G.

    2015-06-01

    Objective: to determine the probability of developing severe sepsis in patients of elderly and senile age with necrotic erysipelas, based on a comprehensive assessment (clinical examination using scoring systems and determination of the levels of SIRS markers). Material and methods: The peculiarities of the clinical course of necrotic erysipelas were analysed in 59 patients. The first group consisted of 17 patients with severe sepsis, the second of 18 patients with sepsis without multiple organ failure; the comparison group comprised 22 patients with local infection. We determined plasma albumin, urea, creatinine and procalcitonin. The SAPS III scale was used to quantify SIRS, and the SOFA scale to determine the extent of damage to organs and systems. Results: The most sensitive marker of developing sepsis in patients with necrotic erysipelas was procalcitonin. The second important indicator of SIRS severity in patients with necrotic erysipelas was blood albumin. The SAPS III scale also allows a group of patients with a high risk of developing severe sepsis to be selected. The SOFA scale was found to be less useful for prediction. Conclusion: A comprehensive assessment of the severity of the condition by the SAPS III scale, in combination with determination of procalcitonin and plasma albumin levels, is advisable for predicting the probability of developing severe sepsis in patients of elderly and senile age with necrotic erysipelas. For the latter indicator, both the absolute values and the decrease in concentration should be assessed.

  20. Improved techniques for outgoing wave variational principle calculations of converged state-to-state transition probabilities for chemical reactions

    Science.gov (United States)

    Mielke, Steven L.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    Improved techniques and well-optimized basis sets are presented for application of the outgoing wave variational principle to calculate converged quantum mechanical reaction probabilities. They are illustrated with calculations for the reactions D + H2 → HD + H with total angular momentum J = 3 and F + H2 → HF + H with J = 0 and 3. The optimization involves the choice of distortion potential, the grid for calculating half-integrated Green's functions, the placement, width, and number of primitive distributed Gaussians, and the computationally most efficient partition between dynamically adapted and primitive basis functions. Benchmark calculations with 224-1064 channels are presented.

  1. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  2. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time for typical 2D assembly calculations; consequently, RZ or 3D calculations become prohibitive. In this paper we present a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy-group treatment. This approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel portability is preserved, which is a necessary condition for an industrial code, and sequential performance is also preserved. The algorithm was implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best being obtained with the IBM SP1. Because of the heterogeneity of the workstation network, we did not expect high performance from that architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) on a massively parallel computer using several hundred processors. (author). 5 refs., 6 figs., 2 tabs

  3. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' <= t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  4. Eliciting conditional and unconditional rank correlations from conditional probabilities

    International Nuclear Information System (INIS)

    Morales, O.; Kurowicka, D.; Roelen, A.

    2008-01-01

    Causes of uncertainties may be interrelated and may introduce dependencies. Ignoring these dependencies may lead to large errors. A number of graphical models in probability theory such as dependence trees, vines and (continuous) Bayesian belief nets [Cooke RM. Markov and entropy properties of tree and vine-dependent variables. In: Proceedings of the ASA section on Bayesian statistical science, 1997; Kurowicka D, Cooke RM. Distribution-free continuous Bayesian belief nets. In: Proceedings of mathematical methods in reliability conference, 2004; Bedford TJ, Cooke RM. Vines - a new graphical model for dependent random variables. Ann Stat 2002; 30(4):1031-68; Kurowicka D, Cooke RM. Uncertainty analysis with high dimensional dependence modelling. New York: Wiley; 2006; Hanea AM, et al. Hybrid methods for quantifying and analyzing Bayesian belief nets. In: Proceedings of the 2005 ENBIS5 conference, 2005; Shachter RD, Kenley CR. Gaussian influence diagrams. Manage Sci 1998; 35(5).] have been developed to capture dependencies between random variables. The inputs for these models are various marginal distributions and dependence information, usually in the form of conditional rank correlations. Often expert elicitation is required. This paper focuses on dependence representation and dependence elicitation. The techniques presented are illustrated with an application from aviation safety

  5. Conditional Independence in Applied Probability.

    Science.gov (United States)

    Pfeiffer, Paul E.

    This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…

  6. The considering of the slowing down effect in the formalism of probability tables. Application to the effective cross section calculation

    International Nuclear Information System (INIS)

    Bouhelal, O.K.A.

    1990-01-01

    The exact determination of effective multigroup cross sections requires the numerical solution of the slowing-down equation on a very fine energy mesh. Given the complexity of these calculations, various approximation methods have been developed, but without a satisfactory treatment of the slowing-down effect. The usual methods are essentially based on interpolations using precalculated tables. Models that use probability tables allow the amount of data and the computational effort to be reduced. The methods proposed first by Soviet and then by American researchers, and finally the French method based on the 'moments of a probability distribution', are incontestably valid within the framework of the statistical hypothesis. This hypothesis stipulates that the collision densities do not depend on the cross section, so that there is no ambiguity in the effective cross section calculation. The objective of our work is to show that non-statistical phenomena, such as the slowing-down effect considered here, can also be described by probability tables able to represent the neutronic quantities and collision densities. The formalism used under the statistical hypothesis is based on Gauss quadrature of the cross-section moments. Under the non-statistical hypothesis we introduce crossed probability tables, using quadratures of double integrals of the cross-section moments. Moreover, a mathematical formalism was developed that establishes a relationship between the crossed probability tables and the collision densities. The method was applied to uranium-238 in the range of resolved resonances, where the slowing-down effect is significant. The validity of the method and the analysis of the results obtained are studied through a reference calculation based on the solution of a discretized slowing-down equation using a very fine mesh, in which each microgroup can be correctly defined via the statistical probability tables. 42 figs., 32 tabs., 49 refs. (author)

  7. Pade approximant calculations for neutron escape probability

    International Nuclear Information System (INIS)

    El Wakil, S.A.; Saad, E.A.; Hendi, A.A.

    1984-07-01

    The neutron escape probability from a non-multiplying slab containing an internal source is defined in terms of a functional relation for the scattering function of the diffuse reflection problem. The Pade approximant technique is used to obtain numerical results, which are compared with exact results. (author)

  8. Lung deposition of sub-micron aerosols calculated as a function of age and breathing rate

    International Nuclear Information System (INIS)

    James, A.C.

    1978-01-01

    Experimental measurements of lung deposition, and especially of regional deposition, of aerosols in the sub-micron size range have been so few that it is worthwhile establishing a method of calculation. A computer routine has therefore been developed to calculate aerosol deposition in successive bronchial and bronchiolar generations of the Weibel 'A' model of the human lung for the sub-micron size range, where deposition occurs solely by diffusion. This model can be scaled to represent lungs at various ages and vital capacities. Some calculated results are presented here and compared with measurements of lung deposition made under carefully controlled conditions in humans. (author)

  9. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    Science.gov (United States)

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4x10^-4 for the exponential distribution and 2.3x10^-4 for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5x10^-4, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>~5 km^3 in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >~5 km^3, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6x10^-4. For erupted volumes >=10 km^3, the rate of occurrence has been reasonably constant from 630 ka to the present, giving
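
    The arithmetic of these interval models is simple to reproduce. In the sketch below, the annual rates are the ones quoted in the abstract, but the mixed-exponential component weights and rates are invented, only to show why the conditional probability can drop long after the last event:

      import math

      def survival_mixed(t, w, lam1, lam2):
          # Survival function of a two-component mixed-exponential interval model.
          return w * math.exp(-lam1 * t) + (1.0 - w) * math.exp(-lam2 * t)

      def conditional_probability(window, elapsed, w, lam1, lam2):
          # P(next eruption within `window` years | `elapsed` years since the last).
          return 1.0 - (survival_mixed(elapsed + window, w, lam1, lam2) /
                        survival_mixed(elapsed, w, lam1, lam2))

      # Exponential case: the one-year probability is essentially the annual rate.
      print(1.0 - math.exp(-6.5e-4 * 1.0))                     # ~6.5e-4

      # Mixed case, 12,000 yr after the last regional mafic event (rates invented):
      print(conditional_probability(1.0, 12_000.0, w=0.7, lam1=1.0e-3, lam2=1.0e-4))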

  10. Risk, probability and uncertainty in the calculations of gas cooled reactor of PBMR type. Part 2

    International Nuclear Information System (INIS)

    Serbanescu, Dan

    2004-01-01

    The paper presents the main conclusions of insights into a gas-cooled reactor from the perspective of the following notions: probability, uncertainty, entropy and risk. Some results of the ongoing comparison between the insights obtained from three models and approaches are presented. The approaches consider the Pebble Bed Modular Reactor (PBMR) NPP as a thermodynamic installation and as a hierarchical system, with or without considering the information exchange between its various levels. The existing model was the basis for a PRA of the PBMR proceeding in phases. In the first part of this paper, results from phase II of this PRA were presented. Further activities are under way in preparation for the phase II PRA and for the development of a specific application of PRA during the design phases of the PBMR, with some preliminary results and conclusions. However, for the purposes of this paper and the comparative review of various models, part two presents the risk model (model B), based on the assumptions and ideas laid down as the basis for the future inter-comparison of this model with other plant models. The assumptions concern: the uncertainties in the quantification of frequencies; the list of initiating events; interfaces with the deterministic calculations; integrated evaluation of all the plant states; the risk of the release of radionuclides; the balance between the number and function of the active systems and the passive systems; system interdependencies in the PBMR PRA; and the use of PRA for the evaluation of the impact of various design changes on plant risk. Model B basically allows the level of risk of the plant to be evaluated by calculating it as a result of the acceptance challenge to the plant. Using this model, the departure from a reference state is given by the variation in the risk metrics adopted for the study. The paper also presents the synergetic model (model C), whose evaluation of risk also considers the information process.

  11. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods

    International Nuclear Information System (INIS)

    Ghassoun, Jillali; Jehoauni, Abdellatif

    2000-01-01

    In practice, the estimation of the flux obtained from the Fredholm integral equation requires a truncation of the Neumann series. The truncation order N must be large in order to get a good estimate, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without affecting the quality of the estimate. In previous works, only weakly diffusing media were considered in order to obtain rapid convergence, which permitted truncating the Neumann series after about 20 terms. However, in most practical shields, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a poor estimate of the flux; it therefore becomes necessary to use higher orders to obtain a good estimate. We suggest two simple techniques based on conditional Monte Carlo. We propose a simple density for sampling the steps of the random walk, as well as a modified stretching-factor density depending on a biasing parameter, which affects the sample vector by stretching or shrinking the original random walk in order to obtain a chain that ends at a given point of interest. We also obtain a simple empirical formula which gives the neutron flux for a medium characterized only by its scattering probability. The results are compared with the exact analytic solution; good agreement is obtained, together with a useful acceleration of the convergence of the calculations. (author)

  12. Evolvement simulation of the probability of neutron-initiating persistent fission chain

    International Nuclear Information System (INIS)

    Wang Zhe; Hong Zhenying

    2014-01-01

    Background: The probability of a neutron initiating a persistent fission chain, which has to be calculated in criticality safety analysis, reactor start-up, and burst waiting-time and bursting-time studies on pulse reactors, is an inherent parameter of a multiplying assembly. Purpose: We aim to derive a time-dependent integro-differential equation for this probability in relative velocity space according to probability conservation, and to develop the deterministic code Dynamic Segment Number Probability (DSNP) based on the multigroup S_N method. Methods: The reliable convergence of the dynamic calculation was analyzed, and numerical simulation of the evolution of the dynamic probability for varying concentration was performed under different initial conditions. Results: For Highly Enriched Uranium (HEU) bare spheres, when the time is long enough, the results of the dynamic calculation approach those of the static calculation; the maximum difference between the DSNP and Partisn results is less than 2%. For the Baker model, over the range of about 1 μs after the first criticality, the maximum difference between the dynamic and static calculations is about 300%. For a supercritical system, the finite fission chains decrease and the persistent fission chains increase as the reactivity grows; the dynamic evolution curve of the initiation probability is within 5% of the static curve when k_eff is more than 1.2. The cumulative probability curve also indicates that the difference in the integral results between the dynamic and the static calculation decreases from 35% to 5% as k_eff increases. This demonstrates that the ability to initiate a self-sustaining fission chain reaction approaches stabilization, while the former difference (35%) shows the important departure of the dynamic results from the static ones near the first criticality. The DSNP code agrees well with the Partisn code. Conclusions: There are large numbers of

  13. Failures probability calculation of the energy supply of the Angra-1 reactor rods assembly

    International Nuclear Information System (INIS)

    Borba, P.R.

    1978-01-01

    This work analyses the electric power system of the Angra I PWR plant. It is demonstrated that this system is closely coupled with the engineered safety features, which are the equipment provided to prevent, limit, or mitigate the release of radioactive material and to permit safe reactor shutdown. Event trees are used to analyse the operation of those systems which can lead to the release of radioactivity following a specified initiating event. The fault tree technique is used to calculate the failure probability of the on-site electric power system [pt]
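
    For illustration, a fault tree reduces to elementary probability algebra once the basic events are assumed independent. The gate logic and all numbers below are hypothetical, not the actual Angra I tree:

      from functools import reduce

      def p_and(*probs):
          # Probability that all independent inputs fail (AND gate).
          return reduce(lambda a, b: a * b, probs, 1.0)

      def p_or(*probs):
          # Probability that at least one independent input fails (OR gate).
          return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

      # Hypothetical top event: loss of on-site AC power =
      # (offsite grid fails) AND (both diesels fail OR switchgear fails).
      p_offsite, p_diesel, p_switch = 0.05, 0.03, 1.0e-3     # per demand, invented
      p_top = p_and(p_offsite, p_or(p_and(p_diesel, p_diesel), p_switch))
      print(f"top-event probability: {p_top:.2e}")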

  14. Combining scenarios in a calculation of the overall probability distribution of cumulative releases of radioactivity from the Waste Isolation Pilot Plant, southeastern New Mexico

    International Nuclear Information System (INIS)

    Tierney, M.S.

    1991-11-01

    The Waste Isolation Pilot Plant (WIPP), in southeastern New Mexico, is a research and development facility to demonstrate safe disposal of defense-generated transuranic waste. The US Department of Energy will designate WIPP as a disposal facility if it meets the US Environmental Protection Agency's standard for disposal of such waste; the standard includes a requirement that estimates of cumulative releases of radioactivity to the accessible environment be incorporated in an overall probability distribution. The WIPP Project has chosen an approach to the calculation of an overall probability distribution that employs the concept of scenarios for release and transport of radioactivity to the accessible environment. This report reviews the use of Monte Carlo methods in the calculation of an overall probability distribution and presents a logical and mathematical foundation for use of the scenario concept in such calculations. The report also draws preliminary conclusions regarding the shape of the probability distribution for the WIPP system; these preliminary conclusions are based on the possible occurrence of three events and the presence of one feature: namely, the events "attempted boreholes over rooms and drifts," "mining alters ground-water regime," and "water-withdrawal wells provide alternate pathways," and the feature "brine pocket below room or drift." Calculation of the WIPP system's overall probability distribution requires only five of the sixteen possible scenario classes that can be obtained by combining the four postulated events and features
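
    The scenario construction amounts to expressing the overall distribution as a probability-weighted mixture of per-scenario release distributions, which a Monte Carlo loop samples directly (the scenario names echo the report; all probabilities and lognormal release parameters are invented):

      import numpy as np

      rng = np.random.default_rng(7)

      # Scenario classes with occurrence probabilities and lognormal release models
      # (normalized release units; every number here is illustrative).
      names = ["undisturbed", "boreholes", "mining", "withdrawal wells"]
      p     = np.array([0.90, 0.06, 0.03, 0.01])
      mu    = np.array([-6.0, -2.0, -3.0, -1.5])      # log-mean release per scenario
      sigma = np.array([ 1.0,  1.5,  1.0,  1.2])

      n = 100_000
      k = rng.choice(len(names), size=n, p=p)          # sample a scenario class...
      release = np.exp(rng.normal(mu[k], sigma[k]))    # ...then a release from it

      # Overall CCDF ordinates: P(cumulative release > R).
      for r in (1e-3, 1e-2, 1e-1, 1.0):
          print(f"P(release > {r:g}) = {np.mean(release > r):.4f}")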

  15. On estimating the fracture probability of nuclear graphite components

    International Nuclear Information System (INIS)

    Srinivasan, Makuteswara

    2008-01-01

    The properties of nuclear-grade graphites exhibit anisotropy and can vary considerably within a manufactured block. Graphite strength is affected by the direction of alignment of the constituent coke particles, which is dictated by the forming method, the coke particle size, and the size, shape, and orientation distribution of pores in the structure. In this paper, a Weibull failure probability analysis for components is presented using the American Society for Testing and Materials strength specification for nuclear-grade graphites for core components in advanced high-temperature gas-cooled reactors. The risk of rupture (probability of fracture) and survival probability (reliability) of large graphite blocks are calculated for varying and discrete values of service tensile stresses. The limitations of these calculations are discussed in light of actual reactor environmental conditions that could potentially degrade the specification properties because of damage due to complex interactions between irradiation, temperature, stress, and variability in reactor operation.
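
    The underlying weakest-link arithmetic is compact. A sketch of the two-parameter, volume-scaled Weibull form for a uniformly stressed block, with graphite-like but invented parameters (not the ASTM specification values):

      import math

      def fracture_probability(stress_mpa, volume_mm3, sigma0_mpa, m, v0_mm3=1000.0):
          # Weakest-link risk of rupture: Pf = 1 - exp(-(V/V0) * (sigma/sigma0)^m).
          return 1.0 - math.exp(-(volume_mm3 / v0_mm3) *
                                (stress_mpa / sigma0_mpa) ** m)

      sigma0, m = 25.0, 10.0        # characteristic strength (MPa), Weibull modulus
      for s in (5.0, 10.0, 15.0, 20.0):
          pf = fracture_probability(s, volume_mm3=5.0e5, sigma0_mpa=sigma0, m=m)
          print(f"stress {s:4.1f} MPa -> risk of rupture {pf:.3e}")

    The steepness in m is why small uncertainties in the strength parameters dominate the calculated reliability.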

  16. Relativistic calculation of Kβ hypersatellite energies and transition probabilities for selected atoms with 13 ≤ Z ≤ 80

    International Nuclear Information System (INIS)

    Costa, A M; Martins, M C; Santos, J P; Indelicato, P; Parente, F

    2006-01-01

    Energies and transition probabilities of Kβ hypersatellite lines are computed using the Dirac-Fock model for several values of Z throughout the periodic table. The influence of the Breit interaction on the energy shifts from the corresponding diagram lines and on the Kβ₁ʰ/Kβ₃ʰ intensity ratio is evaluated. The widths of the double-K hole levels are calculated for Al and Sc. The results are compared to experiment and to other theoretical calculations.

  17. Calculation of the uncertainty in complication probability for various dose-response models, applied to the parotid gland

    International Nuclear Information System (INIS)

    Schilstra, C.; Meertens, H.

    2001-01-01

    Purpose: Usually, models that predict normal tissue complication probability (NTCP) are fitted to clinical data with the maximum likelihood (ML) method. This method inevitably causes a loss of information contained in the data. In this study, an alternative method is investigated that calculates the parameter probability distribution (PD) and thus conserves all information. The PD method also allows the calculation of the uncertainty in the NTCP, which is an (often neglected) prerequisite for the intercomparison of both treatment plans and NTCP models. The PD and ML methods are applied to parotid gland data, and the results are compared. Methods and Materials: The drop in salivary flow due to radiotherapy was measured in 25 parotid glands of 15 patients. Together with the parotid gland dose-volume histograms (DVH), this enabled the calculation of the parameter PDs for three different NTCP models (Lyman, relative seriality, and critical volume). From these PDs, the NTCP and its uncertainty could be calculated for arbitrary parotid gland DVHs. ML parameters and resulting NTCP values were calculated also. Results: All models fitted equally well. The parameter PDs turned out to have nonnormal shapes and long tails. The NTCP predictions of the ML and PD methods usually differed considerably, depending on the NTCP model and the nature of the irradiation. NTCP curves and ML parameters suggested a highly parallel organization of the parotid gland. Conclusions: Considering the substantial differences between the NTCP predictions of the ML and PD methods, the use of the PD method is preferred, because this is the only method that takes all information contained in the clinical data into account. Furthermore, the PD method gives a true measure of the uncertainty in the NTCP.
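
    The following sketch illustrates the PD method on synthetic data, with a simple two-parameter logistic dose-response model standing in for the Lyman-type models of the study: the parameter probability distribution is computed on a grid and propagated to an NTCP estimate with an uncertainty interval. All data and parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic complication data: dose (Gy) and binary outcome (1 = complication).
    true_d50, true_k = 30.0, 0.15
    dose = rng.uniform(10, 60, 25)
    outcome = rng.random(25) < 1 / (1 + np.exp(-true_k * (dose - true_d50)))

    def loglik(d50, k):
        p = 1 / (1 + np.exp(-k * (dose - d50)))
        return np.sum(np.where(outcome, np.log(p), np.log(1 - p)))

    # Parameter probability distribution (PD) on a grid with a flat prior.
    d50_grid = np.linspace(15, 50, 120)
    k_grid = np.linspace(0.02, 0.5, 120)
    logL = np.array([[loglik(d, k) for k in k_grid] for d in d50_grid])
    pd_grid = np.exp(logL - logL.max())
    pd_grid /= pd_grid.sum()

    # Propagate the full PD to the NTCP at 35 Gy: mean and central 80% interval.
    ntcp = 1 / (1 + np.exp(-k_grid[None, :] * (35.0 - d50_grid[:, None])))
    mean_ntcp = np.sum(pd_grid * ntcp)
    order = np.argsort(ntcp.ravel())
    cum = np.cumsum(pd_grid.ravel()[order])
    lo = ntcp.ravel()[order][np.searchsorted(cum, 0.10)]
    hi = ntcp.ravel()[order][np.searchsorted(cum, 0.90)]
    print(f"NTCP(35 Gy) = {mean_ntcp:.2f} (80% interval {lo:.2f}-{hi:.2f})")
    ```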

  18. Waste Package Misload Probability

    International Nuclear Information System (INIS)

    Knudsen, J.K.

    2001-01-01

    The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.

  19. Probability analysis of WWER-1000 fuel elements behavior under steady-state, transient and accident conditions of reactor operation

    International Nuclear Information System (INIS)

    Tutnov, A.; Alexeev, E.

    2001-01-01

    The 'PULSAR-2' and 'PULSAR+' codes make it possible to simulate the thermo-mechanical and thermo-physical parameters of WWER fuel elements. A probabilistic approach is used instead of the traditional deterministic one to carry out a sensitivity study of fuel element behavior under the steady-state operation mode. Fuel element initial parameters are given as probability density distributions. Calculations are provided for all possible combinations of initial data such as fuel-cladding gap, fuel density and gas pressure. By dividing the ranges of these parameters into intervals, the final set of calculation variants is obtained. The range of permissible fuel-cladding gap sizes has been divided into 10 equal parts, and those of fuel density and gas pressure into 5 parts each. The probability of each variant's realization is determined by multiplying the probabilities of the separate parameters, because the tolerances of these parameters are distributed independently. Simulation results are presented as probabilistic bar charts. The charts present the probability distribution of the changes in fuel outer diameter, hoop stress kinetics and fuel temperature versus irradiation time. A normative safety factor is introduced to control the realization of each criterion and to determine the margin to criterion failure. A probabilistic analysis of fuel element behavior under a Reactivity Initiated Accident (RIA) is also performed, and the probability of fuel element depressurization under a hypothetical RIA is presented.
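
    A sketch of the variant-enumeration step is given below: each parameter range is split into intervals (10 for the gap, 5 each for density and pressure, giving 250 variants), and each variant's probability is the product of its interval probabilities. The tolerance bands and distributions are hypothetical.

    ```python
    import itertools
    import numpy as np
    from scipy import stats

    def interval_probs(lo, hi, n, dist):
        """Split a tolerance band into n intervals; assign each the probability
        mass of the given distribution truncated (renormalized) to the band."""
        edges = np.linspace(lo, hi, n + 1)
        cdf = dist.cdf(edges)
        p = np.diff(cdf) / (cdf[-1] - cdf[0])
        mid = 0.5 * (edges[:-1] + edges[1:])
        return list(zip(mid, p))

    # Hypothetical tolerance bands; real WWER-1000 values are not in this record.
    gap = interval_probs(0.10, 0.30, 10, stats.norm(0.20, 0.04))      # mm
    density = interval_probs(10.4, 10.8, 5, stats.norm(10.6, 0.08))   # g/cm3
    pressure = interval_probs(2.0, 2.5, 5, stats.norm(2.25, 0.10))    # MPa

    # 10 x 5 x 5 = 250 variants; tolerances are independent, so each variant's
    # probability is the product of its interval probabilities.
    variants = [((g, d, pr), pg * pd_ * pp)
                for (g, pg), (d, pd_), (pr, pp)
                in itertools.product(gap, density, pressure)]
    print(len(variants), "variants, total probability =",
          round(sum(w for _, w in variants), 6))
    ```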

  20. A probability evaluation method of early deterioration condition for the critical components of wind turbine generator systems

    DEFF Research Database (Denmark)

    Hu, Y.; Li, H.; Liao, X

    2016-01-01

    This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain nature of the fluctuation and intermittence of wind, early deterioration condition evaluation poses a challenge to traditional vibration-based methods, so this paper proposes a probability evaluation method of early deterioration condition for critical components based only on temperature characteristic parameters. First, the dynamic threshold of the deterioration degree function was proposed by analyzing the operational data between temperature and rotor speed. Second, a probability evaluation method of early deterioration condition was presented. Finally, two cases showed the validity of the proposed probability evaluation method in detecting early deterioration condition and in tracking further deterioration of the critical components.

  1. The Probability of Neonatal Respiratory Distress Syndrome as a Function of Gestational Age and Lecithin/Sphingomyelin Ratio

    Science.gov (United States)

    St. Clair, Caryn; Norwitz, Errol R.; Woensdregt, Karlijn; Cackovic, Michael; Shaw, Julia A.; Malkus, Herbert; Ehrenkranz, Richard A.; Illuzzi, Jessica L.

    2011-01-01

    We sought to define the risk of neonatal respiratory distress syndrome (RDS) as a function of both lecithin/sphingomyelin (L/S) ratio and gestational age. Amniotic fluid L/S ratio data were collected from consecutive women undergoing amniocentesis for fetal lung maturity at Yale-New Haven Hospital from January 1998 to December 2004. Women were included in the study if they delivered a live-born, singleton, nonanomalous infant within 72 hours of amniocentesis. The probability of RDS was modeled using multivariate logistic regression with L/S ratio and gestational age as predictors. A total of 210 mother-neonate pairs (8 RDS, 202 non-RDS) met criteria for analysis. Both gestational age and L/S ratio were independent predictors of RDS. A probability of RDS of 3% or less was noted at an L/S ratio cutoff of ≥3.4 at 34 weeks, ≥2.6 at 36 weeks, ≥1.6 at 38 weeks, and ≥1.2 at term. Under 34 weeks of gestation, the prevalence of RDS was so high that a probability of 3% or less was not observed by this model. These data describe a means of stratifying the probability of neonatal RDS using both gestational age and the L/S ratio and may aid in clinical decision making concerning the timing of delivery. PMID:18773379
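
    A sketch of how such a two-predictor logistic model yields a probability of RDS is shown below; the coefficients are purely illustrative (the fitted values are not reported in this record), so the printed probabilities will not reproduce the cutoffs quoted above.

    ```python
    import math

    def rds_probability(ls_ratio, ga_weeks, b0, b_ls, b_ga):
        """Logistic model for P(RDS) with L/S ratio and gestational age as predictors."""
        logit = b0 + b_ls * ls_ratio + b_ga * ga_weeks
        return 1.0 / (1.0 + math.exp(-logit))

    # Illustrative coefficients only: both predictors should lower the RDS risk,
    # hence the negative signs; the intercept is chosen for plausible magnitudes.
    b0, b_ls, b_ga = 24.0, -2.2, -0.6

    for ls, ga in [(3.4, 34), (2.6, 36), (1.6, 38), (1.2, 40)]:
        p = rds_probability(ls, ga, b0, b_ls, b_ga)
        print(f"L/S {ls}, {ga} wk: P(RDS) = {p:.3f}")
    ```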

  2. Use of heterogeneous finite elements generated by collision probability solutions to calculate a pool reactor core

    International Nuclear Information System (INIS)

    Calabrese, C.R.; Grant, C.R.

    1990-01-01

    This work presents comparisons between measured fluxes, obtained by activation of manganese foils in the light water, enriched uranium research pool reactor RA-2 (MTR, i.e. Materials Testing Reactor, fuel element), and fluxes calculated by the finite element method (FEM) using the DELFIN code, which describes the heterogeneous finite elements by a set of solutions of the transport equations for several different configurations obtained using the collision probability code HUEMUL. The agreement between calculated and measured fluxes is good, and the advantage of using FEM is shown: to obtain the flux distribution with the same detail, a usual diffusion calculation would need 12,000 mesh points against the 2,000 points that FEM uses, hence the processing time is reduced by a factor of ten. An interesting alternative for use in MTR fuel management is presented. (Author)

  3. Heightened fire probability in Indonesia in non-drought conditions: the effect of increasing temperatures

    Science.gov (United States)

    Fernandes, Kátia; Verchot, Louis; Baethgen, Walter; Gutierrez-Velez, Victor; Pinedo-Vasquez, Miguel; Martius, Christopher

    2017-05-01

    In Indonesia, drought-driven fires occur typically during the warm phase of the El Niño Southern Oscillation. This was the case for the events of 1997 and 2015 that resulted in months-long hazardous atmospheric pollution levels in Equatorial Asia and record greenhouse gas emissions. Nonetheless, anomalously active fire seasons have also been observed in non-drought years. In this work, we investigated the impact of temperature on fires and found that when the July-October (JASO) period is anomalously dry, the sensitivity of fires to temperature is modest. In contrast, under normal-to-wet conditions, fire probability increases sharply when JASO is anomalously warm. This describes a regime in which an active fire season is not limited to drought years. Greater susceptibility to fires in response to a warmer environment finds support in the high evapotranspiration rates observed in normal-to-wet and warm conditions in Indonesia. We also find that fire probability in wet JASOs would be considerably less sensitive to temperature were it not for the added effect of recent positive trends. Near-term regional climate projections reveal that, despite negligible changes in precipitation, a continuing warming trend will heighten fire probability over the next few decades, especially in non-drought years. Mild fire seasons currently observed in association with wet conditions and cool temperatures will become rare events in Indonesia.

  4. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    Science.gov (United States)

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
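
    A simplified stand-in for the likelihood machinery above: a renewal-type estimate of the mean return time from dated deposits, with age-dating uncertainty propagated by Monte Carlo, and a Poisson probability of future failure. The ages and uncertainties are hypothetical, and the open-interval corrections of the cited methods are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical deposit ages (ka before present) with 1-sigma dating errors;
    # the actual Ursa Basin ages are in the cited study.
    ages = np.array([12.0, 19.5, 31.0, 44.0, 58.5])
    sigmas = np.array([1.0, 1.5, 2.0, 2.5, 3.0])

    mus = []
    for _ in range(10000):
        sample = np.sort(rng.normal(ages, sigmas))   # resample dates
        mus.append(np.mean(np.diff(sample)))         # mean inter-event time, ka
    mu = float(np.mean(mus))
    lo, hi = np.percentile(mus, [5, 95])
    print(f"mean return time ~ {mu:.1f} ka (90% band {lo:.1f}-{hi:.1f})")

    # Under a Poisson model, failures are independent with exponential return
    # times, so the probability of at least one event in the next t ka is:
    t = 10.0
    print(f"P(>=1 failure in next {t:.0f} ka) = {1 - np.exp(-t / mu):.2f}")
    ```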

  5. Approximate first collision probabilities for neutrons in cylindrical and cluster lattices

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1979-05-01

    Methods for calculating approximate first collision probabilities for neutrons in cylindrical and cluster lattices are presented and compared with numerical solution methods. The methods differ from those of other authors in the inclusion of anisotropic boundary conditions for both geometries. The methods, which are fast enough for routine use in multigroup and resonance subgroup calculations, have been coded in FORTRAN and included in modules of the AUS scheme for reactor neutronics calculations

  6. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  7. New results to BDD truncation method for efficient top event probability calculation

    International Nuclear Information System (INIS)

    Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang

    2012-01-01

    A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem, since its memory consumption is very high. Recently, in order to solve a large reliability problem within limited computational resources, Jung presented an efficient method to maintain a small BDD size by a BDD truncation during a BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. Then, a more efficient truncation algorithm is proposed, which can generate a truncated BDD with smaller size and an approximate TEP with smaller truncation error. Empirical results showed that this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features in this sense would be that, as the truncation limit decreases, the size of the truncated BDD converges to the size of the exact BDD, while never exceeding it.
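
    The sketch below conveys the idea of truncation error without implementing a BDD: the TEP of a small hypothetical tree (with a repeated basic event, the case that defeats naive gate-by-gate arithmetic) is computed exactly by state enumeration, then with low-probability states truncated and the discarded mass kept as an error bound.

    ```python
    from itertools import product

    # Hypothetical fault tree: TE = (e1 AND e2) OR (e1 AND e3) OR e4, with e1
    # repeated; all basic-event probabilities are illustrative.
    probs = {"e1": 0.01, "e2": 0.02, "e3": 0.05, "e4": 0.03}

    def top(s):
        return (s["e1"] and s["e2"]) or (s["e1"] and s["e3"]) or s["e4"]

    def tep(truncation=0.0):
        kept, discarded = 0.0, 0.0
        for bits in product([False, True], repeat=len(probs)):
            state = dict(zip(probs, bits))
            p = 1.0
            for e, up in state.items():
                p *= probs[e] if up else 1.0 - probs[e]
            if p < truncation:
                discarded += p          # upper bound on the truncation error
            elif top(state):
                kept += p
        return kept, discarded

    exact, _ = tep()
    approx, err = tep(truncation=1e-4)
    print(f"exact TEP = {exact:.6f}; truncated TEP = {approx:.6f} (error <= {err:.1e})")
    ```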

  8. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    Energy Technology Data Exchange (ETDEWEB)

    Wampler, William R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Myers, Samuel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Modine, Normand A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  9. Hawkes-diffusion process and the conditional probability of defaults in the Eurozone

    Science.gov (United States)

    Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin

    2016-05-01

    This study examines market information embedded in the European sovereign CDS (credit default swap) market by analyzing the sovereign CDSs of 13 Eurozone countries from January 1, 2008, to February 29, 2012, which includes the recent Eurozone debt crisis period. We design the conditional probability of defaults for the CDS prices based on the Hawkes-diffusion process and obtain the theoretical prices of CDS indexes. To estimate the model parameters, we calibrate the model prices to empirical prices obtained from individual sovereign CDS term structure data. The estimated parameters clearly explain both cross-sectional and time-series data. Our empirical results show that the probability of a huge loss event sharply increased during the Eurozone debt crisis, indicating a contagion effect. Even countries with strong and stable economies, such as Germany and France, suffered from the contagion effect. We also find that the probability of small events is sensitive to the state of the economy, spiking several times due to the global financial crisis and the Greek government debt crisis.

  10. Probability analysis of MCO over-pressurization during staging

    International Nuclear Information System (INIS)

    Pajunen, A.L.

    1997-01-01

    The purpose of this calculation is to determine the probability of Multi-Canister Overpacks (MCOs) over-pressurizing during staging at the Canister Storage Building (CSB). Pressurization of an MCO during staging is dependent upon changes to the MCO gas temperature and the build-up of reaction products during the staging period. These effects are predominantly limited by the amount of water that remains in the MCO following cold vacuum drying that is available for reaction during staging conditions. Because of the potential for increased pressure within an MCO, provisions for a filtered pressure relief valve and rupture disk have been incorporated into the MCO design. This calculation provides an estimate of the frequency with which an MCO will contain enough water to pressurize beyond the limits of these design features. The results of this calculation will be used in support of further safety analyses and operational planning efforts. Under the bounding steady-state CSB condition assumed for this analysis, an MCO must contain less than 1.6 kg (3.7 lbm) of water available for reaction to preclude actuation of the pressure relief valve at 100 psid. To preclude actuation of the MCO rupture disk at 150 psid, an MCO must contain less than 2.5 kg (5.5 lbm) of water available for reaction. These limits are based on the assumption that hydrogen generated by uranium-water reactions is the sole source of gas produced within the MCO and that hydrates in fuel particulate are the primary source of water available for reactions during staging conditions. The results of this analysis conclude that the probability of the hydrate water content of an MCO exceeding 1.6 kg is 0.08 and the probability that it will exceed 2.5 kg is 0.01. This implies that approximately 32 of 400 staged MCOs may experience pressurization to the point where the pressure relief valve actuates. In the event that an MCO pressure relief valve fails to open, the probability is 1 in 100 that the MCO would experience pressurization to the point where the rupture disk actuates.

  11. A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    International Nuclear Information System (INIS)

    Niestegge, Gerd

    2010-01-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lueders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases. (general)

  12. Calculation of optimal outdoor enclosure in the arctic conditions

    Science.gov (United States)

    Tarabukina, Sardaana; Simankina, Tatyana; Pykhtin, Kirill; Grabovyy, Kirill

    2017-10-01

    Defining the optimal thickness of thermal insulating materials, preventing frost penetration and overheating, and providing proper thermal efficiency are important problems in arctic conditions. This article demonstrates the results of thermotechnical calculations of enclosing constructions performed with the SHADDAN software and economic calculations made with the RIK software. These results allowed us to perform a comparative analysis of two building technologies: «thermal block» and «render system». Both options met regulatory heat transfer requirements. However, regarding cost efficiency, use of the «thermal block» technology is more effective in arctic conditions.

  13. Statistical probability tables CALENDF program

    International Nuclear Information System (INIS)

    Ribon, P.

    1989-01-01

    The purpose of the probability tables is to obtain a dense data representation and to calculate integrals by quadratures. They are mainly used in the USA for calculations by Monte Carlo and in the USSR and Europe for self-shielding calculations by the sub-group method. The moment probability tables, in addition to providing a more substantial mathematical basis and calculation methods, are adapted for condensation and mixture calculations, which are the crucial operations for reactor physics specialists. However, their extension is limited by the statistical hypothesis they imply. Efforts are being made to remove this obstacle, at the cost, it must be said, of greater complexity.
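
    A sketch of the quadrature use of a probability table follows: within one energy group the resonant cross section is represented by a few (weight, value) pairs, and a self-shielded effective cross section is obtained under narrow-resonance flux weighting. The table entries and background cross sections are hypothetical, not CALENDF output.

    ```python
    # A probability table: (p_i, sigma_i) pairs for one energy group, barns.
    table = [(0.30, 2.0), (0.40, 15.0), (0.25, 120.0), (0.05, 2500.0)]

    def effective_sigma(table, sigma_background):
        """Self-shielded group cross section by quadrature over the table,
        with narrow-resonance flux weighting phi ~ 1/(sigma + sigma_0)."""
        num = sum(p * s / (s + sigma_background) for p, s in table)
        den = sum(p / (s + sigma_background) for p, s in table)
        return num / den

    for sigma0 in (1e4, 1e2, 1e1):   # from near-infinite dilution to strong shielding
        print(f"sigma_0 = {sigma0:8.0f} b -> sigma_eff = "
              f"{effective_sigma(table, sigma0):7.2f} b")
    ```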

  14. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
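
    The combined estimate follows from inverse-variance weighting, which is the minimum-variance combination for uncorrelated errors; the sketch below reproduces the 0.79-year combined standard deviation from the reported 0.97 and 1.35 years (the example ages are hypothetical).

    ```python
    import math

    # Standard deviations of the errors reported in the abstract (years).
    sd_hand, sd_teeth = 0.97, 1.35

    w_hand = 1 / sd_hand**2
    w_teeth = 1 / sd_teeth**2
    sd_combined = math.sqrt(1 / (w_hand + w_teeth))
    print(f"combined SD = {sd_combined:.2f} years")   # ~0.79, as in the abstract

    def combined_age(age_hand, age_teeth):
        """Weighted average of the two age estimates (uncorrelated errors)."""
        return (w_hand * age_hand + w_teeth * age_teeth) / (w_hand + w_teeth)

    print(f"hand 16.0 y, teeth 17.0 y -> combined {combined_age(16.0, 17.0):.2f} y")
    ```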

  15. Survival probability for diffractive dijet production in p anti p collisions from next-to-leading order calculations

    International Nuclear Information System (INIS)

    Klasen, M.; Kramer, G.

    2009-08-01

    We perform next-to-leading order calculations of the single-diffractive and non-diffractive cross sections for dijet production in proton-antiproton collisions at the Tevatron. By comparing their ratio to the data published by the CDF collaboration for two different center-of-mass energies, we deduce the rapidity-gap survival probability as a function of the momentum fraction of the parton in the antiproton. Assuming Regge factorization, this probability can be interpreted as a suppression factor for the diffractive structure function measured in deep-inelastic scattering at HERA. In contrast to the observations for photoproduction, the suppression factor in proton-antiproton collisions depends on the momentum fraction of the parton in the Pomeron even at next-to-leading order. (orig.)

  16. Condition-based fault tree analysis (CBFTA): A new method for improved fault tree analysis (FTA), reliability and safety calculations

    International Nuclear Information System (INIS)

    Shalev, Dan M.; Tiran, Joseph

    2007-01-01

    Condition-based maintenance methods have changed systems reliability in general and individual systems in particular. Yet, this change does not affect system reliability analysis. System fault tree analysis (FTA) is performed during the design phase. It uses component failure rates derived from available sources such as handbooks, etc. Condition-based fault tree analysis (CBFTA) starts with the known FTA. Condition monitoring (CM) methods applied to systems (e.g. vibration analysis, oil analysis, electric current analysis, bearing CM, electric motor CM, and so forth) are used to determine updated failure rate values of sensitive components. The CBFTA method accepts the updated failure rates and applies them to the FTA. The CBFTA periodically recalculates the top event (TE) failure rate (λ_TE), thus determining the probability of system failure and the probability of successful system operation, i.e. the system's reliability. FTA is a tool for enhancing system reliability during the design stages, but it has disadvantages, mainly that it does not relate to a specific system undergoing maintenance. CBFTA is a tool for updating the reliability values of a specific system and for calculating the residual life according to the system's monitored conditions. Using CBFTA, the original FTA is ameliorated into a practical tool for use during the system's field life phase, not just during the system design phase. This paper describes the CBFTA method, and its advantages are demonstrated by an example.
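
    A minimal CBFTA-style sketch is given below: the fault tree structure is fixed at design time, but component failure rates are updated from condition monitoring and λ_TE is recalculated. The tree here is a simple series system (OR gate), for which constant failure rates add; all components and rates are hypothetical, not the paper's example.

    ```python
    import math

    design_rates = {"pump": 2e-5, "motor": 1e-5, "bearing": 5e-6}  # failures/hour

    def lambda_te(rates):
        """Top event rate for an OR gate over independent components in series."""
        return sum(rates.values())

    print("design  lambda_TE =", lambda_te(design_rates), "per hour")

    # Vibration analysis indicates bearing degradation: revise its rate upward
    # and recalculate lambda_TE and the mission reliability.
    monitored = dict(design_rates, bearing=4e-5)
    lam = lambda_te(monitored)
    print("updated lambda_TE =", lam, "per hour")
    print("P(system survives one year) =", round(math.exp(-lam * 8760.0), 4))
    ```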

  17. Flipping Out: Calculating Probability with a Coin Game

    Science.gov (United States)

    Degner, Kate

    2015-01-01

    In the author's experience with this activity, students struggle with the idea of representativeness in probability. Therefore, this student misconception is part of the classroom discussion about the activities in this lesson. Representativeness is related to the (incorrect) idea that outcomes that seem more random are more likely to happen. This…

  18. Calculation of the Chilling Requirement for Air Conditioning in the Excavation Roadway

    Directory of Open Access Journals (Sweden)

    Yueping Qin

    2015-10-01

    To effectively improve the climatic conditions of excavation roadways in coal mines, calculation of the chilling requirement for air conditioning measures is extremely necessary. The temperature field of the surrounding rock with moving boundary in the excavation roadway was numerically simulated using the finite volume method. The unstable heat transfer coefficient between the surrounding rock and the air flow was obtained from this calculation. According to the coupling effects of the air flow inside and outside the air duct, a differential calculation mathematical model of the air flow temperature in the excavation roadway was established. The chilling requirement was calculated with a self-developed computer program for forecasting the required cooling capacity of the excavation roadway. A good air conditioning effect was observed after applying the calculated results in a field trial, which indicated that the prediction method and calculation procedure are reliable.

  19. Explaining regional disparities in traffic mortality by decomposing conditional probabilities.

    Science.gov (United States)

    Goldstein, Gregory P; Clark, David E; Travis, Lori L; Haskins, Amy E

    2011-04-01

    In the USA, the mortality rate from traffic injury is higher in rural and in southern regions, for reasons that are not well understood. For 1754 (56%) of the 3142 US counties, we obtained data allowing for separation of the deaths/population rate into deaths/injury, injuries/crash, crashes/exposure and exposure/population, with exposure measured as vehicle miles travelled. A 'decomposition method' proposed by Li and Baker was extended to study how the contributions of these components were affected by three measures of rural location, as well as southern location. The method of Li and Baker extended without difficulty to include non-binary effects and multiple exposures. Deaths/injury was by far the most important determinant in the county-to-county variation in deaths/population, and accounted for the greatest portion of the rural/urban disparity. After controlling for the rural effect, injuries/crash accounted for most of the southern/northern disparity. The increased mortality rate from traffic injury in rural areas can be attributed to the increased probability of death given that a person has been injured, possibly due to challenges faced by emergency medical response systems. In southern areas, there is an increased probability of injury given that a person has crashed, possibly due to differences in vehicle, road, or driving conditions.
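
    The decomposition can be made concrete with a few lines of arithmetic: deaths/population is the product of the four conditional factors named above. The component values below are hypothetical for two illustrative counties.

    ```python
    # Columns: deaths/injury, injuries/crash, crashes/VMT, VMT/person
    # (exposure measured as vehicle miles travelled, VMT).
    counties = {
        "rural": (0.020, 0.45, 2.0e-6, 12000),
        "urban": (0.008, 0.40, 2.4e-6, 8000),
    }

    for name, (d_i, i_c, c_vmt, vmt_p) in counties.items():
        deaths_per_person = d_i * i_c * c_vmt * vmt_p   # chain of conditionals
        print(f"{name}: {deaths_per_person * 1e5:.1f} deaths per 100,000 population")
    ```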

  20. Thermal and mechanical quantitative sensory testing in Chinese patients with burning mouth syndrome--a probable neuropathic pain condition?

    Science.gov (United States)

    Mo, Xueyin; Zhang, Jinglu; Fan, Yuan; Svensson, Peter; Wang, Kelun

    2015-01-01

    To explore the hypothesis that burning mouth syndrome (BMS) probably is a neuropathic pain condition, thermal and mechanical sensory and pain thresholds were tested and compared with age- and gender-matched control participants using a standardized battery of psychophysical techniques. Twenty-five BMS patients (men: 8, women: 17, age: 49.5 ± 11.4 years) and 19 age- and gender-matched healthy control participants were included. The cold detection threshold (CDT), warm detection threshold (WDT), cold pain threshold (CPT), heat pain threshold (HPT), mechanical detection threshold (MDT) and mechanical pain threshold (MPT), in accordance with the German Network of Neuropathic Pain guidelines, were measured at the following four sites: the dorsum of the left hand (hand), the skin at the mental foramen (chin), the tip of the tongue (tongue), and the mucosa of the lower lip (lip). Statistical analysis was performed using ANOVA with repeated measures to compare the means within and between groups. Furthermore, Z-score profiles were generated, and exploratory correlation analyses between QST and clinical variables were performed. Two-tailed tests with a significance level of 5% were used throughout. CDTs (P < 0.02) were significantly lower (less sensitivity) and HPTs (P < 0.001) were significantly higher (less sensitivity) at the tongue and lip in BMS patients compared to control participants. WDT (P = 0.007) was also significantly higher at the tongue in BMS patients compared to control subjects. There were no significant differences in MDT and MPT between the BMS patients and healthy subjects at any of the four test sites. Z-scores showed that significant loss of function can be identified for CDT (Z-scores = -0.9±1.1) and HPT (Z-scores = 1.5±0.4). There were no significant correlations between QST and clinical variables (pain intensity, duration, depression scores). BMS patients thus had a significant loss of thermal function but not of mechanical function, consistent with a probable neuropathic pain condition.

  1. The Work Sample Verification and the Calculation of the Statistical, Mathematical and Economical Probability for the Risks of the Direct Procurement

    Directory of Open Access Journals (Sweden)

    Lazăr Cristiana Daniela

    2017-01-01

    Each organization has, among the multiple secondary objectives subordinated to its central one, that of avoiding contingencies. Direct procurement is carried out on the market through SEAP (Electronic System of Public Procurement), and performing management in a public institution has risk management as one of its foundations. The risks may be investigated by econometric simulation, carried out by means of the calculus of probability and of work sampling for determining the relevance of these probabilities.

  2. Interactive effects of senescence and natural disturbance on the annual survival probabilities of snail kites

    Science.gov (United States)

    Reichert, Brian E.; Martin, J.; Kendall, William L.; Cattau, Christopher E.; Kitchens, Wiley M.

    2010-01-01

    Individuals in wild populations face risks associated with both intrinsic (i.e. aging) and external (i.e. environmental) sources of mortality. Condition-dependent mortality occurs when there is an interaction between such factors; however, few studies have clearly demonstrated condition-dependent mortality and some have even argued that condition-dependent mortality does not occur in wild avian populations. Using large sample sizes (2084 individuals, 3746 re-sights) of individual-based longitudinal data collected over a 33-year period (1976-2008) on multiple cohorts, we used a capture-mark-recapture framework to model age-dependent survival in the snail kite Rostrhamus sociabilis plumbeus population in Florida. Adding to the growing amount of evidence for actuarial senescence in wild populations, we found evidence of senescent declines in survival probabilities in adult kites. We also tested the hypothesis that older kites experienced condition-dependent mortality during a range-wide drought event (2000-2002). The results provide convincing evidence that the annual survival probability of senescent kites was disproportionately affected by the drought relative to the survival probability of prime-aged adults. To our knowledge, this is the first evidence of condition-dependent mortality to be demonstrated in a wild avian population, a finding which challenges recent conclusions drawn in the literature. Our study suggests that senescence and condition-dependent mortality can affect the demography of wild avian populations. Accounting for these sources of variation may be particularly important to appropriately compute estimates of population growth rate, and probabilities of quasi-extinctions.

  3. Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?

    Science.gov (United States)

    Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin

    2014-08-01

    Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of disease probability estimates. In this study our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities, by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for the inaccuracy of disease probability estimates and to explore ways of improving accuracy.
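
    The Bayesian calculation the residents were expected to perform reduces to odds multiplication; a minimal sketch with a hypothetical vignette follows (the test characteristics and pre-test probability are illustrative, not from the study).

    ```python
    def post_test_probability(pre_test_p, likelihood_ratio):
        """Bayesian updating with odds: post-test odds = pre-test odds x LR."""
        pre_odds = pre_test_p / (1.0 - pre_test_p)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    # Hypothetical vignette: pre-test probability 20%, positive test with
    # sensitivity 0.90 and specificity 0.80, so LR+ = 0.90 / (1 - 0.80) = 4.5.
    print(round(post_test_probability(0.20, 4.5), 2))   # -> 0.53
    ```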

  4. Estimation of the age-specific per-contact probability of Ebola virus transmission in Liberia using agent-based simulations

    Science.gov (United States)

    Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios

    2016-06-01

    Based on multiscale agent-based computations, we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network, as well as the per-contact transmission probabilities by age group, we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO), as well as to recently published data on key epidemiological variables, such as the mean time to death, recovery and the case fatality rate.

  5. Qualification of the calculational methods of the fluence in the pressurised water reactors. Improvement of the cross sections treatment by the probability table method

    International Nuclear Information System (INIS)

    Zheng, S.H.

    1994-01-01

    It is indispensable to know the fluence on the nuclear reactor pressure vessel. The cross sections and their treatment play an important role in this problem. In this study, two ''benchmarks'' have been interpreted with the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it presents some problems, such as the difficulty of choosing the weighting function and the need for a great number of energy groups to represent the cross-section fluctuations well. In this thesis, we propose a new method called the ''Probability Table Method'' to treat the neutron cross sections. For its qualification, a program simulating neutron transport by the Monte Carlo method in one dimension has been written; the comparison of the multigroup results with the probability table results shows the advantages of this new method. The probability tables have also been introduced into the TRIPOLI program; the calculated results for the iron deep-penetration benchmark are improved in comparison with the experimental results. It is therefore of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs

  6. A Comparison of Urge Intensity and the Probability of Tic Completion During Tic Freely and Tic Suppression Conditions.

    Science.gov (United States)

    Specht, Matt W; Nicotra, Cassandra M; Kelly, Laura M; Woods, Douglas W; Ricketts, Emily J; Perry-Parrish, Carisa; Reynolds, Elizabeth; Hankinson, Jessica; Grados, Marco A; Ostrander, Rick S; Walkup, John T

    2014-03-01

    Tic-suppression-based treatments (TSBTs) represent a safe and effective treatment option for Chronic Tic Disorders (CTDs). Prior research has demonstrated that treatment-naive youths with CTDs have the capacity to safely and effectively suppress tics for prolonged periods. It remains unclear how tic suppression is achieved. The current study principally examines how effective suppression is achieved, and preliminary correlates of the ability to suppress tics. Twelve youths, ages 10 to 17 years, with moderate-to-marked CTDs participated in an alternating sequence of tic-freely and reinforced tic suppression conditions during which urge intensity and tic frequency were frequently assessed. Tics were half as likely to occur following high-intensity urges during tic suppression (31%) as following low-intensity urges during tic-freely conditions (60%). Age was not associated with ability to suppress. Intelligence indices were associated with, or trended toward, greater ability to suppress tics. Attention difficulties were not associated with ability to suppress but were associated with tic severity. In contrast to our "selective suppression" hypothesis, we found participants equally capable of suppressing their tics regardless of urge intensity during reinforced tic suppression. Tic suppression was achieved with an "across-the-board" effort to resist urges. Preliminary data suggest that ability to suppress may be associated with general cognitive variables rather than age, tic severity, urge severity, and attention. Treatment-naive youths appear to possess a capacity for robust tic suppression. TSBTs may bolster these capacities and/or enable their broader implementation, resulting in symptom improvement. © The Author(s) 2014.

  7. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.
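
    One straightforward route to Wallenius probabilities, via the distribution's standard integral representation, can be sketched as follows; the urn parameters are illustrative. (This is only the numerical-integration route; the paper derives and compares several methods.)

    ```python
    from math import comb
    from scipy.integrate import quad

    def wallenius_pmf(x, n, m1, m2, omega):
        """P(X = x) for Wallenius' noncentral hypergeometric distribution:
        m1 'red' and m2 'white' items, n draws without replacement, bias omega."""
        if not (max(0, n - m2) <= x <= min(n, m1)):
            return 0.0
        d = omega * (m1 - x) + (m2 - (n - x))
        integrand = lambda t: (1 - t**(omega / d))**x * (1 - t**(1 / d))**(n - x)
        integral, _ = quad(integrand, 0, 1)
        return comb(m1, x) * comb(m2, n - x) * integral

    # Biased urn: 6 red (weight 2), 8 white (weight 1), 5 draws.
    pmf = [wallenius_pmf(x, 5, 6, 8, 2.0) for x in range(6)]
    print([round(p, 4) for p in pmf], "sum =", round(sum(pmf), 6))
    ```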

  8. Calculation of probability density functions for temperature and precipitation change under global warming

    International Nuclear Information System (INIS)

    Watterson, Ian G.

    2007-01-01

    The IPCC Fourth Assessment Report (Meehl et al. 2007) presents multi-model means of the CMIP3 simulations as projections of the global climate change over the 21st century under several SRES emission scenarios. To assess the possible range of change for Australia based on the CMIP3 ensemble, we can follow Whetton et al. (2005) and use the 'pattern scaling' approach, which separates the uncertainty in the global mean warming from that in the local change per degree of warming. This study presents several ways of representing these two factors as probability density functions (PDFs). The beta distribution, a smooth, bounded function allowing skewness, is found to provide a useful representation of the range of CMIP3 results. A weighting of models based on their skill in simulating seasonal means in the present climate over Australia is included. Dessai et al. (2005) and others have used Monte Carlo sampling to recombine such global warming and scaled change factors into values of net change. Here, we use a direct integration of the product across the joint probability space defined by the two PDFs. The result is a cumulative distribution function (CDF) for change, for each variable, location, and season. The median of this distribution provides a best estimate of change, while the 10th and 90th percentiles represent a likely range. The probability of exceeding a specified threshold can also be extracted from the CDF. The presentation focuses on changes in Australian temperature and precipitation at 2070 under the A1B scenario. However, the assumption of linearity behind pattern scaling allows results for different scenarios and times to be simply obtained. In the case of precipitation, which must remain non-negative, a simple modification of the calculations (based on decreases being exponential with warming) is used to avoid unrealistic results. These approaches are currently being used for the new CSIRO/Bureau of Meteorology climate projections.
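
    The direct integration of the product across the joint probability space can be sketched as follows; the beta shape parameters and ranges are hypothetical, not the calibrated CMIP3 values.

    ```python
    import numpy as np
    from scipy import stats

    # Two beta-distributed factors, as in the abstract: global mean warming
    # (scaled to 1.5-4.0 K) and local change per degree of warming (0.5-1.5 K/K).
    g_dist = stats.beta(2.5, 3.5, loc=1.5, scale=2.5)   # global warming, K
    s_dist = stats.beta(3.0, 3.0, loc=0.5, scale=1.0)   # local scaling, K/K

    # Tabulate change = g*s with weight f(g)h(s) on a grid, then sort to get
    # the CDF of net local change (direct integration, not Monte Carlo).
    g = np.linspace(1.5, 4.0, 500)
    s = np.linspace(0.5, 1.5, 500)
    w = np.outer(g_dist.pdf(g), s_dist.pdf(s))
    w /= w.sum()
    change = np.outer(g, s).ravel()
    order = np.argsort(change)
    cdf = np.cumsum(w.ravel()[order])

    for q, label in [(0.10, "10th pct"), (0.50, "median"), (0.90, "90th pct")]:
        val = change[order][np.searchsorted(cdf, q)]
        print(f"{label} local change ~ {val:.2f} K")
    ```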

  9. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Contents: Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation. Generating Functions. Branching Processes. Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...

  10. Genetic evaluation of weaning weight and probability of lambing at 1 year of age in Targhee lambs

    Science.gov (United States)

    The objective of this study was to investigate genetic control of 120-day weaning weight and the probability of lambing at 1 year of age in Targhee ewe lambs. Records of 5,967 ewe lambs born from 1989 to 2012 and first exposed to rams for breeding at approximately 7 months of age were analyzed. Reco...

  11. Age dynamic of physical condition changes in pre-school age girls, schoolgirls and students, living in conditions of Eastern Siberia

    Directory of Open Access Journals (Sweden)

    V.Y. Lebedinskiy

    2017-12-01

    Purpose: to analyze the dynamics of physical condition, considering the sex (female) and age of the tested, living in a region with unfavorable ecology. Material: we studied pre-school age girls (n=1580, age 4-7 years). Children with chronic diseases who were under medical observation were not included in the research. We also tested schoolgirls (n=3211, age 7-17 years) and girl students (n=5827, age 17-21 years, 1-4 years of study). Girl students were divided into five age groups, from 17 to 21 years. All participants lived in conditions of Eastern Siberia (Irkutsk). This region is characterized by unfavorable ecological and climatic-geographic characteristics. Results: in the dynamics of the physical condition of pre-school girls, schoolgirls and students, we marked out three substantial periods of change in its characteristics. Age 7-8 years is critical (transition from the 1st to the 2nd stage). The lowest values of these characteristics are found at older ages (after 17-18 years). In students we observed a relative stabilization of these indicators. Conclusions: the obtained results should be considered when building the physical education training process in pre-school educational establishments, secondary comprehensive schools and higher educational establishments.

  12. Orthogonal Algorithm of Logic Probability and Syndrome-Testable Analysis

    Institute of Scientific and Technical Information of China (English)

    1990-01-01

    A new method, the orthogonal algorithm, is presented to compute logic probabilities (i.e. signal probabilities) accurately. The transfer properties of logic probabilities are studied first, which are useful for the calculation of the logic probability of a circuit with random independent inputs. Then the orthogonal algorithm is described for computing the logic probability of a Boolean function realized by a combinational circuit. This algorithm makes the Boolean function ''orthogonal'', so that the logic probability can be easily calculated by summing up the logic probabilities of all orthogonal terms of the Boolean function.
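
    A small worked example of the orthogonal-sum idea, assuming independent random inputs with illustrative signal probabilities:

    ```python
    # Signal probability of f = a OR b with independent random inputs.
    # Orthogonalization rewrites f as disjoint terms: a OR b = a + a'b,
    # so the probabilities of the terms can simply be summed.
    p_a, p_b = 0.3, 0.6

    p_or_orthogonal = p_a + (1 - p_a) * p_b     # sum over orthogonal terms
    p_or_direct = 1 - (1 - p_a) * (1 - p_b)     # cross-check
    assert abs(p_or_orthogonal - p_or_direct) < 1e-12
    print(p_or_orthogonal)                      # 0.72

    # The same idea for f = ab OR c, orthogonalized as ab + (ab)'c:
    p_c = 0.5
    p_f = p_a * p_b + (1 - p_a * p_b) * p_c
    print(p_f)                                  # 0.59
    ```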

  13. Collision probability in two-dimensional lattice by ray-trace method and its applications to cell calculations

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro

    1985-03-01

    A series of formulations to evaluate collision probability for multi-region cells expressed by either of three one-dimensional coordinate systems (plane, sphere and cylinder) or by the general two-dimensional cylindrical coordinate system is presented. They are expressed in a suitable form to have a common numerical process named ''Ray-Trace'' method. Applications of the collision probability method to two optional treatments for the resonance absorption are presented. One is a modified table-look-up method based on the intermediate resonance approximation, and the other is a rigorous method to calculate the resonance absorption in a multi-region cell in which nearly continuous energy spectra of the resonance neutron range can be solved and interaction effect between different resonance nuclides can be evaluated. Two works on resonance absorption in a doubly heterogeneous system with grain structure are presented. First, the effect of a random distribution of particles embedded in graphite diluent on the resonance integral is studied. Next, the ''Accretion'' method proposed by Leslie and Jonsson to define the collision probability in a doubly heterogeneous system is applied to evaluate the resonance absorption in coated particles dispersed in fuel pellet of the HTGR. Several optional models are proposed to define the collision rates in the medium with the microscopic heterogeneity. By making use of the collision probability method developed by the present study, the JAERI thermal reactor standard nuclear design code system SRAC has been developed. Results of several benchmark tests for the SRAC are presented. The analyses of critical experiments of the SHE, DCA, and FNR show good agreement of critical masses with their experimental values. (J.P.N.)

  14. Calculation of age-dependent effective doses for external exposure using the MCNP code

    International Nuclear Information System (INIS)

    Hung, Tran Van

    2013-01-01

    Age-dependent effective doses for external exposure to photons uniformly distributed in air were calculated. Firstly, organ doses were calculated with a series of age-specific MIRD-5 type phantoms using the Monte Carlo code MCNP. The calculations were performed for mono-energetic photon sources with source energies from 10 keV to 5 MeV and for phantoms of a newborn, 1-, 5-, 10-, and 15-year-olds and an adult. Then, the effective doses to the different age phantoms from the mono-energetic photon sources were estimated based on the obtained organ doses. From the calculated results, it is shown that the effective doses depend on body size; the effective doses in younger phantoms are higher than those in the older phantoms, especially below 100 keV. (orig.)

  15. Calculation of age-dependent effective doses for external exposure using the MCNP code

    Energy Technology Data Exchange (ETDEWEB)

    Hung, Tran Van [Research and Development Center for Radiation Technology, ThuDuc, HoChiMinh City (VT)

    2013-07-15

    Age-dependent effective doses for external exposure to photons uniformly distributed in air were calculated. Firstly, organ doses were calculated with a series of age-specific MIRD-5 type phantoms using the Monte Carlo code MCNP. The calculations were performed for mono-energetic photon sources with source energies from 10 keV to 5 MeV and for phantoms of a newborn, 1-, 5-, 10-, and 15-year-olds and an adult. Then, the effective doses to the different age phantoms from the mono-energetic photon sources were estimated based on the obtained organ doses. From the calculated results, it is shown that the effective doses depend on body size; the effective doses in younger phantoms are higher than those in the older phantoms, especially below 100 keV. (orig.)

  16. Disadvantage factors for square lattice cells using a collision probability method

    International Nuclear Information System (INIS)

    Raghav, H.P.

    1976-01-01

    The flux distribution in an infinite square lattice consisting of cylindrical fuel rods and moderator is calculated by using a collision probability method. Neutrons are assumed to be monoenergetic, and the sources as well as the scattering are assumed to be isotropic. Carlvik's method for the calculation of collision probabilities is used. The important features of the method are that the square boundary is treated exactly and that the contribution of the surrounding cells is calculated explicitly. The method is programmed in a computer code CELLC, which carries out the integration by Simpson's rule. The convergence and accuracy of CELLC are assessed by computing disadvantage factors for the well-known Thie lattices and comparing the results with Monte Carlo and other integral transport theory methods used elsewhere. It is demonstrated that it is not correct to apply the white boundary condition in the Wigner-Seitz cell for low pitch and low cross sections. (orig.)

  17. Probabilistic Approach to Conditional Probability of Release of Hazardous Materials from Railroad Tank Cars during Accidents

    Science.gov (United States)

    2009-10-13

    This paper describes a probabilistic approach to estimate the conditional probability of release of hazardous materials from railroad tank cars during train accidents. Monte Carlo methods are used in developing a probabilistic model to simulate head ...

  18. Impact of Age on the Risk of Advanced Colorectal Neoplasia in a Young Population: An Analysis Using the Predicted Probability Model.

    Science.gov (United States)

    Jung, Yoon Suk; Park, Chan Hyuk; Kim, Nam Hee; Lee, Mi Yeon; Park, Dong Il

    2017-09-01

    The incidence of colorectal cancer is decreasing in adults aged ≥50 years and increasing in those aged <50 years. We therefore developed and validated predicted probability models for advanced colorectal neoplasia (ACRN) in a population aged 30-49 years. Of 96,235 participants, 57,635 and 38,600 were included in the derivation and validation cohorts, respectively. The predicted probability model considered age, sex, body mass index, family history of colorectal cancer, and smoking habits, as follows: Y_ACRN = -8.755 + 0.080·X_age - 0.055·X_male + 0.041·X_BMI + 0.200·X_family_history_of_CRC + 0.218·X_former_smoker + 0.644·X_current_smoker. The optimal cutoff value for the predicted probability of ACRN by the Youden index was 1.14%. The area under the receiver-operating characteristic curve (AUROC) values of our model for ACRN were higher than those of the previously established Asia-Pacific Colorectal Screening (APCS), Korean Colorectal Screening (KCS), and Kaminski's scoring models [AUROC (95% confidence interval): model in the current study, 0.673 (0.648-0.697); vs. APCS, 0.588 (0.564-0.611), P < 0.001]. Our predicted probability model can assess the risk of ACRN more accurately than existing models, including the APCS, KCS, and Kaminski's scoring models.
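
    The published score can be evaluated directly; the sketch below assumes a logistic link from Y_ACRN to probability (the abstract gives the linear score and the 1.14% cutoff but does not state the link explicitly), with a hypothetical subject.

    ```python
    import math

    def acrn_probability(age, male, bmi, family_history, former_smoker, current_smoker):
        """Evaluate the published linear score; a logistic link is assumed here."""
        y = (-8.755 + 0.080 * age - 0.055 * male + 0.041 * bmi
             + 0.200 * family_history + 0.218 * former_smoker + 0.644 * current_smoker)
        return 1.0 / (1.0 + math.exp(-y))

    # Hypothetical 45-year-old male current smoker, BMI 26, no family history.
    p = acrn_probability(age=45, male=1, bmi=26, family_history=0,
                         former_smoker=0, current_smoker=1)
    print(f"P(ACRN) = {p:.4f}",
          "-> above 1.14% cutoff" if p >= 0.0114 else "-> below 1.14% cutoff")
    ```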

  19. Collision Probabilities for Finite Cylinders and Cuboids

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I

    1967-05-15

    Analytical formulae have been derived for the collision probabilities of homogeneous finite cylinders and cuboids. The formula for the finite cylinder contains double integrals, and the formula for the cuboid only single integrals. Collision probabilities have been calculated by means of the formulae and compared with values obtained by other authors. It was found that the calculations using the analytical formulae are much quicker and give higher accuracy than Monte Carlo calculations.
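
    The analytical formulae themselves are not reproduced in the record, but the Monte Carlo comparison the authors mention can be sketched for the homogeneous finite cylinder: with a uniform isotropic source, the first-flight collision probability is the volume-and-angle average of 1 - exp(-Σt·s), where s is the distance from the source point to the surface. The inputs Σt, R and H below are illustrative:

```python
import math, random

def distance_to_surface(x, y, z, ux, uy, uz, R, H):
    """Distance from an interior point to the cylinder surface along (ux,uy,uz)."""
    ts = []
    a = ux * ux + uy * uy                 # lateral surface: quadratic in t
    if a > 1e-12:
        b = x * ux + y * uy
        c = x * x + y * y - R * R         # negative for interior points
        ts.append((-b + math.sqrt(b * b - a * c)) / a)  # outgoing root
    if uz > 1e-12:                        # top cap z = H
        ts.append((H - z) / uz)
    elif uz < -1e-12:                     # bottom cap z = 0
        ts.append(-z / uz)
    return min(t for t in ts if t > 0.0)

def collision_probability(sigma_t, R, H, samples=100_000):
    """Monte Carlo first-flight collision probability, uniform isotropic source."""
    hits = 0.0
    for _ in range(samples):
        r = R * math.sqrt(random.random())       # uniform point in the cylinder
        phi = 2.0 * math.pi * random.random()
        x, y, z = r * math.cos(phi), r * math.sin(phi), H * random.random()
        mu = 2.0 * random.random() - 1.0          # isotropic direction
        th = 2.0 * math.pi * random.random()
        s = math.sqrt(1.0 - mu * mu)
        d = distance_to_surface(x, y, z, s * math.cos(th), s * math.sin(th), mu, R, H)
        hits += 1.0 - math.exp(-sigma_t * d)      # collide before escaping
    return hits / samples

print(collision_probability(sigma_t=1.0, R=1.0, H=2.0))
```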

  20. On the calculation of steady-state loss probabilities in the GI/G/2/0 queue

    Directory of Open Access Journals (Sweden)

    Igor N. Kovalenko

    1994-01-01

    Full Text Available This paper considers methods for calculating the steady-state loss probability in the GI/G/2/0 queue. A previous study analyzed this queue in discrete time, and this led to an efficient numerical approximation scheme for continuous-time systems. The primary aim of the present work is to provide an alternative approach by analyzing the GI/ME/2/0 queue; i.e., assuming that the service time can be represented by a matrix-exponential distribution. An efficient computational scheme based on this method is developed and some numerical examples are studied. Some comparisons are made with the discrete-time approach, and the two methods are seen to be complementary.

  1. Probability of detecting perchlorate under natural conditions in deep groundwater in California and the Southwestern United States

    Science.gov (United States)

    Fram, Miranda S.; Belitz, Kenneth

    2011-01-01

    We use data from 1626 groundwater samples collected in California, primarily from public drinking water supply wells, to investigate the distribution of perchlorate in deep groundwater under natural conditions. The wells were sampled for the California Groundwater Ambient Monitoring and Assessment Priority Basin Project. We develop a logistic regression model for predicting probabilities of detecting perchlorate at concentrations greater than multiple threshold concentrations as a function of climate (represented by an aridity index) and potential anthropogenic contributions of perchlorate (quantified as an anthropogenic score, AS). AS is a composite categorical variable including terms for nitrate, pesticides, and volatile organic compounds. Incorporating water-quality parameters in AS permits identification of perturbation of natural occurrence patterns by flushing of natural perchlorate salts from unsaturated zones by irrigation recharge as well as addition of perchlorate from industrial and agricultural sources. The data and model results indicate low concentrations (0.1-0.5 μg/L) of perchlorate occur under natural conditions in groundwater across a wide range of climates, beyond the arid to semiarid climates in which they mostly have been previously reported. The probability of detecting perchlorate at concentrations greater than 0.1 μg/L under natural conditions ranges from 50-70% in semiarid to arid regions of California and the Southwestern United States to 5-15% in the wettest regions sampled (the Northern California coast). The probability of concentrations above 1 μg/L under natural conditions is low (generally <3%).

  2. Neutronic calculation of reactor cells

    International Nuclear Information System (INIS)

    Jaliff, J.O.

    1981-01-01

    Multigroup calculations of cylindrical pin cells were programmed, in Fortran IV, on the basis of collision probabilities in each energy group. A rational approximation to the fuel-to-fuel collision probability in the resonance groups was used. Together with intermediate resonance theory, cross sections corrected for heterogeneity and absorber interactions were obtained. For the optimization of the program, the cell of a BWR reactor was taken as reference. Data for such a cell and the reactor's operating conditions are presented. PINCEL is a fast and flexible program, with checked results, built around a 69-group library. (M.E.L.) [es

  3. The collision probability modules of WIMS-E

    International Nuclear Information System (INIS)

    Roth, M.J.

    1985-04-01

    This report describes how flat source first flight collision probabilities are calculated and used in the WIMS-E modular program. It includes a description of the input to the modules W-FLU, W-THES, W-PIP, W-PERS and W-MERGE. Input to other collision probability modules are described in separate reports. WIMS-E is capable of calculating collision probabilities in a wide variety of geometries, some of them quite complicated. It can also use them for a variety of purposes. (author)

  4. Gender and Ageing at Work in Chile: Employment, Working Conditions, Work-Life Balance and Health of Men and Women in an Ageing Workforce.

    Science.gov (United States)

    Vives, Alejandra; Gray, Nora; González, Francisca; Molina, Agustín

    2018-04-18

    workplace risks continue to be high into old age: intensive work and demanding physical work, especially in men, and the combination of paid and unpaid care work in women, which continues to be high up to the age of 70 years. The health of older workers is better than that of non-working people of the same age, a gap which is markedly larger for women than men and tends to increase among women as they age. Results indicate that Chileans working into old age face precarious jobs with limited protection and several adverse working conditions. Notably, women carry the double burden of paid and unpaid work into their late years. In addition, results suggest that women are affected more profoundly by the healthy worker effect, whereby the health condition determines the probability of finding and keeping a job (also known as a health selection mechanism), an effect which increases as they age. These employment and working conditions indicate that working into old age is not yet sustainable in Chile; this evidence needs to be taken into account in discussions about delaying the retirement age in the country, as well as in incorporating support systems to alleviate the double work burden of ageing working women.

  5. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M algorithm iteration, the conditional probability distribution plays a very important role in achieving a high-quality image. The present work proposes an alternative methodology for the generation of the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and the reciprocity theorem. (author)

  6. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Viana, R.S.; Yoriyaz, H.; Santos, A., E-mail: rodrigossviana@gmail.com, E-mail: hyoriyaz@ipen.br, E-mail: asantos@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M algorithm iteration, the conditional probability distribution plays a very important role in achieving a high-quality image. The present work proposes an alternative methodology for the generation of the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and the reciprocity theorem. (author)

  7. Calculation of age-dependent dose conversion coefficients for radionuclides uniformly distributed in air

    International Nuclear Information System (INIS)

    Hung, Tran Van; Satoh, Daiki; Takahashi, Fumiaki; Tsuda, Shuichi; Endo, Akira; Saito, Kimiaki; Yamaguchi, Yasuhiro

    2005-02-01

    Age-dependent dose conversion coefficients for external exposure to photons emitted by radionuclides uniformly distributed in air were calculated. The size of the source region in the calculation was assumed to be effectively semi-infinite in extent. Firstly, organ doses were calculated with a series of age-specific MIRD-5 type phantoms using the Monte Carlo transport code MCNP. The calculations were performed for mono-energetic photon sources of twelve energies from 10 keV to 5 MeV and for newborn, 1-, 5-, 10- and 15-year-old, and adult phantoms. Then, the effective doses to the different age phantoms from the mono-energetic photon sources were estimated based on the obtained organ doses. The calculated effective doses were used to interpolate the conversion coefficients of the effective doses for 160 radionuclides, which are important for the dose assessment of nuclear facilities. In the calculation, the energies and intensities of photons emitted from the radionuclides were taken from DECDC, a recent compilation of decay data for radiation dosimetry developed at JAERI. The results are tabulated in the form of effective dose per unit concentration and time (Sv per Bq·s·m^-3). (author)
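
    The interpolation step can be sketched as follows (our reading: the nuclide coefficient is the emission-intensity-weighted sum of mono-energetic coefficients, interpolated log-log in photon energy; the grid and values below are placeholders, not the JAERI data):

```python
import numpy as np

# Hypothetical mono-energetic effective-dose coefficients (Sv per Bq s m^-3)
# tabulated on the 10 keV - 5 MeV source grid; values are placeholders.
energies = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 5.0])   # MeV
e_dose   = np.array([2e-18, 8e-18, 3e-17, 1e-16, 4e-16, 1e-15, 1.6e-15])

def nuclide_coefficient(photon_energies, intensities):
    """Effective-dose coefficient for a nuclide: intensity-weighted sum of
    mono-energetic coefficients, interpolated log-log in photon energy."""
    interp = np.exp(np.interp(np.log(photon_energies),
                              np.log(energies), np.log(e_dose)))
    return float(np.sum(np.asarray(intensities) * interp))

# Example: a hypothetical nuclide emitting 0.662 MeV photons at 85% intensity.
print(nuclide_coefficient([0.662], [0.85]))
```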

  8. RZ calculations for self shielded multigroup cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Li, M.; Sanchez, R.; Zmijarevic, I.; Stankovski, Z. [Commissariat a l' Energie Atomique CEA, Direction de l' Energie Nucleaire, DEN/DM2S/SERMA/LENR, 91191 Gif-sur-Yvette Cedex (France)

    2006-07-01

    A collision probability method has been implemented for RZ geometries. The method accounts for white albedo, specular and translation boundary condition on the top and bottom surfaces of the geometry and for a white albedo condition on the outer radial surface. We have applied the RZ CP method to the calculation of multigroup self shielded cross sections for Gadolinia absorbers in BWRs. (authors)

  9. RZ calculations for self shielded multigroup cross sections

    International Nuclear Information System (INIS)

    Li, M.; Sanchez, R.; Zmijarevic, I.; Stankovski, Z.

    2006-01-01

    A collision probability method has been implemented for RZ geometries. The method accounts for white albedo, specular and translation boundary condition on the top and bottom surfaces of the geometry and for a white albedo condition on the outer radial surface. We have applied the RZ CP method to the calculation of multigroup self shielded cross sections for Gadolinia absorbers in BWRs. (authors)

  10. Upgrading Probability via Fractions of Events

    Directory of Open Access Journals (Sweden)

    Frič Roman

    2016-08-01

    Full Text Available The influence of “Grundbegriffe” by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory “calling for” an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures and observables, the dual maps to random variables) have very different “mathematical nature”. Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events, elementary category theory, and covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing “fractions” of classical random events, and we upgrade the notions of probability measure and random variable.

  11. Nonlinear optimization method of ship floating condition calculation in wave based on vector

    Science.gov (United States)

    Ding, Ning; Yu, Jian-xing

    2014-08-01

    The floating condition of a ship in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of vector operations. The resulting formulation is a nonlinear optimization problem, which can be solved using the penalty function method with constant coefficients, and the solution process is accelerated by dichotomy. During the solution process, the ship's displacement and buoyancy centre are calculated by integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord length theory in order to determine the displacement, the buoyancy centre and the waterline. The draught forming the waterline at each station can be found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship floating condition in regular waves as well as simplify the calculation and improve the computational efficiency and the precision of the results.

  12. Beta-delayed fission and neutron emission calculations for the actinide cosmochronometers

    International Nuclear Information System (INIS)

    Meyer, B.S.; Howard, W.M.; Mathews, G.J.; Takahashi, K.; Moeller, P.; Leander, G.A.

    1989-01-01

    The Gamow-Teller beta-strength distributions for 19 neutron-rich nuclei, including ten of interest for the production of the actinide cosmochronometers, are computed microscopically with a code that treats nuclear deformation explicitly. The strength distributions are then used to calculate the beta-delayed fission, neutron emission, and gamma deexcitation probabilities for these nuclei. Fission is treated both in the complete damping and WKB approximations for penetrabilities through the nuclear potential-energy surface. The resulting fission probabilities differ by factors of 2 to 3 or more from the results of previous calculations using microscopically computed beta-strength distributions around the region of greatest interest for production of the cosmochronometers. The indications are that a consistent treatment of nuclear deformation, fission barriers, and beta-strength functions is important in the calculation of delayed fission probabilities and the production of the actinide cosmochronometers. Since we show that the results are very sensitive to relatively small changes in model assumptions, large chronometric ages for the Galaxy based upon high beta-delayed fission probabilities derived from an inconsistent set of nuclear data calculations must be considered quite uncertain

  13. URR [Unresolved Resonance Region] computer code: A code to calculate resonance neutron cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fissile and fertile nuclides

    International Nuclear Information System (INIS)

    Leal, L.C.; de Saussure, G.; Perez, R.B.

    1990-01-01

    The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables

  14. Fitness prospects: effects of age, sex and recruitment age on reproductive value in a long-lived seabird.

    Science.gov (United States)

    Zhang, He; Rebke, Maren; Becker, Peter H; Bouwhuis, Sandra

    2015-01-01

    Reproductive value is an integrated measure of survival and reproduction fundamental to understanding life-history evolution and population dynamics, but little is known about intraspecific variation in reproductive value and factors explaining such variation, if any. By applying generalized additive mixed models to longitudinal individual-based data of the common tern Sterna hirundo, we estimated age-specific annual survival probability, breeding probability and reproductive performance, based on which we calculated age-specific reproductive values. We investigated effects of sex and recruitment age (RA) on each trait. We found age effects on all traits, with survival and breeding probability declining with age, while reproductive performance first improved with age before levelling off. We only found a very small, marginally significant, sex effect on survival probability, but evidence for decreasing age-specific breeding probability and reproductive performance with RA. As a result, males had slightly lower age-specific reproductive values than females, while birds of both sexes that recruited at the earliest ages of 2 and 3 years (i.e. 54% of the tern population) had somewhat higher fitness prospects than birds recruiting at later ages. While the RA effects on breeding probability and reproductive performance were statistically significant, these effects were not large enough to translate to significant effects on reproductive value. Age-specific reproductive values provided evidence for senescence, which came with fitness costs in a range of 17-21% for the sex-RA groups. Our study suggests that intraspecific variation in reproductive value may exist, but that, in the common tern, the differences are small. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.

  15. On the Possibility of Assigning Probabilities to Singular Cases, or: Probability Is Subjective Too!

    Directory of Open Access Journals (Sweden)

    Mark R. Crovelli

    2009-06-01

    Full Text Available Both Ludwig von Mises and Richard von Mises claimed that numerical probability could not be legitimately applied to singular cases. This paper challenges this aspect of the von Mises brothers’ theory of probability. It is argued that their denial that numerical probability could be applied to singular cases was based solely upon Richard von Mises’ exceptionally restrictive definition of probability. This paper challenges that definition by arguing that the definition of probability necessarily depends upon whether the world is governed by time-invariant causal laws. It is argued that if the world is governed by time-invariant causal laws, a subjective definition of probability must be adopted. It is further argued that both the nature of human action and the relative frequency method for calculating numerical probabilities presuppose that the world is indeed governed by time-invariant causal laws. It is finally argued that the subjective definition of probability undercuts the von Mises claim that numerical probability cannot legitimately be applied to singular, non-replicable cases.

  16. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
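
    The location-scale invariance can be illustrated with a short simulation (a sketch under assumed settings: a log-normal risk factor, a threshold set at the estimated upper quantile, and our own choices of sample size and nominal level; scipy assumed). The realized expected failure frequency is the same for any true parameters and exceeds the nominal level:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def realized_failure_prob(mu, sigma, n=30, nominal=0.01, reps=20_000):
    """Expected failure frequency when the control threshold is set at the
    estimated (1 - nominal) quantile of a log-normal fitted to n observations."""
    z = norm.ppf(1.0 - nominal)
    total = 0.0
    for _ in range(reps):
        logx = rng.normal(mu, sigma, size=n)            # log-scale data sample
        threshold = logx.mean() + z * logx.std(ddof=1)  # estimated control level
        total += 1.0 - norm.cdf((threshold - mu) / sigma)  # true exceedance prob.
    return total / reps

# Identical (up to Monte Carlo noise) for any true (mu, sigma), and above the
# nominal 1%: the location-scale invariance the article exploits.
print(realized_failure_prob(0.0, 1.0))
print(realized_failure_prob(5.0, 2.0))
```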

  17. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence. Discrete Distributions: Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation. Continuous Distributions: The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  18. Collective fluctuations in magnetized plasma: Transition probability approach

    International Nuclear Information System (INIS)

    Sosenko, P.P.

    1997-01-01

    Statistical plasma electrodynamics is elaborated with special emphasis on the transition probability approach and quasi-particles, and on modern applications to magnetized plasmas. Fluctuation spectra in the magnetized plasma are calculated in the range of low frequencies (with respect to the cyclotron one), and the conditions for the transition from incoherent to collective fluctuations are established. The role of finite-Larmor-radius effects and particle polarization drift in such a transition is explained. The ion collective features in fluctuation spectra are studied. 63 refs., 30 figs

  19. Evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity.

    Science.gov (United States)

    Du, Yuanwei; Guo, Yubin

    2015-01-01

    The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used in training Bayesian networks in order to help predict or diagnose diseases, but the CPT in Bayesian networks is usually irrational or ineffective because realistic constraints are ignored, especially in multimorbidity. In order to solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, on the basis of which an evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity is presented step by step. A numerical multimorbidity example is used to demonstrate the method and its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his or her knowledge or experience. Our method is more effective than existing methods at extracting expert inference data accurately and fusing it for constructing CPTs in a Bayesian network of multimorbidity.

  20. Computing exact bundle compliance control charts via probability generating functions.

    Science.gov (United States)

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
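
    The distribution in question is that of a sum of independent Bernoulli indicators with unequal compliance probabilities (a Poisson binomial). A minimal sketch of the PGF idea: the pmf of the compliance count falls out of multiplying the per-item factors (1 - p + p·s), i.e., repeated polynomial convolution (the bundle probabilities below are illustrative):

```python
import numpy as np

def exact_pmf(probs):
    """Exact pmf of a sum of independent Bernoulli(p_i) variables.

    The PGF of the sum is the product of the factors (1 - p + p*s);
    convolving the coefficient vectors yields the exact distribution
    with no series approximation."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, np.array([1.0 - p, p]))
    return pmf

# Example: a 4-item bundle with unequal compliance probabilities.
pmf = exact_pmf([0.95, 0.90, 0.85, 0.99])
print(pmf)              # P(0 compliant), ..., P(4 compliant)
print(pmf[-1])          # probability of full-bundle compliance
print(np.cumsum(pmf))   # exact tail probabilities for control-chart limits
```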

  1. Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.

    Science.gov (United States)

    Allen, Jeff; Ghattas, Andrew

    2016-06-01

    Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
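
    The authors' estimator itself is not reproduced in the record, but the Bayes'-theorem skeleton it builds on is simple: under no copying, the probability of so extreme a statistic is the p value itself, so given a prior rate of copying and a detection power the posterior follows directly (all inputs below are assumptions for illustration):

```python
def posterior_copy_probability(p_value, power, prior):
    """Posterior probability of copying given that the answer-copying
    statistic reached a value with the observed p value.

    prior -- assumed population rate of copying
    power -- assumed P(statistic at least this extreme | copying)
    Under no copying, that probability is the p value itself."""
    num = prior * power
    return num / (num + (1.0 - prior) * p_value)

# Even a striking match (p = 0.0001) leaves substantial doubt when
# copying is rare and detection power is moderate.
print(posterior_copy_probability(p_value=1e-4, power=0.5, prior=1e-4))
```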

  2. URR [Unresolved Resonance Region] computer code: A code to calculate resonance neutron cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fissile and fertile nuclides

    International Nuclear Information System (INIS)

    Leal, L.C.; de Saussure, G.; Perez, R.B.

    1989-01-01

    The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self- indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler broadened cross-section. The various shelf-shielded factors are computed numerically as Lebesgue integrals over the cross-section probability tables. 6 refs

  3. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.

    Directory of Open Access Journals (Sweden)

    Michael R W Dawson

    Full Text Available Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent's environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned.

  4. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability

    Science.gov (United States)

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent’s environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned. PMID:28212422

  5. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.

    Science.gov (United States)

    Dawson, Michael R W; Gupta, Maya

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent's environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned.
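
    The phenomenon can be sketched for a single cue (a minimal sketch, assuming a logistic output unit trained with the delta rule on a Bernoulli reward signal; the learning rate and trial count are arbitrary). The output activity converges to the past reward probability:

```python
import math, random
random.seed(1)

REWARD_PROB = 0.7        # probability that the cue is rewarded
w, b, lr = 0.0, 0.0, 0.05

for _ in range(20_000):
    x = 1.0                                        # present the single cue
    out = 1.0 / (1.0 + math.exp(-(w * x + b)))     # logistic output unit
    r = 1.0 if random.random() < REWARD_PROB else 0.0
    err = r - out                                  # delta rule on the reward
    w += lr * err * x
    b += lr * err

out = 1.0 / (1.0 + math.exp(-(w + b)))
print(f"output activity after training: {out:.3f} (reward probability 0.7)")
```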

  6. Probability of crack-initiation and application to NDE

    Energy Technology Data Exchange (ETDEWEB)

    Prantl, G. [Nuclear Safety Inspectorate HSK (Switzerland)]

    1988-12-31

    Fracture toughness is a property with a certain variability. When a statistical distribution is assumed, the probability of crack initiation may be calculated for a given problem defined by its geometry and the applied stress. Experiments have shown that cracks which experience a certain small amount of ductile growth can reliably be detected by acoustic emission measurements. The probability of crack detection by AE techniques may be estimated using this experimental finding and the calculated probability of crack initiation. (author).
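
    The calculation can be sketched under an assumed three-parameter Weibull distribution for fracture toughness (a common statistical choice, not necessarily the author's; all parameter values are illustrative). The initiation probability is P(K_Ic < K_applied):

```python
import math

def crack_initiation_probability(k_applied, k_min=20.0, scale=60.0, shape=4.0):
    """P(initiation) = P(K_Ic < K_applied) for a three-parameter Weibull
    toughness distribution (units MPa*sqrt(m); parameters illustrative)."""
    if k_applied <= k_min:
        return 0.0
    return 1.0 - math.exp(-((k_applied - k_min) / scale) ** shape)

for k in (30.0, 60.0, 90.0):
    print(k, crack_initiation_probability(k))
```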

  7. Improved process for calculating the probability of being hit by crashing aircraft by the Balfanz-model

    International Nuclear Information System (INIS)

    Hennings, W.

    1988-01-01

    For calculating the probability of different buildings being hit by crashing military aircraft, a model was introduced that has already been used in conventional fields. In the context of converting the research reactor BER II, this model was also used in the nuclear field. The report introduces this model and shows its application to a vertical cylinder as an example. Compared to the previous model, an exact and simpler solution of the model ansatz for determining the shadow surface of differently shaped buildings is derived. The problems with the distribution of crashes given by the previous model are treated via the vertical angle, and an approach to solving these problems is given. (orig./HP) [de

  8. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

    The collision probability method widely used in solving problems of neutron transport in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones, and also taking the anisotropy of scattering into account, greatly increases the scope of the calculations. In order to reduce the computation time, the transmission probability method is suggested for flux calculations in one-dimensional cylindrical geometry, taking the scattering anisotropy into account. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method makes it possible to represent completely the angular and spatial dependences of the neutron distributions without increasing the scope of the calculations. The method is especially effective in solving multi-group problems

  9. VISA-2, Reactor Vessel Failure Probability Under Thermal Shock

    International Nuclear Information System (INIS)

    Simonen, F.; Johnson, K.

    1992-01-01

    1 - Description of program or function: VISA2 (Vessel Integrity Simulation Analysis) was developed to estimate the failure probability of nuclear reactor pressure vessels under pressurized thermal shock conditions. The deterministic portion of the code performs heat transfer, stress, and fracture mechanics calculations for a vessel subjected to a user-specified temperature and pressure transient. The probabilistic analysis performs a Monte Carlo simulation to estimate the probability of vessel failure. Parameters such as initial crack size and position, copper and nickel content, fluence, and the fracture toughness values for crack initiation and arrest are treated as random variables. Linear elastic fracture mechanics methods are used to model crack initiation and growth, including cladding effects in the heat transfer, stress, and fracture mechanics calculations. The simulation procedure treats an entire vessel and recognizes that more than one flaw can exist in a given vessel. The flaw model allows random positioning of the flaw within the vessel wall thickness, and the user can specify either flaw length or length-to-depth aspect ratio for crack initiation and arrest predictions. The flaw size distribution can be adjusted on the basis of different in-service inspection techniques and inspection conditions. The toughness simulation model includes a menu of alternative equations for predicting the shift in the reference temperature of the nil-ductility transition. 2 - Method of solution: The solution method uses closed-form equations for temperatures, stresses, and stress intensity factors. A polynomial fitting procedure approximates the specified pressure and temperature transient. Failure probabilities are calculated by a Monte Carlo simulation. 3 - Restrictions on the complexity of the problem: A maximum of 30 welds. VISA2 models only the belt-line (cylindrical) region of a reactor vessel. The stresses are a function of the radial (through-wall) coordinate only

  10. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
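
    The paper's sample code is in R; the same idea can be sketched in Python (scikit-learn assumed; data and settings are ours). Fitting a regression forest to the 0/1 response makes the prediction a direct estimate of P(y = 1 | x):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic binary outcome with a known conditional probability p(x).
n = 5000
X = rng.uniform(-2, 2, size=(n, 2))
p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = (rng.uniform(size=n) < p_true).astype(float)

# Treating the 0/1 response as a regression target makes the forest a
# "probability machine": its predictions estimate P(y=1 | x) directly.
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X, y)

p_hat = rf.predict(X[:5])
print(np.column_stack([p_true[:5], p_hat]))  # estimates track true probabilities
```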

  11. Concepts and Bounded Rationality: An Application of Niestegge's Approach to Conditional Quantum Probabilities

    Science.gov (United States)

    Blutner, Reinhard

    2009-03-01

    Recently, Gerd Niestegge developed a new approach to quantum mechanics via conditional probabilities, developing the well-known proposal to consider the Lüders-von Neumann measurement as a non-classical extension of probability conditionalization. I will apply his powerful and rigorous approach to the treatment of concepts, using a geometrical model of meaning. In this model, instances are treated as vectors of a Hilbert space H. In the present approach there are at least two possibilities to form categories. The first possibility sees categories as a mixture of their instances (described by a density matrix). In the simplest case we get the classical probability theory, including the Bayesian formula. The second possibility sees categories as formed by a distinctive prototype which is the superposition of the (weighted) instances. The construction of prototypes can be seen as transferring a mixed quantum state into a pure quantum state, freezing the probabilistic characteristics of the superposed instances into the structure of the formed prototype. Closely related to the idea of forming concepts by prototypes is the existence of interference effects. Such interference effects are typically found in macroscopic quantum systems, and I will discuss them in connection with several puzzles of bounded rationality. The present approach nicely generalizes earlier proposals made by authors such as Diederik Aerts, Andrei Khrennikov, Ricardo Franco, and Jerome Busemeyer. Concluding, I will suggest that an active dialogue between cognitive approaches to logic and semantics and the modern approach of quantum information science is mandatory.

  12. Total deposition of inhaled particles related to age: comparison with age-dependent model calculations

    International Nuclear Information System (INIS)

    Becquemin, M.H.; Bouchikhi, A.; Yu, C.P.; Roy, M.

    1991-01-01

    To compare experimental data with age-dependent model calculations, total airway deposition of polystyrene aerosols (1, 2.05 and 2.8 μm aerodynamic diameter) was measured in ten adults, twenty children aged 12 to 15 years, ten children aged 8 to 12, and eleven under 8 years old. Ventilation was controlled, and breathing patterns were appropriate for each age, either at rest or at light exercise. Individually, deposition percentages increased with particle size and also from rest to exercise, except in children under 12 years, in whom they decreased from 20-21.5 to 14-14.5 for 1 μm particles and from 36.8-36.9 to 32.2-33.1 for 2.05 μm particles. Comparisons with the age-dependent model showed that, at rest, the observed data concerning children agreed with those predicted and were close to the adults' values, when the latter were higher than predicted. At exercise, child data were lower than predicted and lower than adult experimental data, when the latter agreed fairly well with the model. (author)

  13. Bayesian noninferiority test for 2 binomial probabilities as the extension of Fisher exact test.

    Science.gov (United States)

    Doi, Masaaki; Takahashi, Fumihiro; Kawasaki, Yohei

    2017-12-30

    Noninferiority trials have recently gained importance for clinical trials of drugs and medical devices. In these trials, most statistical methods have been used from a frequentist perspective, and historical data have been used only for the specification of the noninferiority margin Δ>0. In contrast, Bayesian methods, which have been studied recently, are advantageous in that they can use historical data to specify prior distributions and are expected to enable more efficient decision making than frequentist methods by borrowing information from historical trials. In the case of noninferiority trials for response probabilities π1, π2, Bayesian methods evaluate the posterior probability of H1: π1 > π2 - Δ being true. To numerically calculate such a posterior probability, the complicated Appell hypergeometric function or approximation methods are used. Further, the theoretical relationship between Bayesian and frequentist methods is unclear. In this work, we give the exact expression of the posterior probability of noninferiority under some mild conditions and propose a Bayesian noninferiority test framework which can flexibly incorporate historical data by using the conditional power prior. Further, we show the relationship between the Bayesian posterior probability and the P value of the Fisher exact test. From this relationship, our method can be interpreted as a Bayesian noninferiority extension of the Fisher exact test, and we can treat superiority and noninferiority in the same framework. Our method is illustrated through Monte Carlo simulations evaluating its operating characteristics, an application to real HIV clinical trial data, and a sample size calculation using historical data. Copyright © 2017 John Wiley & Sons, Ltd.
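
    A minimal sketch of the posterior quantity being evaluated (independent Beta posteriors from uniform priors; the paper's conditional power prior is omitted, and the trial data below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_noninferiority(x1, n1, x2, n2, delta, draws=1_000_000):
    """Posterior P(pi1 > pi2 - delta) with independent Beta(1,1) priors.

    x1/n1: responders/size in the test arm; x2/n2: in the control arm."""
    pi1 = rng.beta(x1 + 1, n1 - x1 + 1, draws)
    pi2 = rng.beta(x2 + 1, n2 - x2 + 1, draws)
    return float(np.mean(pi1 > pi2 - delta))

# Example: 82/100 vs. 85/100 responders with a 10% noninferiority margin.
print(posterior_noninferiority(82, 100, 85, 100, delta=0.10))
```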

  14. Adjustment of the thermohydraulic NUCIRC 2.0 code to the present aging conditions of the Embalse nuclear power plant

    International Nuclear Information System (INIS)

    Rabiti, Arnaldo; Coutsiers, Ernesto; Schivo, Miguel; Mazanttini, Oscar

    2003-01-01

    This work gives a description of the adjustment process of the NUCIRC code to the actual aging conditions of the Embalse nuclear power plant. For this adjustment, the flows of the fuel channels of the primary heat transport system (PHTS) are calculated using the channel heat balance flow (CHBF) methodology. Then the roughness and the localized pressure losses are modified in the NUCIRC code for different groups of channels. These adjustments are made so that the channel flows calculated with NUCIRC fit, by regions, the CHBF flows. The fitting results in a discrepancy by regions of less than 0.1% and an average quadratic error of approximately 5%. These values indicate that the NUCIRC code is properly adjusted for critical channel power calculations and aging tracking of the PHTS. (author)

  15. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  16. Dynamic Forecasting Conditional Probability of Bombing Attacks Based on Time-Series and Intervention Analysis.

    Science.gov (United States)

    Li, Shuying; Zhuang, Jun; Shen, Shifei

    2017-07-01

    In recent years, various types of terrorist attacks have occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBA) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, and it remains 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
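
    The intervention-analysis step can be sketched as an ARIMA model with an intervention regressor (statsmodels assumed; the ARIMA order, the step-type intervention, and the synthetic data are our choices, not the authors'):

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)

# Synthetic monthly series standing in for the CPBA, with a level shift
# at month 84 (mimicking the late-2011 jump described in the article).
n, t0 = 120, 84
y = 0.30 + 0.02 * rng.standard_normal(n)
y[t0:] += 0.09

step = np.zeros((n, 1))
step[t0:] = 1.0            # intervention regressor: permanent level shift

fit = SARIMAX(y, exog=step, order=(1, 0, 0)).fit(disp=False)
print(fit.params)          # the exog coefficient estimates the shift size

# Forecast 12 months ahead with the intervention still "on".
print(fit.forecast(steps=12, exog=np.ones((12, 1))))
```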

  17. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    Science.gov (United States)

    Guler Yigitoglu, Askin

    multi-state physics based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the dependence of the transition rates on the operational and maintenance history. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: (i) defining a process for selecting critical passive components and related aging mechanisms; (ii) aging model selection; (iii) calculating the probability that aging would cause the component to fail; (iv) uncertainty/sensitivity analyses; (v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures; and (vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.

  18. Ageing evaluation model of nuclear reactors structural elements

    International Nuclear Information System (INIS)

    Ziliukas, A.; Jutas, A.; Leisis, V.

    2002-01-01

    In this article the estimation of the non-failure probability of nuclear reactor structural elements subject to random faults is presented. Ageing is certainly a significant factor in determining the limits of nuclear plant lifetime or life extensions. Usually the non-failure probability is characterized by the failure intensity, which is characteristic of the ageing of structural elements in nuclear reactors. In practice the reliability is incorrectly overestimated because not all failures are recorded and accumulated. Therefore, a methodology using a refined parameter of the failure flow is described. A comparison of the non-failure probability and the failure flow is carried out. The calculation of these parameters in a practical example is shown too. (author)

  19. MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM

    Directory of Open Access Journals (Sweden)

    A. G. Korobeynikov

    2015-05-01

    Full Text Available Subject of research. The paper deals with a mathematical model for the assessment of information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here information risks mean the danger of losses or damage resulting from the company's use of information technologies. Method. The solution is based on the ideology of the transport problem in a stochastic statement, drawing on methods of mathematical modeling theory, graph theory, probability theory, and Markov chains. The mathematical model is created in several stages. At the initial stage, the capacity of the different sites is calculated as a function of time on the basis of information received from the information and logistics system; the weight matrix is formed and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. Their solutions give the probabilities that resources are located at a given vertex as functions of time. At the third stage, the overall probability of passing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. The information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of passing the whole route. In this case the information risk is measured in units of damage, corresponding to the monetary unit in which the information and logistics system operates. Main results. The operation of the presented mathematical model is shown on a concrete example of transportation of material resources, where the places of shipment and delivery, the routes and their capacities, the greatest possible damage, and the admissible risk are specified. The calculations presented on a diagram showed
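
    The first and third stages can be sketched as follows (a simplification: the Kolmogorov-equation stage is replaced by fixed per-arc passage probabilities, and the network, probabilities, and damage figure are invented):

```python
import heapq

def dijkstra(graph, src, dst):
    """Minimum-weight route; graph[u] is a list of (neighbor, weight) pairs."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical transport network: arcs carry travel costs and probabilities
# of successful passage.
graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 2.0)]}
p_arc = {("A", "B"): 0.98, ("B", "C"): 0.95, ("C", "D"): 0.97,
         ("A", "C"): 0.90, ("B", "D"): 0.92}

route = dijkstra(graph, "A", "D")
p_route = 1.0
for u, v in zip(route, route[1:]):
    p_route *= p_arc[(u, v)]       # multiplication theorem of probabilities

max_damage = 100_000.0             # greatest possible damage, in money units
print(route, p_route, max_damage * p_route)  # risk as defined in the abstract
```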

  20. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  1. Failure frequencies and probabilities applicable to BWR and PWR piping

    International Nuclear Information System (INIS)

    Bush, S.H.; Chockie, A.D.

    1996-03-01

    This report deals with failure probabilities and failure frequencies of nuclear plant piping and the failure frequencies of flanges and bellows. Piping failure probabilities are derived from Piping Reliability Analysis Including Seismic Events (PRAISE) computer code calculations based on fatigue and intergranular stress corrosion as failure mechanisms. Values for both failure probabilities and failure frequencies are cited from several sources to yield a better evaluation of the spread in mean and median values as well as the widths of the uncertainty bands. A general conclusion is that the numbers from WASH-1400 often used in PRAs are unduly conservative. Failure frequencies for both leaks and large breaks tend to be higher than would be calculated using the failure probabilities, primarily because the frequencies are based on a relatively small number of operating years. Also, failure probabilities are substantially lower because of the probability distributions used in PRAISE calculations. A general conclusion is that large LOCA probability values calculated using PRAISE will be quite small, on the order of less than 1E-8 per year (<1E-8/year). The values in this report should be recognized as having inherent limitations and should be considered as estimates and not absolute values. 24 refs

  2. Logic, probability, and human reasoning.

    Science.gov (United States)

    Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P

    2015-04-01

    This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Calculation of Environmental Conditions in NEK Intermediate Building Following HELB

    International Nuclear Information System (INIS)

    Grgic, D.; Spalj, S.; Basic, I.

    1998-01-01

    The purpose of Equipment Qualification (EQ) in nuclear power plants is to ensure the capability of safety related equipment to perform its function on demand under postulated service conditions, including harsh accident environments (e.g. Loss of Coolant Accident - LOCA, High Energy Line Break - HELB). The determination of the EQ conditions and zones is one of the basic steps within the overall EQ project. The EQ parameters (temperature, pressure, relative humidity, chemical spray, submergence, radiation) should be defined for all locations of the plant containing equipment important to safety. This paper presents the calculation of thermohydraulic environmental parameters (pressure and temperature) inside the Intermediate Building (IB) of Krsko NPP after the postulated HELB. The RELAP5/mod2 computer code was used for the determination of HELB mass and energy release, and the GOTHIC computer code was used to calculate pressure and temperature profiles inside the NPP Krsko IB. (author)

  4. Working conditions in mid-life and mental health in older ages.

    Science.gov (United States)

    Wahrendorf, Morten; Blane, David; Bartley, Mel; Dragano, Nico; Siegrist, Johannes

    2013-03-01

    This article illustrates the importance of previous working conditions during mid-life (between 40 and 55) for mental health among older retired men and women (60 or older) across 13 European countries. We link information on health from the second wave (2006-2007) of the Survey of Health, Ageing and Retirement in Europe (SHARE) with information on respondents' working life collected retrospectively in the SHARELIFE interview (2008-2009). To measure working conditions, we rely on core assumptions of existing theoretical models of work stress (the demand-control-support and the effort-reward imbalance model) and distinguish four types of unhealthy working conditions: (1) a stressful psychosocial work environment (as assessed by the two work stress models), (2) a disadvantaged occupational position throughout the whole period of mid-life, (3) experience of involuntary job loss, and (4) exposure to job instability. Health after labour market exit is measured by depressive symptoms, assessed with the EURO-D depression scale. Main results show that men and women who experienced psychosocial stress at work or had low occupational positions during mid-life had significantly higher probabilities of high depressive symptoms during retirement. Additionally, men with unstable working careers and an involuntary job loss were at higher risk of reporting high depressive symptoms in later life. These associations remain significant after controlling for workers' health and social position prior to mid-life. These findings support the assumption that the mental health of retirees who experienced poor working conditions during mid-life is impaired. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

  6. Simulations of Probabilities for Quantum Computing

    Science.gov (United States)

    Zak, M.

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed.

  7. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

    The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of demands during the test period with no failures and T is the test period with no failures. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
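
    A minimal sketch of the zero-failure Bayesian update (not the authors' exact derivation; the test time is assumed): with a flat prior on the failure rate lam, the likelihood of observing no failures over time T is exp(-lam*T), so the posterior is Gamma(1, scale=1/T) and its mean reproduces the 1/T rule of thumb.

```python
from scipy import stats

T = 5000.0  # cumulative failure-free test time, hours (assumed)

# Flat prior on the rate lam; likelihood of zero failures in time T is
# exp(-lam*T), so the posterior is Gamma(shape=1, scale=1/T).
posterior = stats.gamma(a=1.0, scale=1.0 / T)

print("posterior mean rate:", posterior.mean())    # 1/T rule of thumb
print("95% upper bound:", posterior.ppf(0.95))     # about 3/T
# The demand-based case is analogous: zero failures in N demands with a
# Beta(1, 1) prior gives a posterior mean near the 1/N rule of thumb.
```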

  8. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)

  9. A hierarchical procedure for calculation of risk importance measures

    International Nuclear Information System (INIS)

    Poern, K.; Dinsmore, S.C.

    1987-01-01

    Starting with a general importance definition based on conditional probabilities, a hierarchical process for calculating risk importance measures from a PSA's numerical results is developed. By the appropriate choice of events in the general definition, measures such as the risk achievement worth and the risk reduction worth can be calculated without requantifying the PSA's models. Required approximations are clearly defined and the subsequent constraints on the applicability of the process discussed. (orig.)
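
    The two measures named above follow directly from conditional risk evaluations. A toy sketch under an assumed two-cut-set risk model (not the paper's hierarchical procedure) shows the pattern: RAW divides the risk with the event set to certain by the baseline risk, and RRW divides the baseline by the risk with the event set to impossible.

```python
def risk(p):
    """Toy point-estimate risk model with two minimal cut sets (assumed):
    {pump AND valve} and {diesel}."""
    return p["pump"] * p["valve"] + p["diesel"]

base = {"pump": 1e-2, "valve": 5e-3, "diesel": 1e-4}
r0 = risk(base)

def raw(event):
    """Risk achievement worth: conditional risk with the event certain."""
    return risk({**base, event: 1.0}) / r0

def rrw(event):
    """Risk reduction worth: baseline over risk with the event eliminated."""
    return r0 / risk({**base, event: 0.0})

for e in base:
    print(f"{e:6s}  RAW = {raw(e):9.2f}   RRW = {rrw(e):6.2f}")
```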

  10. Accelerated aging of AP/HTPB propellants and the influence of various environmental aging conditions

    NARCIS (Netherlands)

    Keizers, H.L.J.

    1995-01-01

    Preliminary results on accelerated aging of lab-scale produced AP/HTPB propellant and propellants from dissected rocket motors are discussed, including aging logic, storage conditions, test techniques and results on mechanical, ballistic and safety testing. The main aging effect observed was

  11. Prediction suppression in monkey inferotemporal cortex depends on the conditional probability between images.

    Science.gov (United States)

    Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R

    2016-01-01

    When monkeys view two images in fixed sequence repeatedly over days and weeks, neurons in area TE of the inferotemporal cortex come to exhibit prediction suppression. The trailing image elicits only a weak response when presented following the leading image that preceded it during training. Induction of prediction suppression might depend either on the contiguity of the images, as determined by their co-occurrence and captured in the measure of joint probability P(A,B), or on their contingency, as determined by their correlation and as captured in the measures of conditional probability P(A|B) and P(B|A). To distinguish between these possibilities, we measured prediction suppression after imposing training regimens that held P(A,B) constant but varied P(A|B) and P(B|A). We found that reducing either P(A|B) or P(B|A) during training attenuated prediction suppression as measured during subsequent testing. We conclude that prediction suppression depends on contingency, as embodied in the predictive relations between the images, and not just on contiguity, as embodied in their co-occurrence. Copyright © 2016 the American Physiological Society.

  12. Incorporation of Collision Probability Method in STREAM to Consider Non-uniform Material Composition in Fuel Subregions

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung

    2016-01-01

    STREAM uses a pin-based slowing-down method (PSM) which solves pointwise-energy slowing-down problems with a sub-divided fuel pellet, and shows great performance in calculating effective cross sections (XS). Various issues in the conventional resonance treatment methods (i.e., approximations on the resonance scattering source, the resonance interference effect, and the intra-pellet self-shielding effect) were successfully resolved by PSM. PSM assumes that a fuel rod has a uniform material composition and temperature, even though PSM calculates spatially dependent effective XSs of fuel subregions. When depletion calculations or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on position. It has been reported that the treatment of a distributed temperature is important for calculating an accurate fuel temperature coefficient (FTC). In order to avoid this approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method, PSM, used in the transport code STREAM has thus been enhanced to accurately consider a non-uniform material condition. The method incorporates CPM in computing the collision probabilities of an isolated fuel pin. In numerical tests with pin-cell problems, STREAM with the new option produced very accurate multiplication factors and FTCs, with differences from the references of less than 83 pcm and 1.43%, respectively. The original PSM showed larger differences than the proposed method but still high accuracy.

  13. Incorporation of Collision Probability Method in STREAM to Consider Non-uniform Material Composition in Fuel Subregions

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of)

    2016-10-15

    STREAM uses a pin-based slowing-down method (PSM) which solves pointwise-energy slowing-down problems with a sub-divided fuel pellet, and shows great performance in calculating effective cross sections (XS). Various issues in the conventional resonance treatment methods (i.e., approximations on the resonance scattering source, the resonance interference effect, and the intra-pellet self-shielding effect) were successfully resolved by PSM. PSM assumes that a fuel rod has a uniform material composition and temperature, even though PSM calculates spatially dependent effective XSs of fuel subregions. When depletion calculations or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on position. It has been reported that the treatment of a distributed temperature is important for calculating an accurate fuel temperature coefficient (FTC). In order to avoid this approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method, PSM, used in the transport code STREAM has thus been enhanced to accurately consider a non-uniform material condition. The method incorporates CPM in computing the collision probabilities of an isolated fuel pin. In numerical tests with pin-cell problems, STREAM with the new option produced very accurate multiplication factors and FTCs, with differences from the references of less than 83 pcm and 1.43%, respectively. The original PSM showed larger differences than the proposed method but still high accuracy.

  14. Simple method of calculating the transient thermal performance of composite material and its applicable condition

    Institute of Scientific and Technical Information of China (English)

    张寅平; 梁新刚; 江忆; 狄洪发; 宁志军

    2000-01-01

    The degree of mixing of a composite material is defined, and the condition for using the effective thermal diffusivity to calculate the transient thermal performance of the composite is studied. The analytical result shows that, for a prescribed temperature precision, there is a condition under which the transient temperature distribution in the composite material can be calculated using the effective thermal diffusivity. As an illustration, the condition is presented for a composite material whose end temperatures are held constant, and the factors affecting the relative error of temperatures calculated with the effective thermal diffusivity are discussed.
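
    As a rough illustration of what such a calculation looks like once the condition is met, the sketch below marches the 1D heat equation through a slab using a single effective diffusivity; all material and geometry values are assumed.

```python
import numpy as np

# 1D slab held at fixed end temperatures, homogenised with an effective
# thermal diffusivity; all values below are assumed for illustration.
alpha_eff = 1.2e-6            # m^2/s, effective diffusivity of the composite
L, n = 0.02, 51               # slab thickness (m), number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha_eff  # explicit scheme stable for dt <= 0.5*dx^2/alpha

T = np.full(n, 20.0)          # initial temperature, deg C
T[0] = T[-1] = 100.0          # prescribed constant end temperatures

for _ in range(2000):         # march dT/dt = alpha * d2T/dx2 forward in time
    T[1:-1] += alpha_eff * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"mid-plane temperature after {2000 * dt:.0f} s: {T[n // 2]:.1f} C")
```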

  15. Calculation of Activation Energy by OIT Method for aging evaluation of NPP cable

    International Nuclear Information System (INIS)

    Park, Kyung-Heun; Kim, Jong-Seog; Cho, Bok-Gee

    2006-01-01

    Extending the lifetime of nuclear power plants is one of the most important concerns in the world nuclear industry. Cables are long-lived items that have not been expected to be replaced during the design life of an NPP. In order to simulate the natural aging that occurs in a nuclear power plant, accelerated aging studies must be conducted, and to carry out an accelerated aging test we must first calculate the activation energy of the cable if that information is not available. The activation energy is the key element; it can be calculated from indenter modulus data, elongation data, and so on. However, there is often only a limited quantity of material available for testing, so it is important that samples for any destructive test be conserved as much as possible. When only a limited quantity of material is available, OIT (oxidation induction time) is very useful for calculating the activation energy and evaluating the cable lifetime.
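
    A sketch of the Arrhenius step (the OIT values and temperatures below are invented): since OIT behaves approximately as A*exp(Ea/(R*T)), the activation energy follows from a straight-line fit of ln(OIT) against 1/T.

```python
import numpy as np

# Hypothetical OIT measurements at several isothermal test temperatures.
T_celsius = np.array([190.0, 200.0, 210.0, 220.0])
oit_min = np.array([55.0, 28.0, 15.0, 8.0])   # oxidation induction times, min

# Arrhenius model: OIT ~ A*exp(Ea/(R*T)), so ln(OIT) is linear in 1/T.
T_kelvin = T_celsius + 273.15
slope, intercept = np.polyfit(1.0 / T_kelvin, np.log(oit_min), 1)

R = 8.314  # gas constant, J/(mol*K)
Ea = slope * R
print(f"activation energy ~ {Ea / 1000:.0f} kJ/mol")
```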

  16. Prediction of accident sequence probabilities in a nuclear power plant due to earthquake events

    International Nuclear Information System (INIS)

    Hudson, J.M.; Collins, J.D.

    1980-01-01

    This paper presents a methodology to predict accident probabilities in nuclear power plants subject to earthquakes. The resulting computer program accesses response data to compute component failure probabilities using fragility functions. Using logical failure definitions for systems, and the calculated component failure probabilities, initiating event and safety system failure probabilities are synthesized. The incorporation of accident sequence expressions allows the calculation of terminal event probabilities. Accident sequences, with their occurrence probabilities, are finally coupled to a specific release category. A unique aspect of the methodology is an analytical procedure for calculating top event probabilities based on the correlated failure of primary events
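
    The fragility-function step can be sketched as follows, under the common lognormal form (assumed here, together with the capacities); note that the code multiplies the two probabilities as if the failures were independent, whereas a unique aspect of the paper's methodology is precisely the treatment of correlated failures.

```python
from math import log
from statistics import NormalDist

def fragility(a, a_median, beta):
    """Lognormal fragility: P(failure) at peak ground acceleration a,
    given median capacity a_median and logarithmic standard deviation beta."""
    return NormalDist().cdf(log(a / a_median) / beta)

# Assumed capacities for two components in a single cut set {A AND B};
# the product below assumes independent failures, which is exactly what a
# correlated-failure treatment would replace.
p_a = fragility(0.5, a_median=0.9, beta=0.4)
p_b = fragility(0.5, a_median=1.2, beta=0.5)
print(f"P(A)={p_a:.3f}  P(B)={p_b:.3f}  P(A and B | indep.)={p_a * p_b:.4f}")
```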

  17. Probable Inference and Quantum Mechanics

    International Nuclear Information System (INIS)

    Grandy, W. T. Jr.

    2009-01-01

    In its current very successful interpretation the quantum theory is fundamentally statistical in nature. Although commonly viewed as a probability amplitude whose (complex) square is a probability, the wavefunction or state vector continues to defy consensus as to its exact meaning, primarily because it is not a physical observable. Rather than approach this problem directly, it is suggested that it is first necessary to clarify the precise role of probability theory in quantum mechanics, either as applied to, or as an intrinsic part of the quantum theory. When all is said and done the unsurprising conclusion is that quantum mechanics does not constitute a logic and probability unto itself, but adheres to the long-established rules of classical probability theory while providing a means within itself for calculating the relevant probabilities. In addition, the wavefunction is seen to be a description of the quantum state assigned by an observer based on definite information, such that the same state must be assigned by any other observer based on the same information, in much the same way that probabilities are assigned.

  18. Pipe failure probability - the Thomas paper revisited

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    2000-01-01

    Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas-approach' used insights from actual failure statistics to calculate the probability of leakage and conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas

  19. Preliminary Evaluation of the Effects of Buried Volcanoes on Estimates of Volcano Probability for the Proposed Repository Site at Yucca Mountain, Nevada

    Science.gov (United States)

    Hill, B. E.; La Femina, P. C.; Stamatakos, J.; Connor, C. B.

    2002-12-01

    Probability models that calculate the likelihood of new volcano formation in the Yucca Mountain (YM) area depend on the timing and location of past volcanic activity. Previous spatio-temporal patterns indicated a 10^-4 to 10^-3 probability of volcanic disruption of the proposed radioactive waste repository site at YM during the 10,000 year post-closure performance period (Connor et al. 2000, JGR 105:1). A recent aeromagnetic survey (Blakely et al. 2000, USGS OFR 00-188), however, identified up to 20 anomalies in alluvium-filled basins, which have characteristics indicative of buried basalt (O'Leary et al. 2002, USGS OFR 02-020). Independent evaluation of these data, combined with new ground magnetic surveys, shows that these anomalies may represent at least ten additional buried basaltic volcanoes, which have not been included in previous probability calculations. This interpretation, if true, nearly doubles the number of basaltic volcanoes within 30 km [19 mi] of YM. Moreover, the magnetic signature of about half of the recognized basaltic volcanoes in the YM area cannot be readily identified in areas where bedrock also produces large amplitude magnetic anomalies, suggesting that additional volcanoes may be present but undetected in the YM area. In the absence of direct age information, we evaluate the potential effects of alternative age assumptions on spatio-temporal probability models. Interpreted burial depths of >50 m [164 ft] suggest ages >2 Ma, based on sedimentation rates typical for these alluvial basins (Stamatakos et al., 1997, J. Geol. 105). Defining volcanic events as individual points, previous probability models generally used recurrence rates of 2-5 volcanoes/million years (v/Myr). If the identified anomalies are buried volcanoes that are all >5 Ma or uniformly distributed between 2-10 Ma, calculated probabilities of future volcanic disruption at YM change by <30%. However, a uniform age distribution between 2-5 Ma for the presumed buried volcanoes

  20. Effects of boundary conditions on thermomechanical calculations: Spent fuel test - climax

    International Nuclear Information System (INIS)

    Butkovich, T.R.

    1982-10-01

    The effects of varying certain boundary conditions on the results of finite-element calculations were studied in relation to the Spent Fuel Test - Climax. The study employed a thermomechanical model with the ADINA structural analysis. Nodal temperature histories were generated with the compatible ADINAT heat flow codes. The boundary conditions studied included: (1) the effect of boundary loading on three progressively larger meshes, (2) plane strain vs plane stress conditions, and (3) the effect of isothermal boundaries on a small mesh and on a significantly larger mesh. The results showed that different mesh sizes had an insignificant effect on isothermal boundaries up to 5 years, while on the smallest and largest mesh the maximum temperature difference in the mesh was 0°C. In the corresponding ADINA calculation, these different mesh sizes produce insignificant changes in the stress field and displacements in the region of interest near the heat sources and excavations. On the other hand, plane stress produces horizontal and vertical stress differences approx. 9% higher than does plane strain.

  1. Cross Check of NOvA Oscillation Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Parke, Stephen J. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States). Dept. of Theoretical Physics; Messier, Mark D. [Indiana Univ., Bloomington, IN (United States). Dept. of Physics

    2018-01-12

    In this note we perform a cross check of the programs used by NOvA to calculate the 3-flavor oscillation probabilities with an independent program using a different method. The comparison is performed at 6 significant figures and the agreement, $|\Delta P|/P$, is better than $10^{-5}$, as good as can be expected with 6 significant figures. In addition, a simple and accurate alternative method to calculate the oscillation probabilities is outlined and compared in the L/E range and matter density relevant for the NOvA experiment.
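
    For orientation only, a two-flavour vacuum approximation (not the full 3-flavour matter calculation that the note cross-checks; the parameter values are assumed) shows how an oscillation probability depends on L/E:

```python
import math

def p_numu_to_nue_2flavor(L_km, E_GeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
    """Two-flavour vacuum approximation, illustration only; default values
    roughly mimic sin^2(2*theta13) and |Delta m^2_32| (assumed)."""
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# NOvA-like baseline and energy.
print(p_numu_to_nue_2flavor(L_km=810.0, E_GeV=2.0))
```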

  2. Implementation of decommissioning materials conditional clearance process to the OMEGA calculation code

    International Nuclear Information System (INIS)

    Zachar, Matej; Necas, Vladimir; Daniska, Vladimir

    2011-01-01

    The activities performed during decommissioning of a nuclear installation inevitably lead to the production of a large amount of radioactive material to be managed. A significant part of this material has a radioactivity level low enough to allow its release to the environment without any restriction on further use. On the other hand, materials with radioactivity slightly above the defined unconditional clearance level may be released conditionally for a specific purpose, in accordance with a developed scenario assuring that radiation exposure limits for the population are not exceeded. Managing decommissioning materials in this way could lead to recycling and reuse of more solid material and save radioactive waste repository volume. In the paper, an implementation of the conditional release process in the OMEGA Code, which is used for the calculation of decommissioning parameters, is analyzed in detail. The analytical approach to assessing the material parameters first assumes a definition of radiological limit conditions, based on the evaluation of possible scenarios for conditionally released materials, and their application to the appropriate sorter type in the existing material and radioactivity flow system. Other calculation procedures with the relevant technological or economic parameters, mathematically describing e.g. final radiation monitoring or transport outside the site, are applied in the OMEGA Code in the next step. Together with the limits, the new procedures, creating an independent material stream, allow evaluation of the conditional material release process during decommissioning. Model calculations evaluating various scenarios with different input parameters and considering conditional release of materials to the environment were performed to verify the implemented methodology. Output parameters and results of the model assessment are presented and discussed in the final part of the paper.

  3. Probabilities for profitable fungicide use against gray leaf spot in hybrid maize.

    Science.gov (United States)

    Munkvold, G P; Martinson, C A; Shriver, J M; Dixon, P M

    2001-05-01

    ABSTRACT Gray leaf spot, caused by the fungus Cercospora zeae-maydis, causes considerable yield losses in hybrid maize grown in the north-central United States and elsewhere. Nonchemical management tactics have not adequately prevented these losses. The probability of profitably using fungicide application as a management tool for gray leaf spot was evaluated in 10 field experiments under conditions of natural inoculum in Iowa. Gray leaf spot severity in untreated control plots ranged from 2.6 to 72.8% for the ear leaf and from 3.0 to 7.7 (1 to 9 scale) for whole-plot ratings. In each experiment, fungicide applications with propiconazole or mancozeb significantly reduced gray leaf spot severity. Fungicide treatment also had a significant effect on yield, and these data were used to calculate for each experiment the probability of achieving a positive net return with one or two propiconazole applications, based on the mean yields and standard deviations for treated and untreated plots, the price of grain, and the costs of the fungicide applications. For one application, the probability ranged from approximately 0.06 to more than 0.99, and exceeded 0.50 in six of nine scenarios (specific experiment/hybrid). The highest probabilities occurred in the 1995 experiments with the most susceptible hybrid. Probabilities were almost always higher for a single application of propiconazole than for two applications. These results indicate that a single application of propiconazole frequently can be profitable for gray leaf spot management in Iowa, but the probability of a profitable application is strongly influenced by hybrid susceptibility. The calculation of probabilities for positive net returns was more informative than mean separation in terms of assessing the economic success of the fungicide applications.
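
    One plausible reading of the net-return calculation (the exact formula is not given in the abstract; all numbers below are invented) treats the treated and untreated yields as independent normals, so the net return price*(Yt - Yu) - cost is itself normal and the probability of it being positive follows directly:

```python
from math import sqrt
from statistics import NormalDist

# Invented trial summaries (t = propiconazole-treated, u = untreated).
mean_t, sd_t = 9.8, 0.9      # grain yield, Mg/ha
mean_u, sd_u = 9.2, 1.0
price = 90.0                 # $/Mg of grain
cost = 35.0                  # $/ha for one application

# Treating yields as independent normals, the net return
# price*(Yt - Yu) - cost is normal with the moments below.
mu = price * (mean_t - mean_u) - cost
sigma = price * sqrt(sd_t**2 + sd_u**2)
print(f"P(positive net return) = {1.0 - NormalDist(mu, sigma).cdf(0.0):.2f}")
```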

  4. Schema bias in source monitoring varies with encoding conditions: support for a probability-matching account.

    Science.gov (United States)

    Kuhlmann, Beatrice G; Vaterrodt, Bianca; Bayen, Ute J

    2012-09-01

    Two experiments examined reliance on schematic knowledge in source monitoring. Based on a probability-matching account of source guessing, a schema bias will only emerge if participants do not have a representation of the source-item contingency in the study list, or if the perceived contingency is consistent with schematic expectations. Thus, the account predicts that encoding conditions that affect contingency detection also affect schema bias. In Experiment 1, the schema bias commonly found when schematic information about the sources is not provided before encoding was diminished by an intentional source-memory instruction. In Experiment 2, the depth of processing of schema-consistent and schema-inconsistent source-item pairings was manipulated. Participants consequently overestimated the occurrence of the pairing type they processed in a deep manner, and their source guessing reflected this biased contingency perception. Results support the probability-matching account of source guessing. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  5. Escape probabilities for fluorescent x-rays

    International Nuclear Information System (INIS)

    Dance, D.R.; Day, G.J.

    1985-01-01

    Computation of the energy absorption efficiency of an x-ray photon detector involves consideration of the histories of the secondary particles produced in any initial or secondary interaction which may occur within the detector. In particular, the K or higher shell fluorescent x-rays which may be emitted following a photoelectric interaction can carry away a large fraction of the energy of the incident photon, especially if this energy is just above an absorption edge. The effects of such photons cannot be ignored and a correction term, depending upon the probability that the fluorescent x-rays will escape from the detector, must be applied to the energy absorption efficiency. For detectors such as x-ray intensifying screens, it has been usual to calculate this probability by numerical integration. In this note analytic expressions are derived for the escape probability of fluorescent photons from planar detectors in terms of exponential integral functions. Rational approximations for these functions are readily available and these analytic expressions therefore facilitate the computation of photon absorption efficiencies. A table is presented which should obviate the need for calculating the escape probability for most cases of interest. (author)

  6. A Monte Carlo calculation of the pionium break-up probability with different sets of pionium target cross sections

    International Nuclear Information System (INIS)

    Santamarina, C; Schumann, M; Afanasyev, L G; Heim, T

    2003-01-01

    Chiral perturbation theory predicts the lifetime of pionium, a hydrogen-like π+π- atom, to better than 3% precision. The goal of the DIRAC experiment at CERN is to obtain and check this value experimentally by measuring the break-up probability of pionium in a target. In order to accurately measure the lifetime one needs to know the relationship between the break-up probability and the lifetime to 1% accuracy. We have obtained this dependence by modelling the evolution of pionic atoms in the target using Monte Carlo methods. The model relies on the computation of the pionium-target-atom interaction cross sections. Three different sets of pionium-target cross sections with varying degrees of complexity were used: from the simplest first-order Born approximation involving only the electrostatic interaction to a more advanced approach, taking into account multiphoton exchanges and relativistic effects. We conclude that, in order to obtain the pionium lifetime to 1% accuracy from the break-up probability, the pionium-target cross sections must be known with the same accuracy for the low excited bound states of the pionic atom. This result has been achieved, for low Z targets, with the two most precise cross section sets. For large Z targets only the set accounting for multiphoton exchange satisfies the condition.

  7. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

    Transition probabilities in unit time and probability fluxes are compared in studying the elementary quantum processes - the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, and so the use of transition probabilities W instead of probability fluxes Π in calculating the particle fluxes may lead to serious errors. The quantity W represents the rate of change with time of the population of the energy levels relating partly to the real states and partly to the virtual ones, and it cannot be directly measured in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this the viewpoint of an observer on the physical properties of real particles continuously varies with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on using the notion of probability amplitude. As a result, the probability amplitudes lose their physical meaning. All the physical information on quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π permits one in principle to make a choice of the correct theory of quantum transitions on the basis of experimental data. (author)

  8. [Probabilities cannot be calculated retrospectively--not even in the courtroom].

    Science.gov (United States)

    van Gijn, J

    2005-12-24

    Chance events are part of everyday life, but coincidence of diseases often raises suspicions about hidden causes, for example when power lines are blamed for the geographical clustering of cancer. Recently, criminal procedures in the Netherlands have revolved around the question of whether statistical 'predictions' are a valid reason to hold a hospital nurse accountable for the occurrence of excess deaths during her duty hours, or a kindergarten employee for unexplained respiratory problems in several infants. In both cases, the appeals court judges did not accept the statistical 'argument' in the absence of other evidence. In the UK, however, Sally Clark's initial life sentence for the double murder of her two babies was largely based on 'probabilities in retrospect', put forward by the paediatrician Sir Roy Meadow as an expert witness. Four years later she was acquitted, whereas Meadow was struck off the medical register on a charge of professional misconduct. There is no Bayesian or other mathematical solution to the problem of chance events. Only the detection of causal factors that are plausible and supported by new evidence can help to reinterpret coincidences as relationships. Scrupulous reasoning about probabilities is required, not only of physicians but also of judges and politicians.

  9. The Importance of Conditional Probability in Diagnostic Reasoning and Clinical Decision Making: A Primer for the Eye Care Practitioner.

    Science.gov (United States)

    Sanfilippo, Paul G; Hewitt, Alex W; Mackey, David A

    2017-04-01

    To outline and detail the importance of conditional probability in clinical decision making and discuss the various diagnostic measures eye care practitioners should be aware of in order to improve the scope of their clinical practice. We conducted a review of the importance of conditional probability in diagnostic testing for the eye care practitioner. Eye care practitioners use diagnostic tests on a daily basis to assist in clinical decision making and optimizing patient care and management. These tests provide probabilistic information that can enable the clinician to increase (or decrease) their level of certainty about the presence of a particular condition. While an understanding of the characteristics of diagnostic tests is essential to facilitate proper interpretation of test results and disease risk, many practitioners either confuse or misinterpret these measures. In the interests of their patients, practitioners should be aware of the basic concepts associated with diagnostic testing and the simple mathematical rule that underpins them. Importantly, the practitioner needs to recognize that the prevalence of a disease in the population greatly determines the clinical value of a diagnostic test.
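
    The core of the argument is Bayes' rule: the post-test probability after a positive result depends on disease prevalence as well as on sensitivity and specificity. A small sketch with assumed test characteristics:

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same assumed test (90% sensitive, 95% specific) performs very
# differently depending on disease prevalence in the population.
for prev in (0.01, 0.10, 0.50):
    ppv = post_test_probability(prev, 0.90, 0.95)
    print(f"prevalence {prev:4.0%} -> post-test probability {ppv:.2f}")
```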

  10. Calculating method on human error probabilities considering influence of management and organization

    International Nuclear Information System (INIS)

    Gao Jia; Huang Xiangrui; Shen Zupei

    1996-01-01

    This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level influence diagram (ID), originally a tool for the construction and representation of decision-making trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a method for quantitative assessment of the ID which can be applied to quantifying the probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach.

  11. HELIOS: transformation laws for multiple-collision probabilities with angular dependence

    International Nuclear Information System (INIS)

    Villarino, E.A.; Stamm'ler, R.J.J.

    1996-01-01

    In the lattice code HELIOS, neutron and gamma transport in a given system is treated by the CCCP (current-coupling collision-probability) method. The system is partitioned into space elements which are coupled by currents. Inside the space elements first-flight probabilities are used to obtain the coefficients of the coupling equation and of the equations for the fluxes. The calculation of these coefficients is expensive in CPU time on two scores: the evaluation of the first-flight probabilities, and the matrix inversion to convert these probabilities into the desired coefficients. If the cross sections of two geometrically equal space elements, or of the same element at an earlier burnup level, differ by less than a small fraction, considerable CPU time can be saved by using transformation laws. Previously, such laws were derived for first-flight probabilities; here, they are derived for the multiple-collision coefficients of the CCCP equations. They avoid not only the expensive calculations of the first-flight probabilities, but also the subsequent matrix inversion. Various examples illustrate the savings achieved by using these new transformation laws - or by directly using earlier calculated coefficients, if the cross section differences are negligible. (author)

  12. Conditional non-independence of radiographic image features and the derivation of post-test probabilities – A mammography BI-RADS example

    International Nuclear Information System (INIS)

    Benndorf, Matthias

    2012-01-01

    Bayes' theorem has proven to be one of the cornerstones in medical decision making. It allows for the derivation of post-test probabilities, which in case of a positive test result become positive predictive values. If several test results are observed successively, Bayes' theorem may be used with assumed conditional independence of test results or with incorporated conditional dependencies. Herein it is examined whether radiographic image features should be considered conditionally independent diagnostic tests when post-test probabilities are to be derived. For this purpose the mammographic mass dataset from the UCI (University of California, Irvine) machine learning repository is analysed. It comprises the description of 961 (516 benign, 445 malignant) mammographic mass lesions according to the BI-RADS (Breast Imaging: Reporting and Data System) lexicon. Firstly, an exhaustive correlation matrix is presented for mammography BI-RADS features among benign and malignant lesions separately; correlation can be regarded as a measure of conditional dependence. Secondly, it is shown that the derived positive predictive values for the conjunction of the two features “irregular shape” and “spiculated margin” differ significantly depending on whether conditional dependencies are incorporated into the decision process or not. It is concluded that radiographic image features should not generally be regarded as conditionally independent diagnostic tests.
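
    The effect can be reproduced with a few assumed numbers (these are illustrative, not the values from the UCI dataset): computing the positive predictive value for the conjunction of two features once under conditional independence and once from joint conditional probabilities gives visibly different answers.

```python
# Illustrative values only (not the UCI dataset): class prior and feature
# probabilities for malignant (M) and benign (B) mammographic lesions.
p_m = 0.46                           # prior P(malignant)
p_shape_m, p_margin_m = 0.70, 0.55   # P(irregular | M), P(spiculated | M)
p_shape_b, p_margin_b = 0.25, 0.08   # same features among benign lesions
p_both_m = 0.50                      # joint P(both | M) > 0.70 * 0.55
p_both_b = 0.06                      # joint P(both | B) > 0.25 * 0.08

def ppv(p_pos_m, p_pos_b):
    """Positive predictive value by Bayes' theorem."""
    return p_pos_m * p_m / (p_pos_m * p_m + p_pos_b * (1.0 - p_m))

# Conditional independence multiplies the per-feature conditionals; the
# joint conditionals encode the dependence and give a different answer.
print(f"PPV assuming independence:   {ppv(p_shape_m * p_margin_m, p_shape_b * p_margin_b):.2f}")
print(f"PPV from joint conditionals: {ppv(p_both_m, p_both_b):.2f}")
```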

  13. Conditional survival in patients with chronic myeloid leukemia in chronic phase in the era of tyrosine kinase inhibitors.

    Science.gov (United States)

    Sasaki, Koji; Kantarjian, Hagop M; Jain, Preetesh; Jabbour, Elias J; Ravandi, Farhad; Konopleva, Marina; Borthakur, Gautam; Takahashi, Koichi; Pemmaraju, Naveen; Daver, Naval; Pierce, Sherry A; O'Brien, Susan M; Cortes, Jorge E

    2016-01-15

    Tyrosine kinase inhibitors (TKIs) significantly improve survival in patients with chronic myeloid leukemia in chronic phase (CML-CP). Conditional probability provides survival information in patients who have already survived for a specific period of time after treatment. Cumulative response and survival data from 6 consecutive frontline TKI clinical trials were analyzed. Conditional probability was calculated for failure-free survival (FFS), transformation-free survival (TFS), event-free survival (EFS), and overall survival (OS) according to depth of response within 1 year of the initiation of TKIs, including complete cytogenetic response, major molecular response, and molecular response with a 4-log or 4.5-log reduction. A total of 483 patients with a median follow-up of 99.4 months from the initiation of treatment with TKIs were analyzed. Conditional probabilities of FFS, TFS, EFS, and OS for 1 additional year for patients alive after 12 months of therapy ranged from 92.0% to 99.1%, 98.5% to 100%, 96.2% to 99.6%, and 96.8% to 99.7%, respectively. Conditional FFS for 1 additional year did not improve with a deeper response each year. Conditional probabilities of TFS, EFS, and OS for 1 additional year were maintained at >95% during the period. In the era of TKIs, patients with chronic myeloid leukemia in chronic phase who survived for a certain number of years maintained excellent clinical outcomes in each age group. Cancer 2016;122:238-248. © 2015 American Cancer Society.
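
    The underlying arithmetic is simple: conditional survival is a ratio of survival probabilities, P(T > t + x | T > t) = S(t + x)/S(t). A sketch with an assumed survival curve:

```python
import numpy as np

# Assumed overall-survival curve S(t), fraction alive at yearly time points.
S = np.array([1.00, 0.97, 0.95, 0.93, 0.91, 0.89, 0.88, 0.87, 0.86])

def conditional_survival(t, extra=1):
    """P(alive at t + extra | alive at t) = S(t + extra) / S(t)."""
    return S[t + extra] / S[t]

for t in range(len(S) - 1):
    print(f"alive at year {t}: P(one more year) = {conditional_survival(t):.3f}")
```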

  14. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    Science.gov (United States)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  15. The relevance of parametric U-uptake models in ESR age calculations

    International Nuclear Information System (INIS)

    Gruen, Rainer

    2009-01-01

    In ESR dating, three basic parametric U-uptake models have been applied for dating teeth: early U-uptake (EU: closed system), linear U-uptake (LU), and recent U-uptake (RU: it is assumed that the dose-rate contribution of U in the dental tissues is zero). In many ESR dating publications it is still assumed that samples comply with one or the other parametric U-uptake model, or that their correct age lies somewhere between EU and LU. Observations of the spatial distribution of uranium in dental tissues show that it is difficult to predict any relationship between the relative uptake in the dental tissues. Combined U-series/ESR age estimates can give insights into the actual U-uptake. An evaluation of published data shows that for cave sites a significant number of results fall outside the EU-LU bracket, while for open-air sites the majority of data are outside this bracket, particularly showing greatly delayed U-uptake. This may be due to changes in the hydrological system, leading to erosion which exposes the open-air site. U-leaching has also been observed in samples from open-air sites, in which case any reasonable age calculation is impossible.

  16. Pre-Service Teachers' Conceptions of Probability

    Science.gov (United States)

    Odafe, Victor U.

    2011-01-01

    Probability knowledge and skills are needed in science and in making daily decisions that are sometimes made under uncertain conditions. Hence, there is the need to ensure that the pre-service teachers of our children are well prepared to teach probability. Pre-service teachers' conceptions of probability are identified, and ways of helping them…

  17. Rainfall and net infiltration probabilities for future climate conditions at Yucca Mountain

    International Nuclear Information System (INIS)

    Long, A.; Childs, S.W.

    1993-01-01

    Performance assessment of repository integrity is a task rendered difficult because it requires predicting the future. This challenge has occupied many scientists who realize that the best assessments are required to maximize the probability of successful repository siting and design. As part of a performance assessment effort directed by the EPRI, the authors have used probabilistic methods to assess the magnitude and timing of net infiltration at Yucca Mountain. A previously published mathematical model for net infiltration incorporated a probabilistic treatment of climate, surface hydrologic processes and a mathematical model of the infiltration process. In this paper, we present the details of the climatological analysis. The precipitation model is event-based, simulating characteristics of modern rainfall near Yucca Mountain, then extending the model to most likely values for different degrees of pluvial climates. Next the precipitation event model is fed into a process-based infiltration model that considers spatial variability in parameters relevant to net infiltration of Yucca Mountain. The model predicts that average annual net infiltration at Yucca Mountain will range from a mean of about 1 mm under present climatic conditions to a mean of at least 2.4 mm under full glacial (pluvial) conditions. Considerable variations about these means are expected to occur from year to year.

  18. Approaches to Evaluating Probability of Collision Uncertainty

    Science.gov (United States)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done so mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.
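
    A minimal sketch of the resampling idea (geometry, covariances, and uncertainty models are all assumed): estimate the 2D Pc by Monte Carlo for nominal inputs, then resample the uncertain inputs to obtain a distribution of Pc values instead of a point estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def pc_2d(miss, sx, sy, radius, n=200_000):
    """2D Pc: mass of the relative-position Gaussian inside the combined
    hard-body circle, estimated by Monte Carlo."""
    x = rng.normal(miss, sx, n)
    y = rng.normal(0.0, sy, n)
    return float(np.mean(x * x + y * y < radius * radius))

# Point estimate with nominal (assumed) inputs, distances in metres.
print("nominal Pc:", pc_2d(miss=500.0, sx=300.0, sy=150.0, radius=20.0))

# Resample the uncertain inputs -- covariance scale and hard-body radius --
# to obtain a distribution of Pc rather than a single number.
pcs = [pc_2d(500.0, 300.0 * k, 150.0 * k, rng.uniform(10.0, 30.0), n=50_000)
       for k in rng.lognormal(0.0, 0.2, 100)]
print("Pc 5th-95th percentile:", np.percentile(pcs, [5, 95]))
```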

  19. Evaluations of Structural Failure Probabilities and Candidate Inservice Inspection Programs

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Simonen, Fredric A.

    2009-05-01

    The work described in this report applies probabilistic structural mechanics models to predict the reliability of nuclear pressure boundary components. These same models are then applied to evaluate the effectiveness of alternative programs for inservice inspection to reduce these failure probabilities. Results of the calculations support the development and implementation of risk-informed inservice inspection of piping and vessels. Studies have specifically addressed the potential benefits of ultrasonic inspections to reduce failure probabilities associated with fatigue crack growth and stress-corrosion cracking. Parametric calculations were performed with the computer code pc-PRAISE to generate an extensive set of plots to cover a wide range of pipe wall thicknesses, cyclic operating stresses, and inspection strategies. The studies have also addressed critical inputs to fracture mechanics calculations such as the parameters that characterize the number and sizes of fabrication flaws in piping welds. Other calculations quantify uncertainties associated with the inputs to the calculations, the uncertainties in the fracture mechanics models, and the uncertainties in the resulting calculated failure probabilities. A final set of calculations addresses the effects of flaw sizing errors on the effectiveness of inservice inspection programs.

  20. Propensity, Probability, and Quantum Theory

    Science.gov (United States)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  1. Validation of AEGIS/SCOPE2 system through actual core follow calculations with irregular operational conditions

    International Nuclear Information System (INIS)

    Tabuchi, M.; Tatsumi, M.; Ohoka, Y.; Nagano, H.; Ishizaki, K.

    2017-01-01

    This paper gives an overview of the AEGIS/SCOPE2 system, an advanced in-core fuel management system for pressurized water reactors, and presents its validation through actual core follow calculations including irregular operational conditions. The AEGIS and SCOPE2 codes adopt more detailed and accurate calculation models than the current core design codes, while computational cost is minimized with various numerical and computational techniques. Verification and validation of AEGIS/SCOPE2 have been performed intensively to confirm the validity of the system. As a part of the validation, core follow calculations had been carried out mainly for typical operational conditions. After the Fukushima Daiichi nuclear power plant accident, however, all the nuclear reactors in Japan went through long suspensions and irregular operational conditions. In such situations, data measured during the restart and operation of the reactors provide good tests for validation of the codes. Therefore, core follow calculations were carried out with AEGIS/SCOPE2 for various cases including zero-power reactor physics tests under irregular operational conditions. Comparisons between measured data and predictions by AEGIS/SCOPE2 demonstrated the validity and robustness of the system. (author)

  2. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
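
    A sketch of a probability machine in this spirit, using a random forest on data simulated from a logistic model (simulation settings assumed): predict_proba supplies the conditional probability estimates, and a counterfactual contrast of two feature vectors gives an effect size on the risk-difference scale.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulate a binary outcome from a logistic model with an interaction.
n = 5000
X = rng.normal(size=(n, 3))
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# The "probability machine": predict_proba estimates P(y = 1 | x) without
# any parametric model form being specified.
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25,
                            random_state=0).fit(X, y)

# Counterfactual effect size: risk difference for a unit change in the
# first feature, holding the other features fixed.
x_pair = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
p0, p1 = rf.predict_proba(x_pair)[:, 1]
print(f"P(y|x)={p0:.2f}  P(y|x + e1)={p1:.2f}  risk difference={p1 - p0:.2f}")
```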

  3. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    DEFF Research Database (Denmark)

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    Sand-lenses in clay till mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution of sand-lenses in clay till has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities (TPROGS) of alternating geological facies. The second method, multiple-point statistics, uses training images to estimate the conditional probability of sand-lenses at a certain location. Both methods respect field observations such as local stratigraphy, however, only the multiple-point statistics can......
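
    The first approach reduces, in one dimension, to sampling a Markov chain from a transition-probability matrix. The sketch below uses an invented two-facies matrix (not calibrated to the field data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented vertical transition-probability matrix per 0.1 m step between
# two facies: 0 = clay till, 1 = sand lens.
P = np.array([[0.95, 0.05],    # till -> till, till -> sand
              [0.30, 0.70]])   # sand -> till, sand -> sand

def simulate_column(n_steps, start=0):
    """Sample a 1D facies sequence from the Markov chain."""
    seq = [start]
    for _ in range(n_steps - 1):
        seq.append(int(rng.choice(2, p=P[seq[-1]])))
    return seq

column = simulate_column(500)
print("simulated sand proportion:", np.mean(column))
# Stationary sand proportion: 0.05 / (0.05 + 0.30) ~ 0.14.
```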

  4. Evaluation of cable aging degradation based on plant operating condition

    International Nuclear Information System (INIS)

    Kim, Jong-Seog

    2005-01-01

    Extending the lifetime of nuclear power plants (hereafter ''NPP'') is one of the most important concerns of the world nuclear industry. Cables are among the long-lived items that have not been considered for replacement during the design life of an NPP. To extend cable life beyond the design life, we need to prove that the design life is too conservative compared with the actual aging. Condition monitoring is one of the useful ways of evaluating the aging condition of a cable. In order to simulate the natural aging occurring in a nuclear power plant, a study on accelerated aging needs to be conducted first. In this paper, the mechanical aging degradation of cable jackets was evaluated after accelerated aging under continuous heating and under intermittent heating. Contrary to general expectation, intermittent heating of the cable jacket showed lower aging degradation (50% in break-elongation and 60% in indenter modulus) compared with continuous heating. Given a plant maintenance period of 1 month after every 12 or 18 months of operation, we can easily deduce that the lifetime of a cable jacket can be extended well beyond the estimate from the general EQ (Environmental Qualification) test, which adopts continuous accelerated aging for determining cable life. Therefore, a systematic approach that considers the actual environmental conditions of a nuclear power plant is required for determining the life of cables. (author)

  5. Comparison of different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue covering the period 1900-2015 for magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was the most suitable of the three for this region.
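
    The conditional probability referred to above has a simple closed form once a distribution is fitted to the catalogue's inter-event times: given that τ years have elapsed since the last event, the probability of an event within the next t years is [F(τ+t) − F(τ)]/[1 − F(τ)]. A minimal sketch, using illustrative inter-event times rather than the paper's catalogue:

```python
# A hedged sketch of the conditional-probability calculation: fit a Weibull
# distribution to inter-event times and compute the probability of an
# earthquake within the next t years given tau years have already elapsed.
# The inter-event times below are illustrative, not the NAFZ catalogue.
import numpy as np
from scipy import stats

intervals = np.array([12.0, 8.5, 23.0, 15.5, 9.0, 30.0, 18.0])  # years

# Fit a two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(intervals, floc=0)

def conditional_prob(tau, t, shape, scale):
    """P(event in (tau, tau+t] | no event by tau) from the fitted Weibull."""
    F = lambda x: stats.weibull_min.cdf(x, shape, scale=scale)
    return (F(tau + t) - F(tau)) / (1.0 - F(tau))

print(conditional_prob(tau=10.0, t=5.0, shape=shape, scale=scale))
```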

  6. Solving probability reasoning based on DNA strand displacement and probability modules.

    Science.gov (United States)

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used in probabilistic reasoning. To support probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind." They have been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Evaluation of nuclear power plant component failure probability and core damage probability using simplified PSA model

    International Nuclear Information System (INIS)

    Shimada, Yoshio

    2000-01-01

    It is anticipated that changes in the frequency of surveillance tests, preventive maintenance, or parts replacement of safety-related components may change component failure probabilities and, as a result, the core damage probability, and that the change depends on the initiating event frequency and the component type. This study assessed the change in core damage probability using a simplified PSA model capable of calculating core damage probability in a short time, developed by the US NRC to process accident sequence precursors, as various component failure probabilities were varied between 0 and 1 and as Japanese or American initiating event frequency data were used. The analysis showed the following. (1) The frequency of surveillance tests, preventive maintenance, or parts replacement of motor-driven pumps (high pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability increases. (2) Core damage probability is insensitive to changes in surveillance test frequency for motor-operated valves and the turbine-driven auxiliary feedwater pump, since the change in core damage probability is small when their failure probabilities change by about one order of magnitude. (3) The change in core damage probability is small when Japanese failure probability data are applied to the emergency diesel generator, even if the failure probability changes by an order of magnitude from the base value. On the other hand, when American failure probability data are applied, the increase in core damage probability is large when the failure probability increases. Therefore, when Japanese failure probability data are applied, core damage probability is insensitive to changes in surveillance test frequency and the like. (author)
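
    The scan over failure probabilities described above is easy to picture with a toy model. The sketch below is an illustrative stand-in for the NRC's simplified PSA, not the actual model: a single initiating event, a small Boolean structure for mitigation failure, and a sweep of one component's failure probability between 0 and 1:

```python
# A toy stand-in (not the US NRC model) for the sensitivity study described
# above: sweep one component's failure probability and recompute a simple
# core damage frequency. Numbers and Boolean structure are illustrative.
import numpy as np

ie_freq = 1.0e-2      # initiating event frequency per year (illustrative)
p_valve = 1.0e-3      # motor-operated valve failure probability
p_diesel = 5.0e-2     # emergency diesel generator failure probability

for p_pump in np.logspace(-4, 0, 5):
    # Core damage if (pump AND valve fail) OR the diesel fails
    # (rare-event approximation for the OR gate).
    p_top = p_pump * p_valve + p_diesel
    print(f"p_pump = {p_pump:8.1e}  ->  CDF = {ie_freq * p_top:8.2e} /yr")
```

    Even this toy reproduces the qualitative finding: when the swept component sits in an AND gate alongside a dominant single failure, the core damage frequency barely moves until its failure probability approaches 1.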

  8. Evaluation of probability and hazard in nuclear energy

    International Nuclear Information System (INIS)

    Novikov, V.Ya.; Romanov, N.L.

    1979-01-01

    Various methods for evaluating accident probability at NPPs are proposed, because statistical estimates of NPP safety are unreliable. The concept of subjective probability for the quantitative analysis of safety and hazard is described. The interpretation of probability as the actual degree of belief of an expert is taken as the basis of this concept. It is suggested that event uncertainty be studied within the framework of subjective probability theory, which not only permits but demands that expert opinions be taken into account when evaluating probabilities. These subjective expert evaluations affect to a certain extent the calculation of the usual mathematical probability of an event. The technique is advantageous for the consideration of a single experiment or random event.

  9. Collision probability method for discrete presentation of space in cylindrical cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1969-08-01

    A numerical method suitable for integrating the one-group integral transport equation is obtained by expanding the flux and the neutron source in series of the radius squared when calculating the parameters of a cylindrically symmetric reactor cell. Separation of variables in the (x,y) plane enables analytical integration in one direction and an efficient Gauss quadrature formula in the other. A white boundary condition is used for determining the neutron balance. A suitable choice of the distribution of spatial points in the fuel and moderator condenses the procedure for determining the transport matrix and accelerates convergence when calculating the absorption in the reactor cell. In comparison with other collision probability methods, the proposed procedure is a simple mathematical model which demands less computer capacity and shorter computing time.

  10. Probabilities the little numbers that rule our lives

    CERN Document Server

    Olofsson, Peter

    2014-01-01

    Praise for the First Edition: "If there is anything you want to know, or remind yourself, about probabilities, then look no further than this comprehensive, yet wittily written and enjoyable, compendium of how to apply probability calculations in real-world situations." - Keith Devlin, Stanford University, National Public Radio's "Math Guy" and author of The Math Gene and The Unfinished Game. From probable improbabilities to regular irregularities, Probabilities: The Little Numbers That Rule Our Lives, Second Edition investigates the often surprising effects of risk and chance in our lives. Featur…

  11. Reduced N-acetylaspartate content in the frontal part of the brain in patients with probable Alzheimer's disease

    DEFF Research Database (Denmark)

    Christiansen, P; Schlosser, A; Henriksen, O

    1995-01-01

    The fully relaxed water signal was used as an internal standard in a STEAM experiment to calculate the concentrations of the metabolites N-acetylaspartate (NAA), creatine + phosphocreatine [Cr + PCr], and choline-containing metabolites (Cho) in the frontal part of the brain in 12 patients with probable Alzheimer's disease. Eight age-matched healthy volunteers served as controls. Furthermore, T1 and T2 relaxation times of the metabolites and the signal ratios NAA/Cho, NAA/[Cr + PCr], and [Cr + PCr]/Cho at four different echo times (TE) and two different repetition times (TR) were calculated. The experiments were carried out using a Siemens Helicon SP 63/84 whole-body MR scanner at 1.5 T. The concentration of NAA was significantly lower in the patients with probable Alzheimer's disease than in the healthy volunteers. No significant difference was found for any other metabolite concentration…

  12. Using Fuzzy Probability Weights in Cumulative Prospect Theory

    Directory of Open Access Journals (Sweden)

    Užga-Rebrovs Oļegs

    2016-12-01

    Full Text Available During the past years, a rapid growth has been seen in descriptive approaches to decision choice. As opposed to normative expected utility theory, these approaches are based on individuals' subjective perception of probabilities, which takes place in real situations of risky choice. Perceptions of this kind are modelled on the basis of probability weighting functions. In cumulative prospect theory, which is the focus of this paper, decision weights for prospect outcomes are calculated from the obtained probability weights. If value functions are constructed on the sets of positive and negative outcomes, then, based on the outcome value evaluations and outcome decision weights, generalised evaluations of prospect value are calculated, which are the basis for choosing an optimal prospect.
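
    To make the weighting step concrete, here is a minimal sketch using the Tversky-Kahneman (1992) weighting function w(p) = p^γ / (p^γ + (1-p)^γ)^(1/γ); the parameter value and the prospect are illustrative, not taken from the paper:

```python
# A minimal sketch of cumulative decision weights for a gains-only prospect,
# using the Tversky-Kahneman (1992) probability weighting function. The
# gamma value and the prospect itself are illustrative.
def w(p, g=0.61):
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

# Prospect: gains sorted from best to worst, with their probabilities.
outcomes = [100.0, 50.0, 0.0]
probs = [0.1, 0.3, 0.6]

# Cumulative decision weight of outcome i (gains, best-to-worst order):
# pi_i = w(p_1 + ... + p_i) - w(p_1 + ... + p_{i-1}).
weights, cum = [], 0.0
for p in probs:
    weights.append(w(cum + p) - w(cum))
    cum += p
print(weights)   # decision weights that replace the raw probabilities
```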

  13. INLUX-DBR - A calculation code to calculate indoor natural illuminance inside buildings under various sky conditions

    International Nuclear Information System (INIS)

    Ferraro, V.; Igawa, N.; Marinelli, V.

    2010-01-01

    A calculation code, named INLUX-DBR, is presented, which is a modified version of INLUX code, able to predict the illuminance distribution on the inside surfaces of a room with six walls and a window, and on the work plane. At each desired instant the code solves the system of the illuminance equations of each surface element, characterized by the latter's reflection coefficient and its view factors toward the other elements. In the model implemented in the code, the sky-diffuse luminance distribution, the sun beam light and the light reflected from the ground toward the room are considered. The code was validated by comparing the calculated values of illuminance with the experimental values measured inside a scale model (1:5) of a building room, in various sky conditions of overcast, clear and intermediate days. The validation is performed using the sky luminance data measured by a sky scanner and the measured beam illuminance of the sun as input data. A comparative analysis of some of the well-known calculation models of sky luminance, namely Perez, Igawa and CIE models was also carried out, comparing the code predictions and the measured values of inside illuminance in the scale model.

  14. Reaction probability derived from an interpolation formula for diffusion processes with an absorptive boundary condition

    International Nuclear Information System (INIS)

    Misawa, T.; Itakura, H.

    1995-01-01

    The present article focuses on the dynamical simulation of molecular motion in liquids. In simulations involving diffusion-controlled reactions with discrete time steps, the lack of information about the trajectory within a time step may result in a failure to count reactions of the particles within the step. In order to rectify this, an interpolated diffusion process is used. The process is derived from a stochastic interpolation formula recently developed by the first author [J. Math. Phys. 34, 775 (1993)]. In this method, the probability that a reaction has occurred during the time step, given the initial and final positions of the particles, is calculated. Some numerical examples confirm that the theoretical result is an improvement over the Clifford-Green work [Mol. Phys. 57, 123 (1986)] on the same matter.
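
    The flavour of such a within-step correction can be conveyed with the standard one-dimensional Brownian-bridge result (a sketch of the general idea, not the paper's interpolation formula): for diffusion coefficient D and an absorbing boundary at x = a, the probability that the path touched the boundary during a step of length dt, given endpoints x0 and x1 on the allowed side, is exp(-(x0-a)(x1-a)/(D·dt)):

```python
# Probability that a 1-D Brownian path crossed an absorbing boundary at
# x = a during a time step dt, given the step's endpoints x0 and x1.
# This is the classical Brownian-bridge crossing probability, shown here
# to illustrate the kind of within-step correction discussed above.
import math

def crossing_probability(x0, x1, a, D, dt):
    if x0 <= a or x1 <= a:
        return 1.0                      # an endpoint is already absorbed
    return math.exp(-(x0 - a) * (x1 - a) / (D * dt))

print(crossing_probability(x0=0.5, x1=0.3, a=0.0, D=1.0, dt=0.01))
```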

  15. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    1984-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  16. Encounter Probability of Individual Wave Height

    DEFF Research Database (Denmark)

    Liu, Z.; Burcharth, H. F.

    1998-01-01

    … wave height corresponding to a certain exceedence probability within a structure lifetime (encounter probability), based on the statistical analysis of long-term extreme significant wave height. The design individual wave height is then calculated as the expected maximum individual wave height associated with the design significant wave height, under the assumption that individual wave heights follow the Rayleigh distribution. However, the exceedence probability of such a design individual wave height within the structure lifetime is unknown. The paper presents a method for the determination of the design individual wave height corresponding to an exceedence probability within the structure lifetime, given the long-term extreme significant wave height. The method can also be applied for estimation of the number of relatively large waves for fatigue analysis of constructions.
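
    Two of the ingredients involved have compact textbook forms, sketched below with illustrative numbers (these are the standard relations, not the paper's new method): the encounter probability of a T-year event within an L-year lifetime, and an approximation to the expected maximum of N Rayleigh-distributed individual wave heights:

```python
# Hedged sketch of two standard quantities: the encounter probability of a
# T-year significant wave height within an L-year structure lifetime, and
# an approximate expected maximum of N Rayleigh-distributed wave heights.
# All numbers are illustrative.
import math

def encounter_probability(T, L):
    """Probability that the T-year event is exceeded within L years."""
    return 1.0 - (1.0 - 1.0 / T) ** L

def expected_max_wave(Hs, N):
    """Approximate expected maximum of N Rayleigh-distributed wave heights."""
    return Hs * math.sqrt(math.log(N) / 2.0)

print(encounter_probability(T=100, L=50))   # about 0.39
print(expected_max_wave(Hs=8.0, N=1000))    # metres
```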

  17. A basic course in probability theory

    CERN Document Server

    Bhattacharya, Rabi

    2016-01-01

    This text develops the necessary background in probability theory underlying diverse treatments of stochastic processes and their wide-ranging applications. In this second edition, the text has been reorganized for didactic purposes, new exercises have been added and basic theory has been expanded. General Markov dependent sequences and their convergence to equilibrium is the subject of an entirely new chapter. The introduction of conditional expectation and conditional probability very early in the text maintains the pedagogic innovation of the first edition; conditional expectation is illustrated in detail in the context of an expanded treatment of martingales, the Markov property, and the strong Markov property. Weak convergence of probabilities on metric spaces and Brownian motion are two topics to highlight. A selection of large deviation and/or concentration inequalities ranging from those of Chebyshev, Cramer–Chernoff, Bahadur–Rao, to Hoeffding have been added, with illustrative comparisons of thei...

  18. A probability space for quantum models

    Science.gov (United States)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.

  19. Incorporation of passive components aging into PRAs

    International Nuclear Information System (INIS)

    Phillips, J.H.; Roesener, W.S.; Magleby, H.L.; Geidl, V.

    1993-01-01

    The probabilistic risk assessments being developed at most nuclear power plants to calculate the risk of core damage generally focus on the possible failure of active components. The possible failure of passive components is given little consideration. We are developing a method for selecting risk-significant passive components and including them in probabilistic risk assessments. We demonstrated the method by selecting a weld in the auxiliary feedwater system. The selection of this component was based on expert judgement of the likelihood of failure and on an estimate of the consequence of component failure to plant safety. We then used the PRAISE computer code to perform a probabilistic structural analysis to calculate the probability that crack growth due to aging would cause the weld to fail. The calculation included the effects of mechanical loads and thermal transients considered in the design and the effects of thermal cycling caused by a leaking check valve. We modified an existing probabilistic risk assessment (NUREG-1150 plant) to include the possible failure of the auxiliary feedwater weld, and then we used the weld failure probability as input to the modified probabilistic risk assessment to calculate the change in plant risk with time. The results showed that if the failure probability of the selected weld is high, the effect on plant risk is significant. However, this particular calculation showed a very low weld failure probability and no change in plant risk for the 48 years of service analyzed. The success of this demonstration shows that this method could be applied to nuclear power plants. (orig.)

  20. Evaluation of DNA match probability in criminal case.

    Science.gov (United States)

    Lee, J W; Lee, H S; Park, M; Hwang, J J

    2001-02-15

    The new emphasis on quantification of evidence has led to perplexing courtroom decisions, and it has been difficult for forensic scientists to pursue logical arguments. In particular, when evaluating DNA evidence, both the genetic relationship between the two compared persons and the examined locus system should be considered, yet this has not drawn much attention. In this paper, we suggest calculating the match probability using the coancestry coefficient when a family relationship is considered, and we compare the performance of the resulting identification values under various situations.
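
    A hedged sketch of what a coancestry-corrected calculation can look like is given below, using the widely cited Balding-Nichols (NRC II) single-locus match probabilities with coancestry coefficient θ; the allele frequencies and θ value are illustrative, and the paper's own formulation may differ:

```python
# Theta-corrected single-locus match probabilities (Balding-Nichols / NRC II
# recommendation 4.10), adjusting for coancestry within a subpopulation.
# Allele frequencies and theta are illustrative.
def match_prob_homozygote(p, theta):
    """P(random person is AA | suspect is AA) with coancestry theta."""
    return ((2 * theta + (1 - theta) * p) * (3 * theta + (1 - theta) * p)) / \
           ((1 + theta) * (1 + 2 * theta))

def match_prob_heterozygote(pa, pb, theta):
    """P(random person is AB | suspect is AB) with coancestry theta."""
    return 2 * (theta + (1 - theta) * pa) * (theta + (1 - theta) * pb) / \
           ((1 + theta) * (1 + 2 * theta))

print(match_prob_homozygote(p=0.1, theta=0.03))
print(match_prob_heterozygote(pa=0.1, pb=0.2, theta=0.03))
```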

  1. Estimation of the probability of success in petroleum exploration

    Science.gov (United States)

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum

  2. Nitrogen oxide emission calculation for post-Panamax container ships by using engine operation power probability as weighting factor: A slow-steaming case.

    Science.gov (United States)

    Cheng, Chih-Wen; Hua, Jian; Hwang, Daw-Shang

    2017-12-07

    In this study, the nitrogen oxide (NOx) emission factors and total NOx emissions of two groups of post-Panamax container ships operating on a long-term slow-steaming basis along Euro-Asian routes were calculated using both the probability density function of engine power levels and the NOx emission function. The main engines of the five sister ships in Group I satisfied the Tier I emission limit stipulated in MARPOL (International Convention for the Prevention of Pollution from Ships) Annex VI, and those in Group II satisfied the Tier II limit. The calculated NOx emission factors of the Group I and Group II ships were 14.73 and 17.85 g/kWh, respectively. The total NOx emissions of the Group II ships were determined to be 4.4% greater than those of the Group I ships. When the Tier II certification value was used to calculate the average total NOx emissions of the Group II engines, the result was lower than the actual value by 21.9%. Although fuel consumption and carbon dioxide (CO2) emissions increased by 1.76% because of slow steaming, the NOx emissions were markedly reduced by 17.2%. The proposed method is more effective and accurate than the NOx Technical Code 2008 and can be applied more appropriately to determine the NOx emissions of the international shipping inventory. Using the operating-power probability density function of the diesel engines as the weighting factor, together with the NOx emission function obtained from the test bed, gives a more accurate and practical calculation of NOx emissions. The proposed method is suitable for all types and purposes of diesel engines, irrespective of their operating power level, and should be considered in determining the carbon tax to be imposed in the future.
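
    The core of the method is a probability-weighted average of the test-bed emission function over the engine's long-term operating profile: EF = ∫ ef(P)·f(P) dP, where f(P) is the operating-power probability density. A minimal numerical sketch with illustrative placeholder functions (not the ships' measured data):

```python
# Probability-weighted NOx emission factor: combine an operating-power
# probability density f(P) with a test-bed emission function ef(P) in g/kWh.
# Both functions here are illustrative placeholders.
import numpy as np

power = np.linspace(0.10, 1.00, 91)            # fraction of MCR

# Illustrative slow-steaming operating profile, peaked near 35% MCR.
pdf = np.exp(-0.5 * ((power - 0.35) / 0.10) ** 2)
pdf /= np.trapz(pdf, power)                    # normalise to a density

# Illustrative test-bed NOx emission function (g/kWh) versus power level.
ef = 20.0 - 6.0 * power + 3.0 * power ** 2

weighted_ef = np.trapz(ef * pdf, power)        # probability-weighted factor
print(f"weighted NOx emission factor: {weighted_ef:.2f} g/kWh")
```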

  3. Failure probability of PWR reactor coolant loop piping

    International Nuclear Information System (INIS)

    Lo, T.; Woo, H.H.; Holman, G.S.; Chou, C.K.

    1984-02-01

    This paper describes the results of assessments performed on the PWR coolant loop piping of Westinghouse and Combustion Engineering plants. For direct double-ended guillotine break (DEGB), consideration was given to crack existence probability, initial crack size distribution, hydrostatic proof test, preservice inspection, leak detection probability, crack growth characteristics, and failure criteria based on the net section stress failure and tearing modulus stability concept. For indirect DEGB, fragilities of major component supports were estimated. The system level fragility was then calculated based on the Boolean expression involving these fragilities. Indirect DEGB due to seismic effects was calculated by convolving the system level fragility and the seismic hazard curve. The results indicate that the probability of occurrence of both direct and indirect DEGB is extremely small, thus, postulation of DEGB in design should be eliminated and replaced by more realistic criteria
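
    The indirect-DEGB step, convolving a system-level fragility with the seismic hazard curve, amounts to evaluating an annual failure frequency of the form ∫ P_fail(a)·(-dH/da) da, where H(a) is the annual frequency of exceeding ground acceleration a. A hedged numerical sketch with illustrative hazard and fragility parameters (not the plant-specific curves):

```python
# Convolving a lognormal system fragility with a seismic hazard curve to
# obtain an annual failure frequency. Hazard and fragility parameters are
# illustrative placeholders, not the Westinghouse/CE plant data.
import numpy as np
from scipy import stats

a = np.linspace(0.01, 3.0, 1000)              # peak ground acceleration (g)

# Illustrative hazard curve: annual frequency of exceeding acceleration a.
H = np.clip(1.0e-3 * (0.1 / a) ** 2.0, 0.0, 1.0)

# Lognormal fragility: median capacity 1.2 g, log-standard deviation 0.4.
P_fail = stats.lognorm.cdf(a, s=0.4, scale=1.2)

# Annual failure frequency = integral of P_fail(a) * (-dH/da) da.
freq = np.trapz(P_fail * (-np.gradient(H, a)), a)
print(f"annual probability of seismically induced failure: {freq:.2e}")
```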

  4. Probability of primordial black hole pair creation in a modified gravitational theory

    International Nuclear Information System (INIS)

    Paul, B. C.; Paul, Dilip

    2006-01-01

    We compute the probability for quantum creation of an inflationary universe with and without a pair of black holes in a modified gravity. The action of the modified theory of gravity contains αR² and δR⁻¹ terms in addition to a cosmological constant (Λ) in the Einstein-Hilbert action. The probabilities for the creation of a universe with a pair of black holes have been evaluated considering two different kinds of spatial sections, one which accommodates a pair of black holes and the other without a black hole. We adopt a technique prescribed by Bousso and Hawking to calculate the above creation probability in a semiclassical approximation using the Hartle-Hawking boundary condition. We note a class of new and physically interesting instanton solutions characterized by the parameters in the action. These instantons may play an important role in the creation of the early universe. We also note that the probability of creation of a universe with a pair of black holes is strongly suppressed with a positive cosmological constant when δ = 4Λ²/3 for α > 0, but it is more probable for α < -1/(6Λ). In the modified gravity considered here, instanton solutions are permitted even without a cosmological constant when one begins with a negative δ.

  5. Wire system aging assessment and condition monitoring (WASCO)

    International Nuclear Information System (INIS)

    Fantoni, P.F.

    2007-04-01

    Nuclear facilities rely on electrical wire systems to perform a variety of functions for successful operation. Many of these functions directly support the safe operation of the facility; therefore, the continued reliability of wire systems, even as they age, is critical. Condition Monitoring (CM) of installed wire systems is an important part of any aging program, both during the first 40 years of the qualified life and even more in anticipation of the license renewal for a nuclear power plant. This report contains some test results of a method for wire system condition monitoring, developed at the Halden Reactor Project, called LIRA (LIne Resonance Analysis), which can be used on-line to detect any local or global changes in the cable electrical parameters as a consequence of insulation faults or degradation. (au)

  6. The Dynamic contribution of chronic conditions to temporal trends in disability among U.S. adults.

    Science.gov (United States)

    Lin, Shih-Fan; Beck, Audrey N; Finch, Brian K

    2016-04-01

    Although evidence has shown that U.S. late-life disability has been declining, studies have also suggested that chronic diseases increased between 1984 and 2007. To further illuminate these potentially contradictory trends, we explicate how the contribution of chronic conditions changes across four common types of disability (ADL, IADL, mobility disability, and functional limitations) by age (A), period (P), and birth cohort (C) among adults aged 20 and above. Our data came from seven cross-sectional waves of the National Health and Nutrition Examination Survey (NHANES). We utilize a cross-classified random effects model (CCREM) to simultaneously estimate age, period, and cohort trends for each disability. Each chronic condition was added to our base models (sociodemographics only), first sequentially and then simultaneously. Reductions in predicted probability from the base model were then calculated for each chronic condition along each temporal dimension (A/P/C) to assess its contribution. There was an increasing age-based contribution of chronic conditions to all disabilities. The period-based contribution remained quite stagnant across years, while cohort-based contributions showed a continual decline for recent cohorts. Arthritis showed the greatest contribution to disability of all types, followed by obesity. Cancer was the least important contributor to disabilities. Although chronic conditions are becoming less disabling across recent cohorts, other competing risk factors might suggest prevailing causes of disability. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current US innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery

  8. Handwriting, Visuomotor Integration, and Neurological Condition at School Age

    Science.gov (United States)

    Van Hoorn, Jessika F.; Maathuis, Carel G. B.; Peters, Lieke H. J.; Hadders-Algra, Mijna

    2010-01-01

    Aim: The study investigated the relationships between handwriting, visuomotor integration, and neurological condition. We paid particular attention to the presence of minor neurological dysfunction (MND). Method: Participants were 200 children (131 males, 69 females; age range 8-13y) of whom 118 received mainstream education (mean age 10y 5mo, SD…

  9. Transition probabilities between levels of K and K+

    International Nuclear Information System (INIS)

    Campos Gutierrez, J.; Martin Vicente, A.

    1984-01-01

    In this work, transition probabilities between levels with n < 11 for K and for the known levels of K+ are calculated. Two computer programs, based on the Coulomb approximation and the most suitable coupling schemes, have been used. Lifetimes of all these levels are also calculated. (Author)

  10. INLUX-DBR - A calculation code to calculate indoor natural illuminance inside buildings under various sky conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, V.; Igawa, N.; Marinelli, V. [Mechanical Engineering Department, University of Calabria, 87036 Arcavacata di Rende (CS) (Italy)

    2010-09-15

    A calculation code, named INLUX-DBR, is presented, which is a modified version of INLUX code, able to predict the illuminance distribution on the inside surfaces of a room with six walls and a window, and on the work plane. At each desired instant the code solves the system of the illuminance equations of each surface element, characterized by the latter's reflection coefficient and its view factors toward the other elements. In the model implemented in the code, the sky-diffuse luminance distribution, the sun beam light and the light reflected from the ground toward the room are considered. The code was validated by comparing the calculated values of illuminance with the experimental values measured inside a scale model (1:5) of a building room, in various sky conditions of overcast, clear and intermediate days. The validation is performed using the sky luminance data measured by a sky scanner and the measured beam illuminance of the sun as input data. A comparative analysis of some of the well-known calculation models of sky luminance, namely Perez, Igawa and CIE models was also carried out, comparing the code predictions and the measured values of inside illuminance in the scale model. (author)

  11. Calculation of local flow conditions in the lower core of a PWR with code-Saturne

    International Nuclear Information System (INIS)

    Fournier, Y.

    2003-01-01

    In order to better understand the stresses to which fuel rods are subjected, we need to improve our knowledge of the fluid flow inside the core. A code specialized for calculations in tube bundles is used to calculate the flow inside the whole core, with a resolution at the assembly level. Still, it is necessary to obtain realistic entry conditions, and these depend on the flow in the downcomer and lower plenum. Also, the flow in the first stages of the core features 4 incoming jets per assembly and requires a resolution much finer than that used for the whole-core calculation. A series of calculations are thus run with our incompressible Navier-Stokes solver, Code-Saturne, using a classical RANS turbulence model. The first calculations involve a detailed geometry, including part of the cold legs, downcomer, lower plenum, and lower core of a pressurized water reactor. The level of detail includes most obstacles below the core. The lower core plate, being pierced with close to 800 holes, cannot be realistically represented within a practical mesh size, so a head loss model is used. The lower core itself, requiring even more detail, is also represented with head losses. We make full use of Code-Saturne's non-conforming mesh capabilities to represent a complex geometry, being careful to retain a good mesh quality. Starting just under the lower core, the mesh is aligned with the fuel rod assemblies, so that different types of assemblies can be represented through different head loss coefficients. These calculations yield steady-state or near steady-state results, which are compared to experimental data, and should be sufficient to yield realistic entry conditions for full-core calculations at assembly-width resolution, and beyond those, mechanical strain calculations. We are also interested in more detailed flow conditions and fluctuations in the lower core area, so as to better quantify vibrational input. This requires a much higher resolution, which we limit

  12. Monte Carlo simulation of the sequential probability ratio test for radiation monitoring

    International Nuclear Information System (INIS)

    Coop, K.L.

    1984-01-01

    A computer program simulates the Sequential Probability Ratio Test (SPRT) using Monte Carlo techniques. The program, SEQTEST, performs random-number sampling from either a Poisson or a normal distribution to simulate radiation monitoring data. The results are expressed in terms of the detection probabilities and the average time required for a trial. The computed SPRT results can be compared with tabulated single interval test (SIT) values to determine the better statistical test for particular monitoring applications. Use of the SPRT in a hand-and-foot alpha monitor shows that the SPRT provides better detection probabilities while generally requiring less counting time. Calculations are also performed for a monitor where the SPRT is not permitted to take longer than the single interval test. Although the performance of the SPRT is degraded by this restriction, the detection probabilities are still similar to the SIT values, and average counting times are always less than 75% of the SIT time. Some optimal conditions for use of the SPRT are described. The SPRT should be the test of choice in many radiation monitoring situations. 6 references, 8 figures, 1 table
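
    For readers unfamiliar with the test itself, here is a minimal sketch of Wald's SPRT for Poisson count data of the kind SEQTEST simulates; the count rates, error rates, and simulated truth are illustrative, and this is not the SEQTEST program:

```python
# Wald's sequential probability ratio test for Poisson counts: accumulate
# the log-likelihood ratio one counting interval at a time and stop at the
# first threshold crossing. Rates and error probabilities are illustrative.
import math, random

bkg, src = 5.0, 9.0            # mean counts per interval under H0 / H1
alpha, beta = 0.05, 0.05       # false-alarm and missed-detection rates
A = math.log((1 - beta) / alpha)        # upper (alarm) threshold
B = math.log(beta / (1 - alpha))        # lower (clean) threshold

random.seed(1)
llr, n = 0.0, 0
true_mean = 9.0                # simulate a contaminated subject

while B < llr < A:
    # Draw one Poisson count by inverse-transform sampling.
    u, k, p, cum = random.random(), 0, math.exp(-true_mean), 0.0
    while True:
        cum += p
        if u <= cum:
            break
        k += 1
        p *= true_mean / k
    # Poisson log-likelihood ratio contribution of this observation.
    llr += k * math.log(src / bkg) - (src - bkg)
    n += 1

print("alarm" if llr >= A else "clean", "after", n, "intervals")
```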

  13. Application of escape probability to line transfer in laser-produced plasmas

    International Nuclear Information System (INIS)

    Lee, Y.T.; London, R.A.; Zimmerman, G.B.; Haglestein, P.L.

    1989-01-01

    In this paper the authors apply the escape probability method to the transfer of optically thick lines in laser-produced plasmas in plane-parallel geometry. They investigate the effect of self-absorption on the ionization balance and on the ion level populations. In addition, they calculate its effect on the laser gains in an exploding foil target heated by an optical laser. Due to the large ion streaming motion in laser-produced plasmas, absorption of an emitted photon occurs only over the length in which the Doppler shift is equal to the line width. They find that the escape probability calculated with the Doppler shift is larger than the escape probability for a static plasma. Therefore, the ion streaming motion contributes significantly to the line transfer process in laser-produced plasmas. As examples, they have applied the escape probability method to the transfer of optically thick lines in both ablating slab and exploding foil targets under irradiation by a high-power optical laser.

  14. On the failure probability of the primary piping of the PWR

    International Nuclear Information System (INIS)

    Schueller, G.I.; Hampl, N.C.

    1984-01-01

    A methodology for quantifying the structural reliability of the primary piping (PP) of a PWR under operational and accidental conditions is developed, with Biblis B as the reference plant. The PP structure is modeled utilizing finite element procedures. Given the properties of the operational and internal accidental conditions, a static analysis suffices; however, a dynamic analysis considering non-linear effects of the soil-structure interaction must be used to determine load effects due to earthquake-induced loading. Realistically accounting for the presence of initial cracks in welds and for the annual frequencies of occurrence of the various loading conditions, a crack propagation calculation utilizing the Forman model is carried out. Simultaneously, leak and break probabilities are computed using the 'Two Criteria' approach. A Monte Carlo simulation procedure is used as the method of solution. (Author)
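
    As a reminder of the crack-growth model involved, the Forman law relates growth per cycle to the stress intensity range: da/dN = C·(ΔK)^n / ((1-R)·Kc - ΔK). A minimal sketch with illustrative material constants and a simple through-crack geometry (not the Biblis B analysis):

```python
# Cycle-by-cycle integration of the Forman crack-growth law for a simple
# through-crack under a constant stress range. Material constants, loading,
# and geometry factor are illustrative placeholders.
import math

C, n_exp, Kc, R = 3.0e-9, 2.5, 150.0, 0.1   # illustrative Forman constants
sigma = 80.0                                 # stress range (MPa)

a = 2.0e-3                                   # initial crack size (m)
for cycle in range(200_000):
    dK = sigma * math.sqrt(math.pi * a)      # MPa*sqrt(m), simple geometry
    da = C * dK**n_exp / ((1 - R) * Kc - dK) # Forman growth per cycle
    a += da
print(f"crack size after 2e5 cycles: {a * 1e3:.2f} mm")
```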

  15. The estimation of collision probabilities in complicated geometries

    International Nuclear Information System (INIS)

    Roth, M.J.

    1969-04-01

    This paper demonstrates how collision probabilities in complicated geometries may be estimated. It is assumed that the reactor core may be divided into a number of cells each with simple geometry so that a collision probability matrix can be calculated for each cell by standard methods. It is then shown how these may be joined together. (author)

  16. Transformer ageing modern condition monitoring techniques and their interpretations

    CERN Document Server

    Purkait, Prithwiraj

    2017-01-01

    This book is a one-stop guide to state-of-the-art research in transformer ageing, condition monitoring and diagnosis. It is backed by rigorous research projects supported by the Australian Research Council in collaboration with several transmission and distribution companies. Many of the diagnostic techniques and tools developed in these projects have been applied by electricity utilities and would appeal to both researchers and practicing engineers. Important topics covered in this book include transformer insulation materials and their ageing behaviour, transformer condition monitoring techniques and detailed diagnostic techniques and their interpretation schemes. It also features a monitoring framework for smart transformers as well as a chapter on biodegradable oil.

  17. The extinction probability in systems randomly varying in time

    Directory of Open Access Journals (Sweden)

    Imre Pázsit

    2017-09-01

    Full Text Available The extinction probability of a branching process (a neutron chain in a multiplying medium is calculated for a system randomly varying in time. The evolution of the first two moments of such a process was calculated previously by the authors in a system randomly shifting between two states of different multiplication properties. The same model is used here for the investigation of the extinction probability. It is seen that the determination of the extinction probability is significantly more complicated than that of the moments, and it can only be achieved by pure numerical methods. The numerical results indicate that for systems fluctuating between two subcritical or two supercritical states, the extinction probability behaves as expected, but for systems fluctuating between a supercritical and a subcritical state, there is a crucial and unexpected deviation from the predicted behaviour. The results bear some significance not only for neutron chains in a multiplying medium, but also for the evolution of biological populations in a time-varying environment.
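
    For a time-homogeneous branching process, the starting point of such calculations is classical: the extinction probability is the smallest non-negative root of q = g(q), where g is the generating function of the offspring (secondary neutron) number, and it can be found by fixed-point iteration. A sketch with an illustrative offspring distribution (the paper's randomly varying system requires numerics beyond this):

```python
# Extinction probability of a Galton-Watson branching process: iterate
# q <- g(q) from q = 0, which converges to the smallest root of q = g(q).
# The offspring distribution below is illustrative.
def extinction_probability(pk, tol=1e-12):
    """pk[k] = probability of k offspring; returns smallest root of q = g(q)."""
    q = 0.0
    while True:
        q_new = sum(p * q**k for k, p in enumerate(pk))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

# Slightly supercritical example: mean offspring = 0.40 + 2*0.35 = 1.10 > 1.
pk = [0.25, 0.40, 0.35]
print(extinction_probability(pk))   # about 0.714, the smaller root
```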

  18. A developmental study of risky decisions on the cake gambling task: age and gender analyses of probability estimation and reward evaluation.

    Science.gov (United States)

    Van Leijenhorst, Linda; Westenberg, P Michiel; Crone, Eveline A

    2008-01-01

    Decision making, or the process of choosing between competing courses of action, is highly sensitive to age-related change, showing development throughout adolescence. In this study, we tested whether the development of decision making under risk is related to changes in risk-estimation abilities. Participants (N = 93) between ages 8 and 30 performed a child-friendly gambling task, the Cake Gambling task, which was inspired by the Cambridge Gambling Task (Rogers et al., 1999), previously shown to be sensitive to orbitofrontal cortex (OFC) damage. The task allowed comparisons of the contributions to risk perception of (1) the ability to estimate probabilities and (2) the ability to evaluate rewards. Adult performance patterns were highly similar to those found in previous reports, showing increased risk taking with increases in the probability of winning and the magnitude of potential reward. Behavioral patterns in children and adolescents did not differ from adult patterns, showing a similar ability for probability estimation and reward evaluation. These data suggest that participants 8 years and older perform like adults in a gambling task, previously shown to depend on the OFC, in which all the information needed to make an advantageous decision is given on each trial and no information needs to be inferred from previous behavior. Interestingly, at all ages, females were more risk-averse than males. These results suggest that the increase in real-life risky behavior seen in adolescence is not a consequence of changes in risk perception abilities. The findings are discussed in relation to theories about the protracted development of the prefrontal cortex.

  19. Transition probabilities of some Si II lines obtained by laser produced plasma emission

    International Nuclear Information System (INIS)

    Blanco, F.; Botho, B.; Campos, J.

    1995-01-01

    The absolute transition probabilities for 28 Si II spectral lines have been determined by measuring emission line intensities from laser-produced plasmas of Si in Ar and Kr atmospheres. The studied plasma has a temperature of about 2×10⁴ K and an electron density of 10¹⁷ cm⁻³. The local thermodynamic equilibrium conditions and plasma homogeneity have been checked. The results are compared with the available experimental and theoretical data and with present Hartree-Fock calculations in LS coupling. (orig.)

  20. Neighborhood Conditions and Psychosocial Outcomes Among Middle-Aged African Americans.

    Science.gov (United States)

    Tabet, Maya; Sanders, Erin A; Schootman, Mario; Chang, Jen Jen; Wolinsky, Fredric D; Malmstrom, Theodore K; Miller, Douglas K

    2017-04-01

    We examined associations between observed neighborhood conditions (good/adverse) and psychosocial outcomes (stress, depressive symptoms, resilience, and sense of control) among middle-aged and older African Americans. The sample included 455 middle-aged and older African Americans examined in Wave 10 of the African American Health (AAH) study. Linear regression was adjusted for attrition, self-selection into neighborhoods, and potential confounders, and stratified by the duration at current address (less than 5 years vs. 5 years or more). Among longer-term residents, adverse neighborhood conditions were associated with lower stress (standardized β = -0.18; P = .002) and fewer depressive symptoms (standardized β = -0.12; P = .048). Among those who lived at their current address for less than 5 years, neighborhood conditions were not significantly associated with stress (standardized β = 0.18; P = .305) or depressive symptoms (standardized β = 0.36; P = .080). Neighborhood conditions appear to have significant, complex associations with psychosocial factors among middle-aged and older African Americans. This holds important policy implications, especially since adverse neighborhood conditions may still result in adverse physical health outcomes in individuals with >5 years at current residence despite being associated with better psychosocial outcomes.

  1. The probability of containment failure by direct containment heating in zion

    International Nuclear Information System (INIS)

    Pilch, M.M.; Yan, H.; Theofanous, T.G.

    1994-01-01

    This report is the first step in the resolution of the Direct Containment Heating (DCH) issue for the Zion Nuclear Power Plant using the Risk Oriented Accident Analysis Methodology (ROAAM). This report includes the definition of a probabilistic framework that decomposes the DCH problem into three probability density functions that reflect the most uncertain initial conditions (UO2 mass, zirconium oxidation fraction, and steel mass). Uncertainties in the initial conditions are significant, but the quantification approach is based on establishing reasonable bounds that are not unnecessarily conservative. To this end, the authors also make use of the ROAAM ideas of enveloping scenarios and "splintering". Two causal relations (CRs) are used in this framework: CR1 is a model that calculates the peak pressure in the containment as a function of the initial conditions, and CR2 is a model that returns the frequency of containment failure as a function of pressure within the containment. Uncertainty in CR1 is accounted for by the use of two independently developed phenomenological models, the Convection Limited Containment Heating (CLCH) model and the Two-Cell Equilibrium (TCE) model, and by probabilistically distributing the key parameter in both, which is the ratio of the melt entrainment time to the system blowdown time constant. The two phenomenological models have been compared with an extensive data base, including recent integral simulations at two different physical scales (1/10th scale in the Surtsey facility at Sandia National Laboratories and 1/40th scale in the COREXIT facility at Argonne National Laboratory). The loads predicted by these models were significantly lower than those from previous parametric calculations. The containment load distributions do not intersect the containment strength curve in any significant way, resulting in containment failure probabilities less than 10⁻³ for all scenarios considered
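
    The structure of the framework (sample uncertain initial conditions, push them through a load model, compare against a strength distribution) can be sketched schematically; all the distributions and the load expression below are illustrative placeholders, not the CLCH/TCE models or Zion data:

```python
# Schematic Monte-Carlo version of a load-vs-strength framework: sample the
# uncertain initial conditions, evaluate a stand-in load model (in place of
# CR1), and compare with a sampled containment strength (in place of CR2).
# Every distribution and the load formula are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

melt_mass = rng.triangular(20e3, 50e3, 80e3, n)     # UO2 mass (kg)
zr_oxid = rng.uniform(0.2, 0.8, n)                  # Zr oxidation fraction
entrain_ratio = rng.lognormal(0.0, 0.5, n)          # entrainment/blowdown ratio

# Illustrative load model: peak pressure (MPa) rises with mass and oxidation
# and falls as the entrainment-to-blowdown time ratio grows.
load = 0.25 + 4e-6 * melt_mass * (1 + zr_oxid) / (1 + entrain_ratio)

strength = rng.normal(1.0, 0.1, n)                  # containment capacity (MPa)
print("failure probability estimate:", np.mean(load > strength))
```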

  2. Elements of probability and statistics an introduction to probability with De Finetti’s approach and to Bayesian statistics

    CERN Document Server

    Biagini, Francesca

    2016-01-01

    This book provides an introduction to elementary probability and to Bayesian statistics using de Finetti's subjectivist approach. One of the features of this approach is that it does not require the introduction of sample space – a non-intrinsic concept that makes the treatment of elementary probability unnecessarily complicated – but introduces as fundamental the concept of random numbers directly related to their interpretation in applications. Events become a particular case of random numbers, and probability a particular case of expectation when applied to events. The subjective evaluation of expectation and of conditional expectation is based on an economic choice of an acceptable bet or penalty. The properties of expectation and conditional expectation are derived by applying a coherence criterion that the evaluation has to follow. The book is suitable for all introductory courses in probability and statistics for students in Mathematics, Informatics, Engineering, and Physics.

  3. Nuclear data uncertainties: I, Basic concepts of probability

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.

    1988-12-01

    Some basic concepts of probability theory are presented from a nuclear-data perspective, in order to provide a foundation for thorough understanding of the role of uncertainties in nuclear data research. Topics included in this report are: events, event spaces, calculus of events, randomness, random variables, random-variable distributions, intuitive and axiomatic probability, calculus of probability, conditional probability and independence, probability distributions, binomial and multinomial probability, Poisson and interval probability, normal probability, the relationships existing between these probability laws, and Bayes' theorem. This treatment emphasizes the practical application of basic mathematical concepts to nuclear data research, and it includes numerous simple examples. 34 refs.

  4. Nuclear data uncertainties: I, Basic concepts of probability

    International Nuclear Information System (INIS)

    Smith, D.L.

    1988-12-01

    Some basic concepts of probability theory are presented from a nuclear-data perspective, in order to provide a foundation for thorough understanding of the role of uncertainties in nuclear data research. Topics included in this report are: events, event spaces, calculus of events, randomness, random variables, random-variable distributions, intuitive and axiomatic probability, calculus of probability, conditional probability and independence, probability distributions, binomial and multinomial probability, Poisson and interval probability, normal probability, the relationships existing between these probability laws, and Bayes' theorem. This treatment emphasizes the practical application of basic mathematical concepts to nuclear data research, and it includes numerous simple examples. 34 refs

  5. An analytical calculation of neighbourhood order probabilities for high dimensional Poissonian processes and mean field models

    International Nuclear Information System (INIS)

    Tercariol, Cesar Augusto Sangaletti; Kiipper, Felipe de Moura; Martinez, Alexandre Souto

    2007-01-01

    Consider that the coordinates of N points are randomly generated along the edges of a d-dimensional hypercube (random point problem). The probability P^(d,N)_(m,n) that an arbitrary point is the mth nearest neighbour to its own nth nearest neighbour (Cox probabilities) plays an important role in spatial statistics. Also, it has been useful in the description of physical processes in disordered media. Here we propose a simpler derivation of the Cox probabilities, where we stress the role played by the system dimensionality d. In the limit d → ∞, the distances between pairs of points become independent (random link model) and closed analytical forms for the neighbourhood probabilities are obtained, both in the thermodynamic limit and for finite-size systems. Breaking the distance symmetry constraint drives us to the random map model, for which the Cox probabilities are obtained for two cases: whether a point is its own nearest neighbour or not

  6. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that given the uncertainties of age dating that landslide complexes can be treated as single events by performing statistical test of age dates representing the main failure episode of the Holocene Storegga landslide complex.
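
    The Poisson-Gamma conjugate model makes the resulting landslide probability available in closed form. A hedged sketch, with an illustrative prior and illustrative event counts rather than the Santa Barbara Channel or Port Valdez data:

```python
# Poisson-Gamma conjugate model for an event rate: with a Gamma(alpha, beta)
# prior on lambda and n dated events in a window of T years, the posterior
# is Gamma(alpha + n, beta + T), and the posterior predictive probability of
# at least one event in the next t years is available in closed form.
# All numbers are illustrative.
def landslide_probability(n_events, T_window, t_future, alpha=1.0, beta=1.0):
    a_post = alpha + n_events           # posterior shape
    b_post = beta + T_window            # posterior rate (per year)
    # P(no event in t) = E[exp(-lambda * t)] under the Gamma posterior.
    p_none = (b_post / (b_post + t_future)) ** a_post
    return 1.0 - p_none

# 4 landslides imaged over ~10,000 years; chance of one in the next century:
print(landslide_probability(n_events=4, T_window=10_000, t_future=100))
```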

  7. Sound Propagation Around Off-Shore Wind Turbines. Long-Range Parabolic Equation Calculations for Baltic Sea Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, Lisa

    2003-07-01

    Low-frequency, long-range sound propagation over a sea surface has been calculated using a wide-angle Crank-Nicolson parabolic equation method. The model is developed to investigate noise from off-shore wind turbines. The calculations are made using normal meteorological conditions of the Baltic Sea. Special consideration has been given to a wind phenomenon called the low-level jet, with strong winds at rather low altitude. The effects of water waves on sound propagation have been incorporated in the ground boundary condition using a boss model. This way of including roughness in sound propagation models is valid for water wave heights that are small compared with the wavelength of the sound. Nevertheless, since only low-frequency sound is considered, waves up to the mean wave height of the Baltic Sea can be included in this manner. The calculation model has been tested against benchmark cases and agrees well with measurements. The calculations show that channelling of sound occurs in downwind conditions and that the sound propagation tends towards cylindrical spreading. The effects of the water waves are found to be fairly small.

  8. Internal Medicine residents use heuristics to estimate disease probability.

    Science.gov (United States)

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) using a representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) using anchoring with adjustment heuristic, by providing a high or low anchor for the target condition. When presented with additional non-discriminating data the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing.

  9. The General Necessary Condition for the Validity of Dirac's Transition Perturbation Theory

    Science.gov (United States)

    Quang, Nguyen Vinh

    1996-01-01

    For the first time, the general necessary condition for the validity of Dirac's method is explicitly established from the natural requirements of the successive approximation. It is proved that the concept of 'the transition probability per unit time' is not valid. The 'super-platinum rules' for calculating the transition probability are derived for the case of an arbitrarily strong time-independent perturbation.

  10. Fixation probability on clique-based graphs

    Science.gov (United States)

    Choi, Jeong-Ok; Yu, Unjong

    2018-02-01

    The fixation probability of a mutant in the evolutionary dynamics of the Moran process is calculated by the Monte Carlo method on a few families of clique-based graphs. It is shown that complete suppression of fixation can be realized with the generalized clique-wheel graph in the limit of small wheel-clique ratio and infinite size. The family of clique-star graphs is an amplifier, and the clique-arms graph changes from amplifier to suppressor as the fitness of the mutant increases. We demonstrate that the overall structure of a graph can be more important in determining the fixation probability than the degree or the heat heterogeneity. The dependence of the fixation probability on the position of the first mutant is discussed.
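
    A minimal Monte Carlo estimate of this kind of fixation probability can be sketched for the Moran process on a complete graph (a toy stand-in for the clique-based graphs of the paper), where the exact answer (1 - 1/r)/(1 - 1/r^N) is available for checking.

        # Monte Carlo fixation probability of a single mutant of fitness r
        # in a Moran process on a complete graph of N individuals. The
        # complete graph is a stand-in chosen because the closed form
        # (1 - 1/r) / (1 - 1/r**N) allows a correctness check.
        import random

        def moran_fixation(N: int, r: float, trials: int = 20000) -> float:
            fixed = 0
            for _ in range(trials):
                m = 1                      # current number of mutants
                while 0 < m < N:
                    total = r * m + (N - m)
                    birth_mut = random.random() < r * m / total  # fitness-biased birth
                    death_mut = random.random() < m / N          # uniform death
                    m += birth_mut - death_mut
                fixed += (m == N)
            return fixed / trials

        N, r = 20, 1.1
        exact = (1 - 1 / r) / (1 - 1 / r**N)
        print(f"simulated: {moran_fixation(N, r):.4f}  exact: {exact:.4f}")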

  11. Probability: A Matter of Life and Death

    Science.gov (United States)

    Hassani, Mehdi; Kippen, Rebecca; Mills, Terence

    2016-01-01

    Life tables are mathematical tables that document probabilities of dying and life expectancies at different ages in a society. Thus, the life table contains some essential features of the health of a population. Probability is often regarded as a difficult branch of mathematics. Life tables provide an interesting approach to introducing concepts…
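
    A toy version of the calculation a life table supports is shown below: age-specific death probabilities q_x are turned into a survival curve and a crude life expectancy. The q_x values are invented, not taken from any real population.

        # From death probabilities q_x (per 10-year interval) to survivors
        # l_x and a crude life expectancy. The q_x values are invented.
        qx = [0.01, 0.01, 0.02, 0.03, 0.06, 0.12, 0.25, 0.50, 1.00]
        step = 10  # years per age interval

        lx = [1.0]                       # probability of surviving to each age
        for q in qx:
            lx.append(lx[-1] * (1.0 - q))

        # Person-years lived, assuming deaths occur mid-interval on average
        e0 = sum(step * (a + b) / 2.0 for a, b in zip(lx[:-1], lx[1:]))
        print(f"P(survive to age 50) = {lx[5]:.3f}")
        print(f"life expectancy at birth ~ {e0:.1f} years")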

  12. Fixation Probabilities of Evolutionary Graphs Based on the Positions of New Appearing Mutants

    Directory of Open Access Journals (Sweden)

    Pei-ai Zhang

    2014-01-01

    Full Text Available Evolutionary graph theory is a convenient framework for implementing evolutionary dynamics on spatially structured populations. Calculating the fixation probability is usually treated as a Markov chain process, which is affected by the number of individuals, the fitness of the mutant, the game strategy, and the structure of the population. The position of the new mutant, however, also matters to its fixation probability, and it is the emphasis here. A method is put forward to calculate the fixation probability of an evolutionary graph (EG) of a single level. Then, for a class of bilevel EGs, the fixation probabilities are calculated and some propositions are discussed. The conclusion is that the bilevel EG is more stable than the corresponding one-rooted EG.

  13. Bayesian maximum posterior probability method for interpreting plutonium urinalysis data

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.

    1996-01-01

    A new internal dosimetry code for interpreting urinalysis data in terms of radionuclide intakes is described for the case of plutonium. The mathematical method is to maximise the Bayesian posterior probability using an entropy function as the prior probability distribution. A software package (MEMSYS) developed for image reconstruction is used. Some advantages of the new code are that it ensures positive calculated doses, smooths out fluctuating data, and provides an estimate of the propagated uncertainty in the calculated doses. (author)

  14. The semi-Markov process. Generalizations and calculation rules for application in the analysis of systems

    International Nuclear Information System (INIS)

    Hirschmann, H.

    1983-06-01

    The consequences of the basic assumptions of the semi-Markov process, as defined from a homogeneous renewal process with a stationary Markov condition, are reviewed. The notion of the semi-Markov process is generalized by its redefinition from a nonstationary Markov renewal process. For both the nongeneralized and the generalized case, a representation of the first-order conditional state probabilities is derived in terms of the transition probabilities of the Markov renewal process. Some useful calculation rules (regeneration rules) are derived for the conditional state probabilities of the semi-Markov process. Compared to the semi-Markov process in its usual definition, the generalized process allows the analysis of a larger class of systems. For instance, systems with arbitrarily distributed lifetimes of their components can be described. It is also possible to describe systems that are modified over time by forces or manipulations from outside. (Auth.)

  15. Methodology of external exposure calculation for reuse of conditional released materials from decommissioning - 59138

    International Nuclear Information System (INIS)

    Ondra, Frantisek; Vasko, Marek; Necas, Vladimir

    2012-01-01

    The article presents a methodology for external exposure calculation for the reuse of conditionally released materials from decommissioning, using the VISIPLAN 3D ALARA planning tool. The production of rails has been used as an example application of the proposed methodology within the CONRELMAT project. The article presents a methodology for the determination of radiological, material, organizational and other conditions for the reuse of conditionally released materials, to ensure that worker and public exposure does not breach the exposure limits during the scenario's life cycle (preparation, construction and operation). The methodology comprises a proposal of the following conditions with respect to worker and public exposure: - limit radionuclide concentrations of conditionally released materials for specific scenarios and nuclide vectors, - specific deployment of conditionally released materials and, where needed, shielding materials, workers and the public during the scenario's life cycle, - organizational measures concerning the time workers or the public spend in the vicinity of conditionally released materials for the individual scenarios and nuclide vectors considered. The above-mentioned steps of the proposed methodology have been applied within the CONRELMAT project. Exposure evaluation of workers for rail production is introduced in the article as an example of this application. Exposure calculation using the VISIPLAN 3D ALARA planning tool was done for several models. The most exposed profession for the scenario was identified. On the basis of this result, an increase of the radionuclide concentration in conditionally released material by more than a factor of two, to 681 Bq/kg, was proposed with no additional safety or organizational measures applied. After application of the proposed safety and organizational measures (additional shielding, geometry changes and limitation of work duration) it is possible to increase the radionuclide concentration in conditionally released material by more than a factor of ten, to 3092 Bq/kg. Storage

  16. Paraconsistent Probabilities: Consistency, Contradictions and Bayes’ Theorem

    Directory of Open Access Journals (Sweden)

    Juliana Bueno-Soler

    2016-09-01

    Full Text Available This paper represents the first steps towards constructing a paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs. We show that LFIs encode very naturally an extension of the notion of probability able to express sophisticated probabilistic reasoning under contradictions employing appropriate notions of conditional probability and paraconsistent updating, via a version of Bayes’ theorem for conditionalization. We argue that the dissimilarity between the notions of inconsistency and contradiction, one of the pillars of LFIs, plays a central role in our extended notion of probability. Some critical historical and conceptual points about probability theory are also reviewed.

  17. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current U.S. innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery. (orig.)

  18. Application of uncertainty analysis method for calculations of accident conditions for RP AES-2006

    International Nuclear Information System (INIS)

    Zajtsev, S.I.; Bykov, M.A.; Zakutaev, M.O.; Siryapin, V.N.; Petkevich, I.G.; Siryapin, N.V.; Borisov, S.L.; Kozlachkov, A.N.

    2015-01-01

    An analysis of some accidents using uncertainty assessment methods is given. The list of varied parameters incorporated the model parameters of the computer codes, the initial and boundary conditions of the reactor plant, and the neutronics data. On the basis of the accident calculations performed using the statistical method, an assessment of the errors in the determination of the main parameters compared with the acceptance criteria is presented. It was shown that, in the investigated accidents, the calculated parameter values, with allowance for their errors, obtained from the TRAP-KS and KORSAR/GP codes do not exceed the established acceptance criteria. Moreover, these values do not exceed the values obtained in the conservative calculations. The feasibility in principle of applying the uncertainty estimation method to justify the safety of the WWER AES-2006, using the thermophysical codes KORSAR/GP and TRAP-KS and the PANDA and SUSA programs, was demonstrated [ru

  19. Heterogeneous Calculation of ε

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Alf

    1961-02-15

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik-Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti Mercury computer.

  20. Heterogeneous Calculation of ε

    International Nuclear Information System (INIS)

    Jonsson, Alf

    1961-02-01

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik-Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti Mercury computer.

  1. Follow-up: Prospective compound design using the ‘SAR Matrix’ method and matrix-derived conditional probabilities of activity [v1; ref status: indexed, http://f1000r.es/56v]

    Directory of Open Access Journals (Sweden)

    Disha Gupta-Ostermann

    2015-03-01

    Full Text Available In a previous Method Article, we have presented the ‘Structure-Activity Relationship (SAR Matrix’ (SARM approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a “chemical space envelope” around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach.

  2. Follow-up: Prospective compound design using the ‘SAR Matrix’ method and matrix-derived conditional probabilities of activity [v2; ref status: indexed, http://f1000r.es/59v]

    Directory of Open Access Journals (Sweden)

    Disha Gupta-Ostermann

    2015-04-01

    Full Text Available In a previous Method Article, we have presented the ‘Structure-Activity Relationship (SAR Matrix’ (SARM approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a “chemical space envelope” around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach.

  3. Impact of proof test interval and coverage on probability of failure of safety instrumented function

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Hu, Bin; Wang, Xiaodong

    2016-01-01

    Highlights: • The introduction of proof test coverage makes the calculation of the probability of failure for a SIF more accurate. • The probability of failure undetected by the proof test is independently defined as P_TIF and calculated. • P_TIF is quantified using a reliability block diagram and the simple formula for PFD_avg. • Improving proof test coverage and adopting a reasonable test period can reduce the probability of failure for a SIF. - Abstract: Imperfection of the proof test can result in failure of the safety function of a safety instrumented system (SIS) at any time in its life period. IEC 61508 and other references ignore or only superficially analyze the imperfection of proof tests. In order to further study the impact of proof test imperfection on the probability of failure of a safety instrumented function (SIF), the necessity of the proof test and the influence of its imperfection on system performance were first analyzed theoretically. The probability of failure of the safety instrumented function resulting from the imperfection of the proof test was defined as the probability of test-independent failures (P_TIF), and P_TIF was calculated separately by introducing proof test coverage and adopting a reliability block diagram, with reference to the simplified calculation formula for the average probability of failure on demand (PFD_avg). Research results show that a shorter proof test period and higher proof test coverage give a smaller probability of failure for the safety instrumented function. The probability of failure for the safety instrumented function calculated by introducing proof test coverage will be more accurate.
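
    The role of proof test coverage can be sketched with a common first-order approximation for a single channel: failures the test detects accumulate only over the test interval TI, while the failures it misses (the P_TIF portion) accumulate over the whole lifetime LT. This is a generic IEC 61508-style approximation with assumed failure rates, not the paper's exact model.

        # First-order average PFD for a single channel with imperfect proof
        # testing: covered failures renew every test interval, uncovered
        # ones only at overhaul. A generic approximation; rates are assumed.
        def pfd_avg(lam_du: float, ti: float, lt: float, cov: float) -> float:
            covered = cov * lam_du * ti / 2.0            # caught by the test
            uncovered = (1.0 - cov) * lam_du * lt / 2.0  # P_TIF contribution
            return covered + uncovered

        lam = 2e-6       # dangerous undetected failure rate, 1/h (assumed)
        ti = 8760.0      # proof test interval: 1 year, in hours
        lt = 87600.0     # lifetime between overhauls: 10 years, in hours
        for cov in (1.0, 0.9, 0.6):
            print(f"coverage {cov:.0%}: PFD_avg = {pfd_avg(lam, ti, lt, cov):.2e}")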

  4. Calculation of individual and population doses on Danish territory resulting from hypothetical core-melt accidents at the Barsebaeck reactor

    International Nuclear Information System (INIS)

    1977-01-01

    Individual and population doses within Danish territory are calculated from hypothetical, severe core-melt accidents at the Swedish nuclear plant at Barsebaeck. The fission product inventory of the Barsebaeck reactor is calculated. The release fractions for the accidents are taken from WASH-1400. Based on parametric studies, doses are calculated for very unfavourable, but not incredible weather conditions. The probability of such conditions in combination with wind direction towards Danish territory is estimated. Doses to bone marrow, lungs, GI-tract and thyroid are calculated based on dose models developed at Risoe. These doses are found to be consistent with doses calculated with the models used in WASH-1400. (author)

  5. Probability concepts in quality risk management.

    Science.gov (United States)

    Claycamp, H Gregg

    2012-01-01

    Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although risk is generally a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management tools are relatively silent on the meaning and uses of "probability." The probability concept is typically applied by risk managers as a combination of frequency-based calculation and a "degree of belief" meaning of probability. Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle from drug development to manufacture, and from marketing to product discontinuation. Probability as a concept that is crucial for understanding and managing risk is discussed through examples, from the most general scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management.

  6. A note on the transition probability over C*-algebras

    International Nuclear Information System (INIS)

    Alberti, P.M.; Karl-Marx-Universitaet, Leipzig

    1983-01-01

    The algebraic structure of Uhlmann's transition probability between mixed states on unital C*-algebras is analyzed. Several improved methods to calculate the transition probability are established, examples are given (e.g., the case of quasi-local C*-algebras is dealt with) and two more functional characterizations are proved in general. (orig.)

  7. Men who work at age 70 or older.

    Science.gov (United States)

    Ozawa, Martha N; Lum, Terry Y

    2005-01-01

    The federal policy on older workers has shifted from encouraging early withdrawal from the labor force to encouraging continued participation in the labor force. In this light, it is instructive to investigate the backgrounds of elderly people who work at age 70 or older. This article presents the findings of a study, using data from the 1993 Asset and Health Dynamics of the Oldest Old Study, that investigated the effects of health, economic conditions (net worth, employer-provided pensions, and supplemental medical insurance coverage), education, and spouse's work status on the probability of working among men aged 70 or older. The study addressed the probability of working, the probability of working full-time and of working part-time, and the probability of being self-employed and of being employed by others. Implications for policy are discussed.

  8. Invariant probabilities of transition functions

    CERN Document Server

    Zaharopol, Radu

    2014-01-01

    The structure of the set of all the invariant probabilities and the structure of various types of individual invariant probabilities of a transition function are two topics of significant interest in the theory of transition functions, and are studied in this book. The results obtained are useful in ergodic theory and the theory of dynamical systems, which, in turn, can be applied in various other areas (like number theory). They are illustrated using transition functions defined by flows, semiflows, and one-parameter convolution semigroups of probability measures. In this book, all results on transition probabilities that have been published by the author between 2004 and 2008 are extended to transition functions. The proofs of the results obtained are new. For transition functions that satisfy very general conditions the book describes an ergodic decomposition that provides relevant information on the structure of the corresponding set of invariant probabilities. Ergodic decomposition means a splitting of t...

  9. Conditional net survival: Relevant prognostic information for colorectal cancer survivors. A French population-based study.

    Science.gov (United States)

    Drouillard, Antoine; Bouvier, Anne-Marie; Rollot, Fabien; Faivre, Jean; Jooste, Valérie; Lepage, Côme

    2015-07-01

    Traditionally, survival estimates have been reported as survival from the time of diagnosis. A patient's probability of survival changes according to time elapsed since the diagnosis and this is known as conditional survival. The aim was to estimate 5-year net conditional survival in patients with colorectal cancer in a well-defined French population at yearly intervals up to 5 years. Our study included 18,300 colorectal cancers diagnosed between 1976 and 2008 and registered in the population-based digestive cancer registry of Burgundy (France). We calculated conditional 5-year net survival, using the Pohar Perme estimator, for every additional year survived after diagnosis from 1 to 5 years. The initial 5-year net survival estimates varied between 89% for stage I and 9% for advanced stage cancer. The corresponding 5-year net survival for patients alive after 5 years was 95% and 75%. Stage II and III patients who survived 5 years had a similar probability of surviving 5 more years, respectively 87% and 84%. For survivors after the first year following diagnosis, five-year conditional net survival was similar regardless of age class and period of diagnosis. For colorectal cancer survivors, conditional net survival provides relevant and complementary prognostic information for patients and clinicians. Copyright © 2015 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
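
    The arithmetic behind conditional survival is a ratio of survival probabilities, CS(t | s) = S(s + t) / S(s); the sketch below applies it to an invented survival curve of the same flavour as the figures reported.

        # Conditional survival: probability of surviving t more years given
        # survival to year s. The survival-curve values are invented.
        S = {0: 1.00, 1: 0.80, 2: 0.65, 3: 0.55, 4: 0.48, 5: 0.43, 10: 0.32}

        def conditional_survival(s: int, t: int) -> float:
            return S[s + t] / S[s]

        print(f"S(5)      = {S[5]:.2f}")                        # from diagnosis
        print(f"CS(5 | 5) = {conditional_survival(5, 5):.2f}")  # much higher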

  10. Probability of Alzheimer's disease in breast cancer survivors based on gray-matter structural network efficiency.

    Science.gov (United States)

    Kesler, Shelli R; Rao, Vikram; Ray, William J; Rao, Arvind

    2017-01-01

    Breast cancer chemotherapy is associated with accelerated aging and potentially increased risk for Alzheimer's disease (AD). We calculated the probability of AD diagnosis from brain network and demographic and genetic data obtained from 47 female AD converters and 47 matched healthy controls. We then applied this algorithm to data from 78 breast cancer survivors. The classifier discriminated between AD and healthy controls with 86% accuracy (P < .0001). Chemotherapy-treated breast cancer survivors demonstrated significantly higher probability of AD compared to healthy controls (P < .0001) and chemotherapy-naïve survivors (P = .007), even after stratifying for apolipoprotein E4 genotype. Chemotherapy-naïve survivors also showed higher AD probability compared to healthy controls (P = .014). Chemotherapy-treated breast cancer survivors who have a particular profile of brain structure may have a higher risk for AD, especially those who are older and have lower cognitive reserve.

  11. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate radiation doses during the transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes consist of functions for estimating doses not only under normal conditions but also in the case of accidents, when radionuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation, with a view to preparing a method by which a person in charge of transportation can calculate doses under normal conditions, we describe how the main parameters on which the dose depends were extracted and how the dose for a unit of transportation was estimated. (J.P.N.)

  12. Earthquake Probability Assessment for the Active Faults in Central Taiwan: A Case Study

    Directory of Open Access Journals (Sweden)

    Yi-Rui Lee

    2016-06-01

    Full Text Available Frequent high seismic activity occurs in Taiwan due to fast plate motions. According to the historical records, the most destructive earthquakes in Taiwan were caused mainly by inland active faults. The Central Geological Survey (CGS) of Taiwan has published active fault maps of Taiwan since 1998. There are 33 active faults noted in the 2012 active fault map. After the Chi-Chi earthquake, CGS launched a series of projects to investigate each active fault in Taiwan in detail and to better understand it. This article collected these data to develop active fault parameters and drew on experience from Japan and the United States to establish a methodology for earthquake probability assessment via active faults. We consider the active faults in Central Taiwan a good example with which to present the earthquake probability assessment process and results. An appropriate probability model was used to estimate the conditional probability of M ≥ 6.5 and M ≥ 7.0 earthquakes. Our results show that, in Central Taiwan, the highest probability of an M ≥ 6.5 earthquake occurring in 30, 50, and 100 years belongs to the Tachia-Changhua fault system, while the lowest belongs to the Chelungpu fault. The goal of our research is to calculate the earthquake probability of the 33 active faults in Taiwan. The active fault parameters are important information that can be applied in subsequent seismic hazard analysis and seismic simulation.
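
    A common way to turn fault parameters into such conditional probabilities is a renewal model: with a recurrence-time distribution F and elapsed time t0 since the last rupture, P(event within the next Δt) = (F(t0 + Δt) - F(t0)) / (1 - F(t0)). The lognormal parameters and elapsed time below are placeholders, not values for any Taiwanese fault.

        # Conditional rupture probability from a renewal model with a
        # lognormal recurrence-time distribution. Parameters are placeholders.
        import math
        from scipy import stats

        mean_rec = 350.0   # mean recurrence interval, years (assumed)
        cv = 0.5           # aperiodicity / coefficient of variation (assumed)
        elapsed = 100.0    # years since last rupture (assumed)
        window = 30.0      # forecast window, years

        # Lognormal with the given mean and CV: sigma^2 = ln(1 + CV^2)
        sigma = math.sqrt(math.log(1.0 + cv**2))
        mu = math.log(mean_rec) - 0.5 * sigma**2
        T = stats.lognorm(s=sigma, scale=math.exp(mu))

        p = (T.cdf(elapsed + window) - T.cdf(elapsed)) / (1.0 - T.cdf(elapsed))
        print(f"P(rupture in next {window:.0f} yr | quiet {elapsed:.0f} yr) = {p:.3f}")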

  13. Human Error Probability Assessment During Maintenance Activities of Marine Systems

    Directory of Open Access Journals (Sweden)

    Rabiul Islam

    2018-03-01

    Full Text Available Background: Maintenance operations on-board ships are highly demanding. Maintenance operations are intensive activities requiring high man–machine interaction in challenging and evolving conditions. The evolving conditions include weather conditions, workplace temperature, ship motion, noise and vibration, and workload and stress. For example, extreme weather conditions affect seafarers' performance, increasing the chance of error and, consequently, can cause injuries or fatalities to personnel. An effective human error probability model is required to better manage maintenance on-board ships. The developed model would assist in developing and maintaining effective risk management protocols. Thus, the objective of this study is to develop a human error probability model considering various internal and external factors affecting seafarers' performance. Methods: The human error probability model is developed using probability theory applied to a Bayesian network. The model is tested using data received through a questionnaire survey of >200 experienced seafarers with >5 years of experience. The model developed in this study is used to find the reliability of human performance on particular maintenance activities. Results: The developed methodology is tested on the maintenance of a marine engine's cooling water pump for the engine department and the anchor windlass for the deck department. In the considered case studies, human error probabilities are estimated in various scenarios and the results are compared between the scenarios and the different seafarer categories. The results of the case studies for both departments are also compared. Conclusion: The developed model is effective in assessing human error probabilities. These probabilities would be dynamically updated as and when new information is available on changes in either internal (i.e., training, experience, and fatigue) or external (i.e., environmental and operational) conditions.

  14. Continuous-energy adjoint flux and perturbation calculation using the iterated fission probability method in Monte-Carlo code TRIPOLI-4 and underlying applications

    International Nuclear Information System (INIS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.

    2013-01-01

    The first goal of this paper is to present an exact method able to precisely evaluate very small reactivity effects (<10 pcm) with a Monte Carlo code. It has been decided to implement the exact perturbation theory in TRIPOLI-4 and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in some other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4 is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method, compared with the 'direct' estimation of the perturbation. Once again the method based on the IFP shows good agreement, for a calculation time far shorter than that of the 'direct' method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high precision. It offers the possibility to split reactivity contributions over both isotopes and reactions. Other applications of this perturbation method are presented and tested, such as the calculation of exact kinetic parameters (βeff, Λeff) or sensitivity parameters.

  15. Calculation of recovery plasticity in multistage hot forging under isothermal conditions.

    Science.gov (United States)

    Zhbankov, Iaroslav G; Perig, Alexander V; Aliieva, Leila I

    2016-01-01

    A widely used method for hot forming of steels and alloys, especially heavy forgings, is multistage forging with pauses between stages. A well-known effect accompanying multistage hot forging is the recovery of metal plasticity in comparison with monotonic deformation. A method is proposed which takes into consideration the recovery of plasticity in the pauses between hot deformations of a billet under isothermal conditions. The method allows the prediction of billet forming limits as a function of the deformation during the forging stage and the duration of the pause between stages, taking into account the duration of pauses between deformations and the magnitude of the subdivided deformations. A hot isothermal upsetting process with pauses was calculated by the proposed method. The results of the calculations have been confirmed with experimental data.

  16. To test, or not to test: time for a MODY calculator?

    Science.gov (United States)

    Njølstad, P R; Molven, A

    2012-05-01

    To test, or not to test, that is often the question in diabetes genetics. This is why the paper of Shields et al. in the current issue of Diabetologia is so warmly welcomed. MODY is the most common form of monogenic diabetes. Nevertheless, the optimal way of identifying MODY families still poses a challenge for both researchers and clinicians. Hattersley's group in Exeter, UK, have developed an easy-to-use MODY prediction model that can help to identify cases appropriate for genetic testing. By answering eight simple questions on the internet (www.diabetesgenes.org/content/mody-probability-calculator), the doctor receives a positive predictive value in return: the probability that the patient has MODY. Thus, the classical binary (yes/no) assessment provided by clinical diagnostic criteria has been substituted by a more rational, quantitative estimate. The model appears to discriminate well between MODY and type 1 and type 2 diabetes when diabetes is diagnosed before the age of 35 years. However, the performance of the MODY probability calculator should now be validated in settings other than the one in which it was developed, and, as always, there is room for some improvements and modifications.

  17. Collapse susceptibility mapping in karstified gypsum terrain (Sivas basin - Turkey) by conditional probability, logistic regression, artificial neural network models

    Science.gov (United States)

    Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin

    2010-05-01

    This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears more accurate than the others, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.

  18. Probability of Criticality for MOX SNF

    International Nuclear Information System (INIS)

    P. Gottlieb

    1999-01-01

    The purpose of this calculation is to provide a conservative (upper bound) estimate of the probability of criticality for mixed oxide (MOX) spent nuclear fuel (SNF) of the Westinghouse pressurized water reactor (PWR) design that has been proposed for use with the Plutonium Disposition Program (Ref. 1, p. 2). This calculation uses a Monte Carlo technique similar to that used for ordinary commercial SNF (Ref. 2, Sections 2 and 5.2). Several scenarios, covering a range of parameters, are evaluated for criticality. Parameters specifying the loss of fission products and iron oxide from the waste package are particularly important. This calculation is associated with the disposal of MOX SNF.

  19. Effects of aging on blood pressure variability in resting conditions

    NARCIS (Netherlands)

    Veerman, D. P.; Imholz, B. P.; Wieling, W.; Karemaker, J. M.; van Montfrans, G. A.

    1994-01-01

    The objective of this study was to determine the effect of aging on beat-to-beat blood pressure and pulse interval variability in resting conditions and to determine the effect of aging on the sympathetic and vagal influence on the cardiovascular system by power spectral analysis of blood pressure

  20. Probabilities on Streams and Reflexive Games

    Directory of Open Access Journals (Sweden)

    Andrew Schumann

    2014-01-01

    Full Text Available Probability measures on streams (e.g. on hypernumbers and p-adic numbers) have been defined. It was shown that these probabilities can be used for simulations of reflexive games. In particular, it can be proved that Aumann's agreement theorem does not hold for these probabilities. Instead of this theorem, there is a statement that is called the reflexion disagreement theorem. Based on this theorem, probabilistic and knowledge conditions can be defined for reflexive games at various reflexion levels, up to the infinite level. (original abstract)

  1. Strategy evolution driven by switching probabilities in structured multi-agent systems

    Science.gov (United States)

    Zhang, Jianlei; Chen, Zengqiang; Li, Zhiqi

    2017-10-01

    The evolutionary mechanism driving the commonly seen cooperation among unrelated individuals is puzzling. Related models for evolutionary games on graphs traditionally assume that players imitate their successful neighbours with higher benefits. Notably, an implicit assumption here is that players are always able to acquire the required pay-off information. To relax this restrictive assumption, a contact-based model has been proposed, where switching probabilities between strategies drive the strategy evolution. However, the explicit and quantified relation between a player's switching probability for her strategies and the number of her neighbours remains unknown. This is especially a key point in heterogeneously structured systems, where players may differ in the number of their neighbours. Focusing on this, here we present an augmented model by introducing an attenuation coefficient and evaluate its influence on the evolution dynamics. Results show that the individual influence on others is negatively correlated with the contact numbers specified by the network topologies. Results further provide the conditions under which the coexisting strategies can be calculated analytically.

  2. Computer code calculations of the TMI-2 accident: initial and boundary conditions

    International Nuclear Information System (INIS)

    Behling, S.R.

    1985-05-01

    Initial and boundary conditions during the Three Mile Island Unit 2 (TMI-2) accident are described and detailed. A brief description of the TMI-2 plant configuration is given. Important contributions to the progression of the accident in the reactor coolant system are discussed. Sufficient information is provided to allow calculation of the TMI-2 accident with computer codes

  3. Qualification of the calculational methods of the fluence in the pressurised water reactors. Improvement of the cross sections treatment by the probability table method

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, S H

    1994-01-01

    It is indispensable to know the fluence on the nuclear reactor pressure vessel. The cross sections and their treatment play an important role in this problem. In this study, two "benchmarks" have been interpreted with the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it has some problems, such as the difficulty of choosing the weighting function and the need for a great number of energy groups to represent the cross-section fluctuations well. In this thesis, we propose a new method called the "Probability Table Method" to treat the neutron cross sections. For the qualification, a one-dimensional Monte Carlo neutron transport simulation program has been written; the comparison of the multigroup results and the probability table results shows the advantages of this new method. The probability table has also been introduced into the TRIPOLI program; the calculational results for the iron deep penetration benchmark have been improved in comparison with the experimental results. It is therefore of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs.
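
    The idea of a probability table can be shown in miniature: instead of a single group-averaged cross section, the group carries a few cross-section bands with probabilities, and the code samples a band at each collision. The two-band table below is fabricated, not actual nuclear data; it nevertheless reproduces the qualitative self-shielding effect that the group-averaged value misses.

        # Sampling a cross section from a probability table instead of using
        # one multigroup average. The two-band table is fabricated: mostly a
        # low background, with a small slice sitting on a resonance.
        import math
        import random

        table = [(0.9, 2.0), (0.1, 500.0)]         # (probability, sigma in barns)
        sigma_avg = sum(p * s for p, s in table)   # the multigroup value

        def sample_sigma() -> float:
            u = random.random()
            for p, sigma in table:
                if u < p:
                    return sigma
                u -= p
            return table[-1][1]

        # Transmission through a slab: the averaged sigma underestimates it,
        # because it ignores that most neutrons see the low-sigma band.
        x, n = 1.0, 0.05   # path length (cm) and atom density (fabricated)
        t_avg = math.exp(-n * sigma_avg * x)
        t_pt = sum(math.exp(-n * sample_sigma() * x) for _ in range(100000)) / 100000
        print(f"averaged sigma: {t_avg:.3f}   probability table: {t_pt:.3f}")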

  4. Recent trends in the probability of high out-of-pocket medical expenses in the United States

    Directory of Open Access Journals (Sweden)

    Katherine E Baird

    2016-09-01

    Full Text Available Objective: This article measures the probability that out-of-pocket expenses in the United States exceed a threshold share of income. It calculates this probability separately by individuals' health condition, income, and elderly status, and estimates changes occurring in these probabilities between 2010 and 2013. Data and Method: This article uses nationally representative household survey data on 344,000 individuals. Logistic regressions estimate the probabilities that out-of-pocket expenses exceed 5% and, alternatively, 10% of income in the two study years. These probabilities are calculated for individuals based on their income, health status, and elderly status. Results: Despite favorable changes in both health policy and the economy, large numbers of Americans continue to be exposed to high out-of-pocket expenditures. For instance, the results indicate that in 2013 over a quarter of nonelderly low-income citizens in poor health spent 10% or more of their income on out-of-pocket expenses, and over 40% of this group spent more than 5%. Moreover, for Americans as a whole, the probability of spending in excess of 5% of income on out-of-pocket costs increased by 1.4 percentage points between 2010 and 2013, with the largest increases occurring among low-income Americans; the probability of Americans spending more than 10% of income grew from 9.3% to 9.6%, with the largest increases also occurring among the poor. Conclusion: The magnitude of the out-of-pocket financial burden and the most recent upward trends in it underscore a need to develop good measures of the degree to which health care policy exposes individuals to financial risk, and to closely monitor the Affordable Care Act's success in reducing Americans' exposure to large medical bills.

  5. Performance of asphaltic concrete incorporating styrene butadiene rubber subjected to varying aging condition

    Science.gov (United States)

    Salah, Faisal Mohammed; Jaya, Ramadhansyah Putra; Mohamed, Azman; Hassan, Norhidayah Abdul; Rosni, Nurul Najihah Mad; Mohamed, Abdullahi Ali; Agussabti

    2017-12-01

    The influence of styrene butadiene rubber (SBR) on asphaltic concrete properties under different aging conditions is presented in this study. These aging conditions were designated un-aged, short-term aged, and long-term aged. A conventional asphalt binder of penetration grade 60/70 was used in this work. Four different levels of SBR addition were employed (i.e., 0%, 1%, 3%, and 5% by binder weight). Asphalt concrete mixes were prepared at the selected optimum asphalt content (5%). The performance was evaluated based on Marshall stability, resilient modulus, and dynamic creep tests. Results indicated improved stability and permanent deformation characteristics for the mixes modified with SBR polymer under aging conditions. The results also showed that the long-term aged mixes gave the highest stability, resilient modulus, and dynamic creep values compared to the short-term aged and un-aged samples. Thus, the use of 5% SBR can produce more durable asphalt concrete mixtures with better serviceability.

  6. Approximate method for calculating heat conditions in the magnetic circuits of transformers and betatrons

    International Nuclear Information System (INIS)

    Loginov, V.S.

    1986-01-01

    A technique for the engineering calculation of the two-dimensional stationary temperature field of a rectangular cross-section blending pile with internal heat release under nonsymmetrical cooling conditions is suggested. The area of its practical application is determined on the basis of experimental data known in the literature. Different methods for calculating the temperature distribution in a betatron magnetic circuit are compared. A graph of the maximum error of the temperature calculated from the approximate expressions with respect to the exact solution is given.

  7. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge, and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting...... are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented
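
    The contrast between the two rectangular representations, the interval and the uniform probability distribution, appears as soon as a non-monotonic function is propagated; the sketch below compares naive endpoint evaluation with interval scanning and uniform Monte Carlo sampling for f(x) = x(1 - x). The function and bounds are illustrative choices, not taken from the paper.

        # Propagating a rectangular uncertainty x in [0, 1] through the
        # non-monotonic f(x) = x * (1 - x). Endpoint evaluation misses the
        # interior maximum; scanning the interval or sampling the uniform
        # distribution recovers it. Function and bounds are illustrative.
        import numpy as np

        f = lambda x: x * (1.0 - x)
        lo, hi = 0.0, 1.0

        endpoints = [f(lo), f(hi)]                    # naive: both 0.0 -- wrong range
        xs = np.linspace(lo, hi, 10001)
        interval_range = (f(xs).min(), f(xs).max())   # correct: (0.0, 0.25)

        rng = np.random.default_rng(0)
        samples = f(rng.uniform(lo, hi, 100000))      # probabilistic counterpart
        print("endpoint range:", min(endpoints), max(endpoints))
        print("interval range:", interval_range)
        print(f"uniform-input mean {samples.mean():.3f}, std {samples.std():.3f}")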

  8. Development of energy-saving technologies providing comfortable microclimate conditions for mining

    OpenAIRE

    Б. П. Казаков; Л. Ю. Левин; А. В. Шалимов; А. В. Зайцев

    2017-01-01

    The paper contains an analysis of the natural and technogenic factors influencing the properties of the mine atmosphere, which define the level of mining safety and the probability of emergencies. Main trends in the development of energy-saving technologies providing comfortable microclimate conditions are highlighted. A complex of methods and mathematical models has been developed to carry out aerological and thermophysical calculations. Main ways of improving existing calculation methods for stationary and non-station...

  9. The Influence of atmospheric conditions to probabilistic calculation of impact of radiology accident on PWR 1000 MWe

    International Nuclear Information System (INIS)

    Pande Made Udiyani; Sri Kuntjoro

    2015-01-01

    The calculation of the radiological impact of fission product releases due to potential accidents that may occur in a PWR (Pressurized Water Reactor) is required in probabilistic form. Atmospheric conditions contribute greatly to the dispersion of radionuclides in the environment, so this study analyzes the influence of atmospheric conditions on the probabilistic calculation of reactor accident consequences. The objective of this study is to analyze the influence of atmospheric conditions, based on meteorological input data models, on the radiological consequences of PWR 1000 MWe accidents. Simulations used the PC-Cosyma code in probabilistic calculation mode, with the meteorological input data executed cyclically and stratified, for the Muria Peninsula and Serang coastal sites. Meteorological data were taken every hour for the duration of the year. The results showed that, for the same input model, the cumulative frequency for the Serang coast is higher than for the Muria Peninsula. For the same site, the cumulative frequency with cyclic input models is higher than with stratified models. The cyclic models provide flexibility in determining the level of accuracy of the calculations and do not require reference data, in contrast to stratified models. The use of cyclic and stratified models involving large amounts of data and calculation repetition will improve the accuracy of the statistical calculation values. (author)

  10. Estimation of reactor core calculation by HELIOS/MASTER at power generating condition through DeCART, whole-core transport code

    International Nuclear Information System (INIS)

    Kim, H. Y.; Joo, H. G.; Kim, K. S.; Kim, G. Y.; Jang, M. H.

    2003-01-01

    The reactivity and power distribution errors of the HELIOS/MASTER core calculation under power generating conditions are assessed using the whole-core transport code DeCART. For this work, cross section tablesets were generated for a medium-sized PWR following the standard procedure, and two-group nodal core calculations were performed. The test cases include HELIOS calculations for 2-D assemblies at constant thermal conditions, MASTER 3-D assembly calculations at power generating conditions, and core calculations at HZP, HFP, and an abnormal power condition. In all these cases, the results of the DeCART code, in which pinwise thermal feedback effects are incorporated, are used as the reference. The core reactivity, assemblywise power distribution, axial power distribution, peaking factor, and thermal feedback effects are then compared. The comparison shows that the errors of the HELIOS/MASTER system in the core reactivity, assemblywise power distribution, and pin peaking factor are only 100-300 pcm, 3%, and 2%, respectively. As far as the detailed pinwise power distribution is concerned, however, errors greater than 15% are observed.

  11. The Impact of Advanced Age on Driving Safety in Adults with Medical Conditions.

    Science.gov (United States)

    Moon, Sanghee; Ranchet, Maud; Akinwuntan, Abiodun Emmanuel; Tant, Mark; Carr, David Brian; Raji, Mukaila Ajiboye; Devos, Hannes

    2018-01-01

    Adults aged 85 and older, often referred to as the oldest-old, are the fastest-growing segment of the population. The rapidly increasing number of older adults with chronic and multiple medical conditions poses challenges regarding their driving safety. To investigate the effect of advanced age on driving safety in drivers with medical conditions. We categorized 3,425 drivers with preexisting medical conditions into four age groups: middle-aged (55-64 years, n = 1,386), young-old (65-74 years, n = 1,013), old-old (75-84 years, n = 803), or oldest-old (85 years and older, n = 223). All underwent a formal driving evaluation. The outcome measures included fitness to drive recommendation by the referring physician, comprehensive fitness to drive decision from an official driving evaluation center, history of motor vehicle crashes (MVCs), and history of traffic violations. The oldest-old reported more cardiopulmonary and visual conditions, but less neurological conditions than the old-old. Compared to the middle-aged, the oldest-old were more likely to be considered unfit to drive by the referring physicians (odds ratio [OR] = 4.47, 95% confidence interval [CI] 2.20-9.10) and by the official driving evaluation center (OR = 2.74, 95% CI 1.87-4.03). The oldest-old reported more MVCs (OR = 2.79, 95% CI 1.88-4.12) compared to the middle-aged. Advanced age adversely affected driving safety outcomes. The oldest-old are a unique age group with medical conditions known to interfere with safe driving. Driving safety strategies should particularly target the oldest-old since they are the fastest-growing group and their increased frailty is associated with severe or fatal injuries due to MVCs. © 2018 S. Karger AG, Basel.

  12. An Improvement to DCPT: The Particle Transfer Probability as a Function of Particle's Age

    International Nuclear Information System (INIS)

    L. Pan; G. S. Bodvarsson

    2001-01-01

    Multi-scale features of transport processes in fractured porous media make numerical modeling a difficult task, both in conceptualization and computation. The dual-continuum particle tracker (DCPT) is an attractive method for modeling large-scale problems typically encountered in the field, such as those in the unsaturated zone (UZ) of Yucca Mountain, Nevada. Its major advantage is its capability to capture the major features of flow and transport in fractured porous rock (i.e., a fast fracture sub-system combined with a slow matrix sub-system) with reasonable computational resources. However, like other conventional dual-continuum approach-based numerical methods, DCPT (v1.0) is often criticized for failing to capture the transient features of the diffusion depth into the matrix. It may overestimate the transport of tracers through the fractures, especially for cases with large fracture spacing, and predict artificial early breakthroughs. The objective of this study is to develop a new theory for calculating the particle transfer probability that captures the transient features of the diffusion depth into the matrix within the framework of the dual-continuum random walk particle method (RWPM).

  13. Using Thermal Inactivation Kinetics to Calculate the Probability of Extreme Spore Longevity: Implications for Paleomicrobiology and Lithopanspermia

    Science.gov (United States)

    Nicholson, Wayne L.

    2003-12-01

    Thermal inactivation kinetics with extrapolation were used to model the survival probabilities of spores of various Bacillus species over time periods of millions of years at the historical ambient temperatures (25-40 °C) encountered within the 250 million-year-old Salado formation, from which the putative ancient spore-forming bacterium Salibacillus marismortui strain 2-9-3 was recovered. The model indicated extremely low-to-moderate survival probabilities for spores of mesophiles, but surprisingly high survival probabilities for thermophilic spores. The significance of the results is discussed in terms of the survival probabilities of (i) terrestrial spores in ancient geologic samples and (ii) spores transported between planets within impact ejecta.
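
    First-order inactivation kinetics make the extrapolation explicit: with a decimal reduction time D at the ambient temperature, the expected surviving fraction after time t is 10^(-t/D), and the probability that at least one spore of an initial population N0 survives follows from the Poisson expectation. The D value and population below are invented for illustration.

        # Extrapolating first-order thermal inactivation: surviving fraction
        # 10**(-t/D), and P(at least one survivor) from a Poisson expectation.
        # D value and initial population are invented.
        import math

        D = 1.0e7        # hypothetical decimal reduction time, years
        N0 = 1.0e12      # hypothetical initial spore population
        t = 250.0e6      # age of the Salado formation, years

        decades = t / D                        # log10 reductions: 25 here
        expected = N0 * 10.0**(-decades)       # Poisson mean of survivors
        p_any = 1.0 - math.exp(-expected)      # P(>= 1 survivor)
        print(f"expected survivors: {expected:.2e}")
        print(f"P(>=1 survivor): {p_any:.2e}")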

  14. Future probabilities of coastal floods in Finland

    Science.gov (United States)

    Pellikka, Havu; Leijala, Ulpu; Johansson, Milla M.; Leinonen, Katri; Kahma, Kimmo K.

    2018-04-01

    Coastal planning requires detailed knowledge of future flooding risks, and effective planning must consider both short-term sea level variations and the long-term trend. We calculate distributions that combine short- and long-term effects to provide estimates of flood probabilities in 2050 and 2100 on the Finnish coast in the Baltic Sea. Our distributions of short-term sea level variations are based on 46 years (1971-2016) of observations from the 13 Finnish tide gauges. The long-term scenarios of mean sea level combine postglacial land uplift, regionally adjusted scenarios of global sea level rise, and the effect of changes in the wind climate. The results predict that flooding risks will clearly increase by 2100 in the Gulf of Finland and the Bothnian Sea, while only a small increase or no change compared to present-day conditions is expected in the Bothnian Bay, where the land uplift is stronger.
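
    The combination of the two components can be sketched directly: if the short-term variations have survival function P(X > x) and the mean sea level shifts by Δ, the flood probability for a fixed level h becomes P(X > h - Δ). The Gumbel parameters below are fabricated, not fitted to the Finnish tide gauge records.

        # Combining a short-term sea level distribution with a mean sea
        # level shift: P(flood) = P(X > h - delta). Gumbel parameters are
        # fabricated, not fitted to the tide gauge data.
        from scipy import stats

        annual_max = stats.gumbel_r(loc=80.0, scale=25.0)  # cm above present MSL
        h = 200.0                                          # flood level, cm

        for delta in (0.0, 30.0, 60.0):   # assumed mean sea level rise, cm
            p_flood = annual_max.sf(h - delta)             # sf = 1 - cdf
            print(f"MSL rise {delta:4.0f} cm: annual flood probability {p_flood:.4f}")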

  15. Contributions to quantum probability

    International Nuclear Information System (INIS)

    Fritz, Tobias

    2010-01-01

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, in general, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels 'possible to occur' or 'impossible to occur' to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome

  16. ASSESSMENT OF CABLE AGING USING CONDITION MONITORING TECHNIQUES

    International Nuclear Information System (INIS)

    GROVE, E.; LOFARO, R.; SOO, P.; VILLARAN, M.; HSU, F.

    2000-01-01

    Electric cables in nuclear power plants suffer degradation during service as a result of the thermal and radiation environments in which they are installed. Instrumentation and control cables are one type of cable that plays an important role in reactor safety. Should the polymeric cable insulation material become embrittled and cracked during service, or during a loss-of-coolant accident (LOCA), when steam and high radiation conditions are anticipated, failure could occur and prevent the cables from fulfilling their intended safety function(s). A research program is being conducted at Brookhaven National Laboratory to evaluate condition monitoring (CM) techniques for estimating the amount of cable degradation experienced during in-plant service. The objectives of this program are to assess the ability of the cables to perform under a simulated LOCA without losing their ability to function effectively, and to identify CM techniques which may be used to determine the effective lifetime of cables. The cable insulation materials tested include ethylene propylene rubber (EPR) and cross-linked polyethylene (XLPE). Accelerated aging (thermal and radiation) to the equivalent of 40 years of service was performed, followed by exposure to simulated LOCA conditions. The effectiveness of chemical, electrical, and mechanical condition monitoring techniques is being evaluated. Results indicate that several of these methods can detect changes in material parameters with increasing age. However, each has its limitations, and a combination of methods may provide an effective means for trending cable degradation in order to assess the remaining life of cables.

  17. Exploring non-signalling polytopes with negative probability

    International Nuclear Information System (INIS)

    Oas, G; Barros, J Acacio de; Carvalhaes, C

    2014-01-01

    Bipartite and tripartite EPR–Bell type systems are examined via joint quasi-probability distributions where probabilities are permitted to be negative. It is shown that such distributions exist only when the no-signalling condition is satisfied. A characteristic measure, the probability mass, is introduced and, via its minimization, limits the number of quasi-distributions describing a given marginal probability distribution. The minimized probability mass is shown to be an alternative way to characterize non-local systems. Non-signalling polytopes for two to eight settings in the bipartite scenario are examined and compared to prior work. Examining perfect cloning of non-local systems within the tripartite scenario suggests defining two categories of signalling. It is seen that many properties of non-local systems can be efficiently described by quasi-probability theory. (paper)

  18. Assigning probability gain for precursors of four large Chinese earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Cao, T.; Aki, K.

    1983-03-10

    We extend the concept of probability gain associated with a precursor (Aki, 1981) to a set of precursors which may be mutually dependent. Making use of a new formula, we derive a criterion for selecting precursors from a given data set in order to calculate the probability gain. The probabilities per unit time immediately before four large Chinese earthquakes are calculated. They are approximately 0.09, 0.09, 0.07 and 0.08 per day for the 1975 Haicheng (M = 7.3), 1976 Tangshan (M = 7.8), 1976 Longling (M = 7.6), and 1976 Songpan (M = 7.2) earthquakes, respectively. These results are encouraging because they suggest that the investigated precursory phenomena may have included the complete information for earthquake prediction, at least for the above earthquakes. With this method, the step-by-step approach to prediction used in China may be quantified in terms of the probability of earthquake occurrence. The ln P versus t curve (where P is the probability of earthquake occurrence at time t) shows that ln P does not increase linearly with t, but more rapidly as the time of the earthquake approaches.
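
    The order of magnitude of such results can be reproduced with the single-precursor form of the probability gain, p = p0 · g1 · g2 · ..., which assumes mutually independent precursors; the paper's new formula relaxes exactly this assumption, which the toy calculation below does not capture. All numbers are illustrative.

      p0 = 1e-4            # illustrative background probability per day
      gains = [30, 10, 3]  # illustrative gains for three observed precursors
      p = p0
      for g in gains:
          p *= g
      print(f"conditional probability per day: {p:.2f}")  # 0.09, same order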

  19. The Misapplication of Probability Theory in Quantum Mechanics

    Science.gov (United States)

    Racicot, Ronald

    2014-03-01

    This article is a revision of two papers submitted to the APS in the past two and a half years. In these papers, arguments and proofs are summarized for the following: (1) The wrong conclusion by EPR that Quantum Mechanics is incomplete, perhaps requiring the addition of "hidden variables" for completion. Theorems that assume such "hidden variables," such as Bell's theorem, are also wrong. (2) Quantum entanglement is not a realizable physical phenomenon and is based entirely on assuming a probability superposition model for quantum spin. Such a model directly violates conservation of angular momentum. (3) Simultaneous multiple paths followed by a quantum particle traveling through space also cannot possibly exist. Besides violating Noether's theorem, the multiple-paths theory is based solely on probability calculations. Probability calculations by themselves cannot possibly represent simultaneous physically real events. None of the reviews of the submitted papers actually refuted the arguments and evidence that were presented. These analyses should therefore be carefully evaluated, since the conclusions reached have such important impact in quantum mechanics and quantum information theory.

  20. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Full Text Available Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the over-all distribution. This is followed by a development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints; these fall into two groups: those relating the parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.

  1. On the Hitting Probability of Max-Stable Processes

    OpenAIRE

    Hofmann, Martin

    2012-01-01

    The probability that a max-stable process η in C[0, 1] with identical marginal distribution function F hits x ∈ ℝ with 0 < F(x) < 1 is the hitting probability of x. We show that the hitting probability is always positive, unless the components of η are completely dependent. Moreover, we consider the event that the paths of a standard MSP hit some x ∈ ℝ twice, and we give a sufficient condition for a positive probability of this event.

  2. Age and growth of the longfin eel, Anguilla mossambica Peters ...

    African Journals Online (AJOL)

    Otoliths were successfully used for age determination and growth-rate calculation of the longfin eel, Anguilla mossambica Peters, 1852. The large opaque nucleus of the otoliths represents the leptocephalid stage and probably lasts from one and a half to two years. Thereafter, one opaque and one hyaline zone is deposited ...

  3. CONDOR: neutronic code for fuel elements calculation with rods

    International Nuclear Information System (INIS)

    Villarino, E.A.

    1990-01-01

    The CONDOR neutronic code is used for the calculation of fuel elements formed by fuel rods. The method employed to obtain the neutron flux is that of collision probabilities in a multigroup scheme on two-dimensional geometry. This code utilizes new calculation algorithms and a new normalization of the collision probabilities. Burn-up calculations can be made, with the alternative of applying variational methods for response flux calculations or those corresponding to the collision normalization. (Author) [es

  4. Calculation of the pipes failure probability of the Rcic system of a nuclear power station by means of software WinPRAISE 07

    International Nuclear Information System (INIS)

    Jasso G, J.; Diaz S, A.; Mendoza G, G.; Sainz M, E.; Garcia de la C, F. M.

    2014-10-01

    The growth and propagation of cracks by fatigue is a typical degradation mechanism present in the nuclear industry as well as in conventional industry; the unstable propagation of a crack can cause the catastrophic failure of a metallic component, even one of high ductility. For this reason, programmed maintenance activities have been established in industry, using visual inspection and/or ultrasonic techniques with an established periodicity, allowing these growths to be followed up and their undesirable effects controlled; however, these activities increase operating costs, and in the particular case of the nuclear industry, they increase the radiation exposure of the participating personnel. The use of mathematical processes that integrate concepts of uncertainty, material properties and the probability associated with inspection results has become a powerful tool for evaluating component reliability, reducing costs and exposure levels. In this work, the evaluation of the failure probability due to the growth of preexisting fatigue cracks in the pipes of the Reactor Core Isolation Cooling (Rcic) system of a nuclear power station is presented. The software WinPRAISE 07 (Piping Reliability Analysis Including Seismic Events) was used, supported by the principles of probabilistic fracture mechanics. The obtained values of failure probability evidenced good behavior of the analyzed pipes, with a maximum on the order of 1.0E-6; it is therefore concluded that the performance of these pipe lines is reliable, even when extrapolating the calculations to 10, 20, 30 and 40 years of service. (Author)

  5. Psychiatric and Medical Conditions in Transition-Aged Individuals With ASD.

    Science.gov (United States)

    Davignon, Meghan N; Qian, Yinge; Massolo, Maria; Croen, Lisa A

    2018-04-01

    Children with autism spectrum disorder (ASD) have a variety of medical and psychiatric conditions and an increased use of health care services. There is limited information about the prevalence of psychiatric and medical conditions in adolescents and young adults with ASD. Our objective was to describe the frequency of medical and psychiatric conditions in a large population of diverse, insured transition-aged individuals with ASD. Participants included Kaiser Permanente Northern California members who were enrolled from 2013 to 2015 and who were 14 to 25 years old. Individuals with ASD (n = 4123) were compared with peers with attention-deficit/hyperactivity disorder (n = 20 615), diabetes mellitus (n = 2156), and typical controls with neither condition (n = 20 615). Over one-third (34%) of individuals with ASD had a co-occurring psychiatric condition; the most commonly reported medical conditions included infections (42%), obesity (25%), neurologic conditions (18%), allergy and/or immunologic conditions (16%), musculoskeletal conditions (15%), and gastrointestinal (11%) conditions. After controlling for sex, age, race, and duration of Kaiser Permanente Northern California membership, most psychiatric conditions were significantly more common in the ASD group than in each comparison group, and most medical conditions were significantly more common in the ASD group than in the attention-deficit/hyperactivity disorder and typical control groups but were similar to or significantly less common than in the diabetes mellitus group. Although more research is needed to identify factors contributing to this excess burden of disease, there is a pressing need for all clinicians to approach ASD as a chronic health condition requiring regular follow-up and routine screening and treatment of medical and psychiatric issues. Copyright © 2018 by the American Academy of Pediatrics.

  6. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices

    International Nuclear Information System (INIS)

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-01-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it not practical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
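
    The sampling idea can be sketched as a Russian-roulette acceptance scheme (the linear-in-dose acceptance probability below is only one possible choice; the paper compares three distributions): low-dose entries are kept with a probability proportional to their dose, and each survivor is reweighted so that the matrix, and hence the total energy, is preserved in expectation.

      import numpy as np

      def sample_dose_matrix(D, threshold, rng=None):
          rng = rng or np.random.default_rng(42)
          D = np.asarray(D, dtype=float)
          keep = D >= threshold                    # high-dose core kept exactly
          p = np.clip(D / threshold, 1e-12, 1.0)   # acceptance probability
          accepted = (~keep) & (rng.random(D.shape) < p)
          S = np.where(keep, D, 0.0)
          S[accepted] = D[accepted] / p[accepted]  # unbiased reweighting: E[S] = D
          return S

      D = np.random.default_rng(0).exponential(0.1, size=(1000, 1000))
      S = sample_dose_matrix(D, threshold=0.5)
      print((S != 0).mean(), D.sum(), S.sum())     # sparsity and energy check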

  7. The Use of Conditional Probability Integral Transformation Method for Testing Accelerated Failure Time Models

    Directory of Open Access Journals (Sweden)

    Abdalla Ahmed Abdel-Ghaly

    2016-06-01

    Full Text Available This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness-of-fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that the method performs well in the case of the exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
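
    The transform-then-test pattern can be sketched as follows. For simplicity this uses the plain probability integral transform with fitted parameters and a Kolmogorov-Smirnov uniformity test as a stand-in for the paper's conditional transformation and modified Watson statistic; with estimated parameters the critical values would strictly need adjustment.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      failure_times = rng.weibull(1.5, size=50) * 100.0  # synthetic ALT data

      # Fit the candidate model (Weibull), then transform: under the null,
      # u = F(t; theta_hat) should look like a Uniform(0, 1) sample.
      c, loc, scale = stats.weibull_min.fit(failure_times, floc=0)
      u = stats.weibull_min.cdf(failure_times, c, loc=loc, scale=scale)
      print(stats.kstest(u, "uniform"))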

  8. Non-equilibrium random matrix theory. Transition probabilities

    International Nuclear Information System (INIS)

    Pedro, Francisco Gil; Westphal, Alexander

    2016-06-01

    In this letter we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that in the Coulomb gas language, in the large-N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.

  9. Non-equilibrium random matrix theory. Transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Pedro, Francisco Gil [Univ. Autonoma de Madrid (Spain). Dept. de Fisica Teorica; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie

    2016-06-15

    In this letter we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that in the Coulomb gas language, in the large-N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.

  10. Mode of delivery and the probability of subsequent childbearing: a population-based register study.

    Science.gov (United States)

    Elvander, C; Dahlberg, J; Andersson, G; Cnattingius, S

    2015-11-01

    To investigate the relationship between mode of first delivery and probability of subsequent childbearing. Population-based study. Nationwide study in Sweden. A cohort of 771 690 women who delivered their first singleton infant in Sweden between 1992 and 2010. Using Cox's proportional-hazards regression models, risks of subsequent childbearing were compared across four modes of delivery. Hazard ratios (HRs) with 95% confidence intervals (95% CIs) were calculated. Probability of having a second and third child; interpregnancy interval. Compared with women who had a spontaneous vaginal first delivery, women who delivered by vacuum extraction were less likely to have a second pregnancy (HR 0.96, 95% CI 0.95-0.97), and the probabilities of a second childbirth were substantially lower among women with a previous emergency caesarean section (HR 0.85, 95% CI 0.84-0.86) or an elective caesarean section (HR 0.82, 95% CI 0.80-0.83). There were no clinically important differences in the median time between first and second pregnancy by mode of first delivery. Compared with women younger than 30 years of age, older women were more negatively affected by a vacuum extraction with respect to the probability of having a second child. A primary vacuum extraction decreased the probability of having a third child by 4%, but having two consecutive vacuum extraction deliveries did not further alter the probability. A first delivery by vacuum extraction does not reduce the probability of subsequent childbearing to the same extent as a first delivery by emergency or elective caesarean section. © 2014 Royal College of Obstetricians and Gynaecologists.
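
    The analysis pattern can be reconstructed on synthetic data (column names and all numbers below are hypothetical; the hazard ratios used to simulate are those reported above): simulate waiting times to a second pregnancy whose hazards are scaled by mode of first delivery, then recover the HRs with a Cox proportional-hazards fit using the lifelines package.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      n = 2000
      mode = rng.integers(0, 4, n)  # 0 = spontaneous vaginal (reference),
                                    # 1 = vacuum, 2 = emergency CS, 3 = elective CS
      hr = np.array([1.00, 0.96, 0.85, 0.82])[mode]
      t = rng.exponential(5.0 / hr)            # higher hazard -> shorter wait
      df = pd.DataFrame({
          "time":   np.minimum(t, 10.0),       # administrative censoring at 10 y
          "event":  (t < 10.0).astype(int),
          "vacuum": (mode == 1).astype(int),
          "em_cs":  (mode == 2).astype(int),
          "el_cs":  (mode == 3).astype(int),
      })
      cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      cph.print_summary()  # exp(coef) roughly recovers 0.96, 0.85, 0.82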

  11. ICPP - a collision probability module for the AUS neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1985-10-01

    The isotropic collision probability program (ICPP) is a module of the AUS neutronics code system which calculates first flight collision probabilities for neutrons in one-dimensional geometries and in clusters of rods. Neutron sources, including scattering, are assumed to be isotropic and to be spatially flat within each mesh interval. The module solves the multigroup collision probability equations for eigenvalue or fixed source problems
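
    A one-group, fixed-source caricature of the collision-probability equations such a module solves: the collision rate in each region balances first-flight arrivals from flat, isotropic sources in all regions. The matrix P below is an arbitrary row-stochastic stand-in; a real code computes it from the geometry.

      import numpy as np

      V     = np.array([1.0, 2.0, 1.5])   # region volumes
      Sig_t = np.array([0.8, 1.2, 0.5])   # total cross sections
      Sig_s = np.array([0.6, 0.9, 0.3])   # within-group scattering
      S     = np.array([1.0, 0.0, 0.0])   # fixed source densities
      P     = np.array([[0.5, 0.3, 0.2],  # P[j, i]: probability that a neutron
                        [0.2, 0.6, 0.2],  # born in j suffers its first
                        [0.1, 0.4, 0.5]]) # collision in i

      # Balance: Sig_t[i] phi[i] V[i] = sum_j P[j,i] (S[j] + Sig_s[j] phi[j]) V[j]
      A = np.diag(Sig_t * V) - P.T @ np.diag(Sig_s * V)
      b = P.T @ (S * V)
      print(np.linalg.solve(A, b))        # region-averaged fluxes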

  12. Survival and compound nucleus probability of super heavy element Z = 117

    Energy Technology Data Exchange (ETDEWEB)

    Manjunatha, H.C. [Government College for Women, Department of Physics, Kolar, Karnataka (India); Sridhar, K.N. [Government First grade College, Department of Physics, Kolar, Karnataka (India)

    2017-05-15

    As a part of a systematic study for predicting the most suitable projectile-target combinations for heavy-ion fusion experiments in the synthesis of {sup 289-297}Ts, we have calculated the transmission probability (T{sub l}), compound nucleus formation probabilities (P{sub CN}) and survival probability (P{sub sur}) of possible projectile-target combinations. We have also studied the fusion cross section, survival cross section and fission cross sections for different projectile-target combinations of {sup 289-297}Ts. These theoretical parameters are required before the synthesis of the super heavy element. The calculated probabilities and cross sections show that the production of isotopes of the super heavy element with Z = 117 is strongly dependent on the reaction systems. The most probable reactions to synthesize the super heavy nuclei {sup 289-297}Ts are worked out and listed explicitly. We have also studied the variation of P{sub CN} and P{sub sur} with the mass number of projectile and target nuclei. This work is useful in the synthesis of the super heavy element Z = 117. (orig.)

  13. Decreased Serum Lipids in Patients with Probable Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Orhan Lepara

    2009-08-01

    Full Text Available Alzheimer's disease (AD) is a multifactorial disease, but its aetiology and pathophysiology are still not fully understood. Epidemiologic studies examining the association between lipids and dementia have reported conflicting results. High total cholesterol has been associated with both an increased and a decreased risk of AD and/or vascular dementia (VAD), whereas other studies found no association. The aim of this study was to investigate serum lipid concentrations in patients with probable AD, as well as a possible correlation between serum lipid concentrations and cognitive impairment. Our cross-sectional study included 30 patients with probable AD and 30 age- and sex-matched control subjects. Probable AD was clinically diagnosed by NINCDS-ADRDA criteria. Serum total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and triglyceride (TG) levels were determined at the initial assessment using standard enzymatic colorimetric techniques. Low-density lipoprotein cholesterol (LDL-C) and very low density lipoprotein cholesterol (VLDL-C) levels were calculated. Subjects with probable AD had significantly lower serum TG (p < 0.01), TC (p < 0.05), LDL-C (p < 0.05) and VLDL-C (p < 0.01) compared to the control group. We did not observe a significant difference in HDL-C level between patients with probable AD and control subjects. A negative, although not significant, correlation between TG, TC and VLDL-C and MMSE in patients with AD was observed. In the control group there was a negative correlation between TC and MMSE, but it was not statistically significant (r = -0.28). Further studies are required to explore the possibility for serum lipids to serve as diagnostic and therapeutic markers of AD.

  14. Study on risk indicator for appropriate plant maintenance considering aging effect

    International Nuclear Information System (INIS)

    Mano, Akihiro; Yamaguchi, Akira; Takata, Takashi

    2014-01-01

    Since nuclear power plants run for a long time and plant components may deteriorate due to aging, plant safety must be maintained through component maintenance activities. Maintenance will become more important as the number of aged plants increases. In planning maintenance, one must select appropriate components and intervals. In general, Fussell-Vesely importance (FV) and Risk Achievement Worth (RAW) are used as risk indicators for maintenance. A priority order of components can be evaluated using those risk indicators at a given condition; however, the influence of aging (time history) on the order cannot be estimated directly. In this paper, the change of conditional core damage probability (ΔCCDP) and the change of conditional containment failure probability (ΔCCFP) are proposed as additional indicators in which the aging effect is evaluated directly so as to determine the priority order. A simplified level one probabilistic risk assessment (PRA) has been carried out in order to investigate the change of the risk indicators caused by the change of component failure probabilities due to aging. In the analyses, three conditions are assumed: a base (original) state, an aging state, and a further aging state without maintenance activities. It is demonstrated that the proposed indicators (ΔCCDP and ΔCCFP) reveal the aging effect of each component, while the changes of FV and RAW show unrealistic behavior through the states. As a result, it is found that ΔCCDP and ΔCCFP are superior to the others in terms of the ability to evaluate components appropriately in deteriorated (aging) states and to take account of differences in deterioration behavior. It is also found that the priority order for maintenance of multiple components at the same time can be evaluated using ΔCCDP and ΔCCFP. Additionally, risk-informed decision making based on risk acceptance criteria can be discussed for the maintenance procedure using the

  15. Swedish earthquakes and acceleration probabilities

    International Nuclear Information System (INIS)

    Slunga, R.

    1979-03-01

    A method to assign probabilities to ground accelerations for Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave propagation computations, the relation between the seismic strength of the earthquake, focal depth, distance and ground acceleration is calculated. We found that most Swedish earthquakes are shallow; the largest earthquake of the area, the 1904 earthquake 100 km south of Oslo, is an exception and probably had a focal depth exceeding 25 km. For the nuclear power plant sites an annual probability of 10^-5 has been proposed as interesting. This probability gives ground accelerations in the range 5-20 % g for the sites. This acceleration is for a free bedrock site. For consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we get the epicentral acceleration of this earthquake to be 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with existing distant instrumental data. (author)

  16. Probable impact of age and hypoxia on proliferation and microRNA expression profile of bone marrow-derived human mesenchymal stem cells

    Directory of Open Access Journals (Sweden)

    Norlaily Mohd Ali

    2016-01-01

    Full Text Available Decline in the therapeutic potential of bone marrow-derived mesenchymal stem cells (MSC) is often seen with older donors as compared to young ones. Although hypoxia is known as an approach to improve the therapeutic potential of MSC in terms of cell proliferation and differentiation capacity, its effects on MSC from aged donors have not been well studied. To evaluate the influence of hypoxia on different age groups, MSC from young (<60 years) and old (>60 years) donors were expanded under hypoxic (5% O2) and normal (20% O2) culture conditions. MSC from old donors exhibited a reduction in proliferation rate and differentiation potential together with the accumulation of senescence features compared to those of young donors. However, MSC cultured under hypoxic conditions showed enhanced self-renewal and proliferation capacity in both age groups as compared to normal conditions. Bioinformatic analysis of the gene ontology (GO) and KEGG pathways under the hypoxic culture condition identified hypoxia-inducible miRNAs that were found to target transcriptional activity, leading to enhanced cell proliferation and migration as well as a decrease in growth arrest and apoptosis through the activation of multiple signaling pathways. Overall, the differentially expressed miRNAs provided additional information to describe the biological changes of young and aged MSCs expanded under hypoxic culture conditions at the molecular level. Based on our findings, the therapeutic potential hierarchy of MSC according to donor age group and culture conditions can be categorized in the following order: young (hypoxia) > young (normoxia) > old (hypoxia) > old (normoxia).

  17. Spallation reactions: calculations

    International Nuclear Information System (INIS)

    Bertini, H.W.

    1975-01-01

    Current methods for calculating spallation reactions over various energy ranges are described and evaluated. Recent semiempirical fits to existing data will probably yield the most accurate predictions for these reactions in general. However, if the products in question have binding energies appreciably different from their isotopic neighbors and if the cross section is approximately 30 mb or larger, then the intranuclear-cascade-evaporation approach is probably better suited. (6 tables, 12 figures, 34 references) (U.S.)

  18. Transformation of even-aged European beech (Fagus sylvatica L.) to uneven-aged management under changing growth conditions caused by climate change

    DEFF Research Database (Denmark)

    Schou, Erik; Meilby, Henrik

    2013-01-01

    Transformation from even-aged to uneven-aged forest management is currently taking place throughout Europe. Climate change is, however, expected to change growth conditions—possibly quite radically. Using a deterministic approach, it was the objective of this study to investigate the influence of such changes on optimal transformation strategies for an even-aged stand of European Beech in Denmark. For a range of growth change scenarios, represented by changes in site index, optimal harvest policies were determined using a matrix modelling approach and a differential evolution algorithm. Transition probabilities were updated continuously based on stand level variables and the transition matrix was thus dynamic. With optimal transformation policies, stand development followed similar pathways during the transformation phase irrespective of climate change scenario. Optimal transformation policies were thus
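
    The optimization layer can be caricatured in a few lines with SciPy's differential evolution and a toy size-class matrix model (the matrix, prices and bounds are invented for illustration and bear no relation to the study's data): choose per-period harvest fractions to maximize discounted income.

      import numpy as np
      from scipy.optimize import differential_evolution

      G = np.array([[0.80, 0.00, 0.00],   # column j: fate of size class j
                    [0.15, 0.85, 0.00],   # per period (upgrowth on the
                    [0.00, 0.10, 0.95]])  # subdiagonal, ~5% mortality)
      price = np.array([10.0, 40.0, 90.0])
      n0 = np.array([500.0, 200.0, 50.0])

      def neg_npv(h, periods=10, disc=0.97):
          h = h.reshape(periods, 3)        # harvest fractions per period/class
          n, npv = n0.copy(), 0.0
          for k in range(periods):
              cut = h[k] * n
              npv += disc**k * price @ cut
              n = G @ (n - cut)
          return -npv

      res = differential_evolution(neg_npv, bounds=[(0.0, 0.5)] * 30,
                                   seed=0, maxiter=50, polish=False)
      print(-res.fun)                      # best discounted income found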

  19. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem for real-valued data. These plots, which are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P plots.

  20. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  1. Electrical-Based Diagnostic Techniques for Assessing Insulation Condition in Aged Transformers

    Directory of Open Access Journals (Sweden)

    Issouf Fofana

    2016-08-01

    Full Text Available The condition of the internal cellulosic paper and oil insulation is of concern for the performance of power transformers. Over the years, a number of methods have been developed to diagnose and monitor the degradation/aging of the transformer internal insulation system. Some of this degradation/aging can be assessed from electrical responses. Currently there are a variety of electrical-based diagnostic techniques available for insulation condition monitoring of power transformers. In most cases, the electrical signals being monitored are due to mechanical or electric changes caused by physical changes in resistivity, inductance or capacitance, moisture, contamination or aging by-products in the insulation. This paper presents a description of commonly used and modern electrical-based diagnostic techniques along with their interpretation schemes.

  2. Groundwater flow modelling under ice sheet conditions. Scoping calculations

    International Nuclear Information System (INIS)

    Jaquet, O.; Namar, R.; Jansson, P.

    2010-10-01

    The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheet and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, the understanding of groundwater flow patterns in connection to glaciations is an important issue for geological disposal over the long term. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the

  3. Groundwater flow modelling under ice sheet conditions. Scoping calculations

    Energy Technology Data Exchange (ETDEWEB)

    Jaquet, O.; Namar, R. (In2Earth Modelling Ltd (Switzerland)); Jansson, P. (Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden))

    2010-10-15

    The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheet and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, the understanding of groundwater flow patterns in connection to glaciations is an important issue for geological disposal over the long term. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the

  4. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming framework that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  5. Aging and condition monitoring of electric cables in nuclear power plants

    International Nuclear Information System (INIS)

    Lofaro, R.J.; Grove, E.; Soo, P.

    1998-05-01

    There are a variety of environmental stressors in nuclear power plants that can influence the aging rate of components; these include elevated temperatures, high radiation fields, and humid conditions. Exposure to these stressors over long periods of time can cause degradation of components that may go undetected unless the aging mechanisms are identified and monitored. In some cases the degradation may be mitigated by maintenance or replacement. However, some components receive neither and are thus more susceptible to aging degradation, which might lead to failure. One class of components that falls in this category is electric cables. Cables are very often overlooked in aging analyses since they are passive components that require no maintenance. However, they are very important components since they provide power to safety related equipment and transmit signals to and from instruments and controls. This paper will look at the various aging mechanisms and failure modes associated with electric cables. Condition monitoring techniques that may be useful for monitoring degradation of cables will also be discussed.

  6. Handwriting, visuomotor integration, and neurological condition at school age

    NARCIS (Netherlands)

    van Hoorn, Jessika F.; Maathuis, Carel G. B.; Peters, Lieke H. J.; Hadders-Algra, Mijna

    2010-01-01

    Aim: The study investigated the relationships between handwriting, visuomotor integration, and neurological condition. We paid particular attention to the presence of minor neurological dysfunction (MND). Method: Participants were 200 children (131 males, 69 females; age range 8-13y) of whom 118

  7. Computation of Probabilities in Causal Models of History of Science

    Directory of Open Access Journals (Sweden)

    Osvaldo Pessoa Jr.

    2006-12-01

    Full Text Available The aim of this paper is to investigate the ascription of probabilities in a causal model of an episode in the history of science. The aim of such a quantitative approach is to allow the implementation of the causal model in a computer, to run simulations. As an example, we look at the beginning of the science of magnetism, “explaining” — in a probabilistic way, in terms of a single causal model — why the field advanced in China but not in Europe (the difference is due to different prior probabilities of certain cultural manifestations). Given the number of years between the occurrences of two causally connected advances X and Y, one proposes a criterion for stipulating the value pY|X of the conditional probability of an advance Y occurring, given X. Next, one must assume a specific form for the cumulative probability function pY|X(t), which we take to be the time integral of an exponential distribution function, as is done in the physics of radioactive decay. Rules for calculating the cumulative functions for more than two events are mentioned, involving composition, disjunction and conjunction of causes. We also consider the problems involved in supposing that the appearance of events in time follows an exponential distribution, which are a consequence of the fact that a composition of causes does not follow an exponential distribution, but a “hypoexponential” one. We suggest that a gamma distribution function might more adequately represent the appearance of advances.
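
    The stipulation criterion can be made concrete in a few lines: given a stipulated conditional probability p over the observed gap of dt years, the exponential assumption p(t) = 1 - exp(-lambda*t) fixes the rate lambda, after which the cumulative probability is defined for any t. (As noted above, a composition of causes would instead give a hypoexponential form, which this sketch does not treat.)

      import math

      def rate_from_conditional(p, dt_years):
          # lambda such that 1 - exp(-lambda * dt_years) == p
          return -math.log(1.0 - p) / dt_years

      # Illustrative: advance Y followed its cause X after ~20 years,
      # with a stipulated conditional probability pY|X(20) = 0.5.
      lam = rate_from_conditional(0.5, 20.0)
      for t in (5.0, 20.0, 50.0):
          print(t, 1.0 - math.exp(-lam * t))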

  8. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

  9. Numerical determination of transmission probabilities in cylindrical geometry

    International Nuclear Information System (INIS)

    Queiroz Bogado Leite, S. de.

    1989-11-01

    Efficient methods for numerical calculation of transmission probabilities in cylindrical geometry are presented. Relative errors of the order of 10^-5 or smaller are obtained using analytical solutions and low-order quadrature integration schemes. (author) [pt

  10. Risk, reward, and decision-making in a rodent model of cognitive aging

    Directory of Open Access Journals (Sweden)

    Ryan J Gilbert

    2012-01-01

    Full Text Available Impaired decision-making in aging can directly impact factors (financial security, quality of healthcare) that are critical to maintaining quality of life and independence at advanced ages. Naturalistic rodent models mimic human aging in other cognitive domains, and afford the opportunity to parse the effects of age on discrete aspects of decision-making in a manner relatively uncontaminated by experiential factors. Young adult (5-7 mo.) and aged (23-25 mo.) male F344 rats were trained on a probability discounting task in which they made discrete-trial choices between a small certain reward (1 food pellet) and a large but uncertain reward (2 food pellets) with varying probabilities of delivery ranging from 100% to 0%. Young rats chose the large reward when it was associated with a high probability of delivery and shifted to the smaller but certain reward as the probability of the large reward decreased. As a group, aged rats performed comparably to young, but there was significantly greater variance among aged rats. One subgroup of aged rats showed strong preference for the small certain reward. This preference was maintained under conditions in which large reward delivery was certain, suggesting decreased sensitivity to reward magnitude. In contrast, another subgroup of aged rats showed strong preference for the large reward at low probabilities of delivery. Interestingly, this subgroup also showed elevated preference for probabilistic rewards when reward magnitudes were equalized. Previous findings using this same aged study population described strongly attenuated discounting of delayed rewards with age, together suggesting that a subgroup of aged rats may have deficits associated with accounting for costs (i.e., delay, probability). These deficits in cost-accounting were dissociable from the age-related differences in sensitivity to reward magnitude, suggesting that aging influences multiple, distinct neural mechanisms that can impact cost-benefit decision-making.

  11. Risk, reward, and decision-making in a rodent model of cognitive aging.

    Science.gov (United States)

    Gilbert, Ryan J; Mitchell, Marci R; Simon, Nicholas W; Bañuelos, Cristina; Setlow, Barry; Bizon, Jennifer L

    2011-01-01

    Impaired decision-making in aging can directly impact factors (financial security, health care) that are critical to maintaining quality of life and independence at advanced ages. Naturalistic rodent models mimic human aging in other cognitive domains, and afford the opportunity to parse the effects of age on discrete aspects of decision-making in a manner relatively uncontaminated by experiential factors. Young adult (5-7 months) and aged (23-25 months) male F344 rats were trained on a probability discounting task in which they made discrete-trial choices between a small certain reward (one food pellet) and a large but uncertain reward (two food pellets with varying probabilities of delivery ranging from 100 to 0%). Young rats chose the large reward when it was associated with a high probability of delivery and shifted to the small but certain reward as probability of the large reward decreased. As a group, aged rats performed comparably to young, but there was significantly greater variance among aged rats. One subgroup of aged rats showed strong preference for the small certain reward. This preference was maintained under conditions in which large reward delivery was also certain, suggesting decreased sensitivity to reward magnitude. In contrast, another subgroup of aged rats showed strong preference for the large reward at low probabilities of delivery. Interestingly, this subgroup also showed elevated preference for probabilistic rewards when reward magnitudes were equalized. Previous findings using this same aged study population described strongly attenuated discounting of delayed rewards with age, together suggesting that a subgroup of aged rats may have deficits associated with accounting for reward costs (i.e., delay or probability). These deficits in cost-accounting were dissociable from the age-related differences in sensitivity to reward magnitude, suggesting that aging influences multiple, distinct mechanisms that can impact cost-benefit decision-making.
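
    One way to quantify such choice data is to fit a logistic curve of P(choose large) against the delivery probability and read off the indifference point, to be compared with the risk-neutral benchmark p = 0.5 (where p x 2 pellets = 1 pellet). The choice rates below are illustrative, not the study's data.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(p, p50, k):
          return 1.0 / (1.0 + np.exp(-k * (p - p50)))

      p_large   = np.array([1.0, 0.75, 0.5, 0.25, 0.0])    # delivery probability
      choice_lg = np.array([0.95, 0.85, 0.55, 0.25, 0.05]) # illustrative rates
      (p50, k), _ = curve_fit(logistic, p_large, choice_lg, p0=[0.5, 5.0])
      print(f"indifference point: p = {p50:.2f} (risk-neutral benchmark 0.50)")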

  12. Connections rigidity effect on probability of fracture in steel moment frames

    Directory of Open Access Journals (Sweden)

    Gholamreza Abdollahzadeh

    2017-08-01

    Full Text Available Connections in steel moment frames are commonly idealized as either fully pinned or fully rigid. With this assumption, contrary to the real behavior of the connections, story drifts are underestimated and the frame may be designed without bracing. There are several methods for modeling the actual behavior of semi-rigid connections; in the method used here, a connection with a given rigidity is modeled by a rotational spring with the corresponding stiffness, obtained from an established formula, so that each percentage of rigidity corresponds to one rotational spring stiffness. In this research, in order to evaluate the effect of the real behavior of connections in the analysis and design process on the probability of fracture, a four-story, one-bay frame with three types of connection was modeled and designed in ETABS, with rigidities equal to 10, 75 and 90 percent. With respect to the maximum roof drift under different PGA values, the probabilities of low, medium, high and complete fracture were calculated: applying different PGA values to the modeled frames yields the roof drifts, which are then compared with the limit values given in the American code. The investigation showed that as the rigidity of the frame connections increases, the probability of frame fracture decreases. In other words, the fully rigid assumption for connections in the analysis process leads to underestimating the real probability of fracture in frames, which is a noticeable risk in building design processes.

  13. Frequency of vaginal conditional-pathogenic microflora dependency from age in conditions of normocenosis and disbiosis

    OpenAIRE

    Gruzevsky, A. A.; Zyablitsev, S. V.; Chernobrivtsev, P. A.

    2017-01-01

    Gruzevsky A. A., Zyablitsev S. V., Chernobrivtsev P. A. Frequency of vaginal conditional-pathogenic microflora dependency from age in conditions of normocenosis and disbiosis. Journal of Education, Health and Sport. 2017;7(2):509-522. eISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.399315 http://ojs.ukw.edu.pl/index.php/johs/article/view/4360

  14. Effects of surface conditioning on repair bond strengths of non-aged and aged microhybrid, nanohybrid, and nanofilled composite resins

    NARCIS (Netherlands)

    Rinastiti, Margareta; Siswomihardjo, Widowati; Busscher, Henk J.; Ozcan, Mutlu

    2011-01-01

    This study evaluates effects of aging on repair bond strengths of microhybrid, nanohybrid, and nanofilled composite resins and characterizes the interacting surfaces after aging. Disk-shaped composite specimens were assigned to one of three aging conditions: (1) thermocycling (5,000x, 5-55 degrees

  15. A Non-parametric Method for Calculating Conditional Stressed Value at Risk

    Directory of Open Access Journals (Sweden)

    Kohei Marumo

    2017-01-01

    Full Text Available We consider the Value at Risk (VaR) of a portfolio under stressed conditions. In practice, the stressed VaR (sVaR) is commonly calculated using a data set that includes the stressed period; it tells us how much the risk amount increases if we use the stressed data set. In this paper, we consider the VaR under stress scenarios. Technically, this can be done by deriving the distribution of profit or loss conditioned on the values of risk factors. We use two methods: one that uses a linear model and one that uses the Hermite expansion discussed by Marumo and Wolff (2013, 2016). Numerical examples show that the method using the Hermite expansion is capable of capturing non-linear effects such as correlation collapse and volatility clustering, which are often observed in the markets.
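
    The linear-model variant can be sketched directly: regress portfolio profit or loss on the risk factor, shift the regression to the stressed factor value, and take a quantile of the conditional distribution. This simple version ignores the correlation collapse and volatility clustering that the Hermite-expansion method is designed to capture; all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      factor = rng.normal(0.0, 1.0, 5000)             # e.g. an index return
      pl = 2.0 * factor + rng.normal(0.0, 0.5, 5000)  # P/L, long the factor

      a, b = np.polyfit(factor, pl, 1)                # pl ~ a * factor + b
      resid = pl - (a * factor + b)
      stressed = -3.0                                 # a 3-sigma stress scenario
      cond_pl = a * stressed + b + resid              # conditional P/L distribution
      print("99% stressed VaR:", -np.quantile(cond_pl, 0.01))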

  16. Statistics concerning the Apollo command module water landing, including the probability of occurrence of various impact conditions, successful impact, and body X-axis loads

    Science.gov (United States)

    Whitnah, A. M.; Howes, D. B.

    1971-01-01

    Statistical information for the Apollo command module water landings is presented. This information includes the probability of occurrence of various impact conditions, a successful impact, and body X-axis loads of various magnitudes.

  17. On the use of mean groundwater age, life expectancy and capture probability for defining aquifer vulnerability and time-of-travel zones for source water protection.

    Science.gov (United States)

    Molson, J W; Frind, E O

    2012-01-01

    Protection and sustainability of water supply wells requires the assessment of vulnerability to contamination and the delineation of well capture zones. Capture zones, or more generally, time-of-travel zones corresponding to specific contaminant travel times, are most commonly delineated using advective particle tracking. More recently, the capture probability approach has been used in which a probability of capture of P=1 is assigned to the well and the growth of a probability-of-capture plume is tracked backward in time using an advective-dispersive transport model. This approach accounts for uncertainty due to local-scale heterogeneities through the use of macrodispersion. In this paper, we develop an alternative approach to capture zone delineation by applying the concept of mean life expectancy E (time remaining before being captured by the well), and we show how life expectancy E is related to capture probability P. Either approach can be used to delineate time-of-travel zones corresponding to specific travel times, as well as the ultimate capture zone. The related concept of mean groundwater age A (time since recharge) can also be applied in the context of defining the vulnerability of a pumped aquifer. In the same way as capture probability, mean life expectancy and groundwater age account for local-scale uncertainty or unresolved heterogeneities through macrodispersion, which standard particle tracking neglects. The approach is tested on 2D and 3D idealized systems, as well as on several watershed-scale well fields within the Regional Municipality of Waterloo, Ontario, Canada. Copyright © 2011 Elsevier B.V. All rights reserved.
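
    In one dimension the mean-life-expectancy idea reduces to a classical mean first-passage-time problem: E(x) satisfies the backward equation D E'' + v E' = -1 with E = 0 at the well and a no-flow (reflecting) condition at the divide. A finite-difference sketch with illustrative parameters:

      import numpy as np

      L, n = 1000.0, 201                 # domain length (m), grid points
      x = np.linspace(0.0, L, n); h = x[1] - x[0]
      v, D = 0.1, 5.0                    # velocity (m/d), dispersion (m^2/d)

      A = np.zeros((n, n)); b = -np.ones(n)
      for i in range(1, n - 1):          # central differences of D E'' + v E'
          A[i, i - 1] = D / h**2 - v / (2 * h)
          A[i, i]     = -2 * D / h**2
          A[i, i + 1] = D / h**2 + v / (2 * h)
      A[0, 0], A[0, 1], b[0] = -1.0, 1.0, 0.0   # E'(0) = 0 at the divide
      A[-1, -1], b[-1] = 1.0, 0.0               # E(L) = 0 at the well
      E = np.linalg.solve(A, b)
      print(f"mean life expectancy at the divide: {E[0]:.0f} days")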

  18. Gluon saturation: Survival probability for leading neutrons in DIS

    International Nuclear Information System (INIS)

    Levin, Eugene; Tapia, Sebastian

    2012-01-01

    In this paper we discuss an example of a one-rapidity-gap process: the inclusive cross sections of leading neutrons in deep inelastic scattering with protons (DIS). The equations for this process are proposed and solved, giving an example of the theoretical calculation of the survival probability for one-rapidity-gap processes. It turns out that the value of the survival probability is small and that it decreases with energy.

  19. INFORMATION-ANALYTICAL SYSTEM OF FORECAST VEGETATION FIRES IN NATURAL CONDITIONS

    Directory of Open Access Journals (Sweden)

    R. M. Kogan

    2015-01-01

    Full Text Available A system for the spatial prediction of fire danger as a function of weather and the pyrological characteristics of vegetation was constructed. A method is proposed for calculating, for each month of the fire season, the time during which vegetation fuels remain in a fire-hazardous condition. The probabilities of fires and the danger periods of plant formations in a monsoon climate were calculated. The resulting geographic information system was developed and tested in the Middle Amur region in the Russian Far East.

  20. Construction of computational program of aging in insulating materials for searching reversed sequential test conditions to give damage equivalent to simultaneous exposure of heat and radiation

    International Nuclear Information System (INIS)

    Fuse, Norikazu; Homma, Hiroya; Okamoto, Tatsuki

    2013-01-01

    Two consecutive numerical calculations on the degradation of polymeric insulation under thermal and radiation environments are carried out to simulate the so-called reversed sequential acceleration test. The aim of the calculation is to search for testing conditions which produce material damage equivalent to the case of simultaneous exposure to heat and radiation. At least the following four parameters need to be considered in the sequential method: dose rate and exposure time for the radiation step, as well as temperature and ageing time for the heating step. The present paper discusses the handling of these parameters and shows some trial calculation results. (author)
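
    The authors' degradation model is not reproduced in the abstract; purely to illustrate the search machinery, the sketch below assumes a toy additive damage model (Arrhenius thermal term plus dose term, all constants invented) and solves for the sequential heating time that matches the damage of a simultaneous exposure. In an additive model the two exposures are trivially equivalent; the real difficulty, and the reason a numerical search is needed in practice, is the synergistic (non-additive) interaction of heat and radiation.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed toy damage model (not the authors'): total damage is an additive
# combination of a thermal Arrhenius term and a radiation dose term.
EA_OVER_R = 8000.0    # activation energy / gas constant [K], illustrative
K_THERMAL = 1e8       # thermal pre-exponential factor, illustrative
K_DOSE = 2e-3         # damage per unit dose [1/kGy], illustrative

def thermal_damage(temp_K, hours):
    return K_THERMAL * np.exp(-EA_OVER_R / temp_K) * hours

def radiation_damage(dose_rate_kGy_h, hours):
    return K_DOSE * dose_rate_kGy_h * hours

# Reference: simultaneous exposure (heat and radiation together).
target = thermal_damage(393.0, 1000.0) + radiation_damage(0.1, 1000.0)

# Sequential test: fix the irradiation step (0.5 kGy/h for 100 h) and solve
# for the heating time at 423 K that reproduces the same total damage.
residual = lambda t: (thermal_damage(423.0, t)
                      + radiation_damage(0.5, 100.0) - target)
t_equiv = brentq(residual, 1e-6, 1e6)
print(f"equivalent heating time at 423 K: {t_equiv:.0f} h")
```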

  1. [Socio-demographics characteristics and health conditions of older homeless persons of Lima, Peru].

    Science.gov (United States)

    Moquillaza-Risco, Marlene; León, Elsa; Dongo, Mario; Munayco, César V

    2015-10-01

    Determine the socio-demographic characteristics and health conditions of older homeless persons at the time of enrollment into the National Program "Vida Digna", and the probability of functional dependency by age, stratified by gender and cognitive impairment. MATERIALS AND METHODS: We performed a cross-sectional study, reviewing all registration forms of the program in order to identify socio-demographic variables and health conditions of older homeless persons at the time of enrollment in the program. We did a descriptive analysis of the socio-demographic variables and we also determined the frequency of health conditions. Furthermore, we determined the probability of functional dependency by age, stratified by gender and cognitive impairment, through a logistic regression model. The older homeless persons at the time of enrollment in the program were mostly single men with a primary education or no education. The study subjects had a high frequency of chronic and mental diseases. Half of them had some level of functional impairment, and roughly 70% had some level of cognitive impairment. The probability of functional dependency increased with age and was higher in women than in men. This probability also increased with the level of cognitive impairment. This study shows that older homeless persons are a vulnerable population, not only because they live outdoors but also because of the high prevalence of chronic and mental diseases. These diseases prevent homeless persons from living by themselves, and they require special care to overcome their situation.
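
    The program's registration data are not public, so the sketch below only illustrates how such age-conditional dependency probabilities can be produced with a logistic model, using synthetic data with invented effect sizes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic stand-in for the registration forms: age, gender, cognitive
# impairment level, and a functional-dependency outcome (all invented).
n = 500
age = rng.integers(60, 96, n)
female = rng.integers(0, 2, n)
cog = rng.integers(0, 3, n)                    # 0 none, 1 mild, 2 severe
lin = -12.0 + 0.14 * age + 0.5 * female + 0.8 * cog
dep = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

df = pd.DataFrame({"dep": dep, "age": age, "female": female, "cog": cog})

# Logistic regression of functional dependency on age, gender, impairment.
fit = smf.logit("dep ~ age + female + C(cog)", data=df).fit(disp=0)

# Age-conditional probability of dependency, e.g. for an 80-year-old woman
# with mild cognitive impairment.
print(fit.predict(pd.DataFrame({"age": [80], "female": [1], "cog": [1]})))
```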

  2. Dependency models and probability of joint events

    International Nuclear Information System (INIS)

    Oerjasaeter, O.

    1982-08-01

    Probabilistic dependencies between components/systems are discussed with reference to a broad classification of potential failure mechanisms. Further, a generalized time-dependency model, based on conditional probabilities, for the estimation of the probability of joint events and event sequences is described. The applicability of this model is clarified and demonstrated by various examples. It is concluded that the described dependency model is a useful tool for solving a variety of practical problems concerning the probability of joint events and event sequences where common-cause and time-dependent failure mechanisms are involved. (Auth.)
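
    The report's model itself is not reproduced here; as a bare numerical illustration of the chain-rule mechanics it builds on, the sketch below combines an independent failure probability with a conditional term that models a common-cause dependency (all figures invented).

```python
# Minimal illustration (not the report's model): probability that two
# redundant components both fail, combining independent failures with a
# common-cause contribution via conditional probabilities.
import math

LAMBDA = 1e-4      # failure rate per hour, illustrative
T = 1000.0         # mission time in hours
BETA = 0.1         # assumed common-cause fraction

p_single = 1.0 - math.exp(-LAMBDA * T)          # P(A): one component fails
p_b_given_a = BETA + (1.0 - BETA) * p_single    # P(B | A): raised by dependency
p_joint = p_single * p_b_given_a                # P(A and B) = P(A) * P(B | A)

print(f"independent estimate : {p_single**2:.3e}")
print(f"with dependency      : {p_joint:.3e}")
```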

  3. Use of thermodynamic calculation to predict the effect of Si on the ageing behavior of Al-Mg-Si-Cu alloys

    International Nuclear Information System (INIS)

    Ji, Yanli; Zhong, Hao; Hu, Ping; Guo, Fuan

    2011-01-01

    Research highlights: → Thermodynamic calculation can predict the ageing behavior of 6xxx alloys. → The hardness level of the alloys depends on the Si content in the as-quenched matrix. → The precipitation strengthening effect depends on the Mg2Si level of the alloys. -- Abstract: Thermodynamic calculation was employed to predict the influence of Si content on the ageing behavior of Al-Mg-Si-Cu alloys. In addition, experiments were carried out to verify the predictions. The results show that thermodynamic calculation can predict the effect of Si content on the ageing behavior of the studied alloys. This study further proposes that the hardness level of alloys during ageing is directly related to the Si content in the as-quenched supersaturated solution, while the precipitation strengthening effect is directly related to the Mg2Si level of the alloys.

  4. Environmental conditions can modulate the links among oxidative stress, age, and longevity.

    Science.gov (United States)

    Marasco, Valeria; Stier, Antoine; Boner, Winnie; Griffiths, Kate; Heidinger, Britt; Monaghan, Pat

    2017-06-01

    Understanding the links between environmental conditions and longevity remains a major focus in biological research. We examined within-individual changes between early- and mid-adulthood in the circulating levels of four oxidative stress markers linked to ageing, using zebra finches (Taeniopygia guttata): a DNA damage product (8-hydroxy-2'-deoxyguanosine; 8-OHdG), protein carbonyls (PC), non-enzymatic antioxidant capacity (OXY), and superoxide dismutase activity (SOD). We further examined whether such within-individual changes differed among birds living under control (ad lib food) or more challenging environmental conditions (unpredictable food availability), having previously found that the latter increased corticosterone levels when food was absent but improved survival over a three year period. Our key findings were: (i) 8-OHdG and PC increased with age in both environments, with a higher increase in 8-OHdG in the challenging environment; (ii) SOD increased with age in the controls but not in the challenged birds, while the opposite was true for OXY; (iii) control birds with high levels of 8-OHdG died at a younger age, but this was not the case in challenged birds. Our data clearly show that while exposure to the potentially damaging effects of oxidative stress increases with age, environmental conditions can modulate the pace of this age-related change. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A review of expert judgement and treatment of probability in SR 97

    International Nuclear Information System (INIS)

    Wilmot, R.D.; Crawford, M.B.

    2000-01-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) recently published its latest performance assessment for deep disposal of spent nuclear fuel, based on the KBS-3 concept. This assessment, SR 97, uses three hypothetical repository sites (known as Aberg, Beberg and Ceberg) to provide a range of geological settings and hydrogeological conditions for the assessment. The long-term performance of these sites is compared for several sets of assumptions relating to canister lifetimes, climate evolution, and patterns of human behaviour. This report is a review of SR 97 conducted by Galson Sciences Ltd on behalf of the Swedish Nuclear Power Inspectorate (SKI). The review focussed on the use of expert judgement in the assessment and on the treatment of uncertainty and the use of probability in assessment calculations. The review of SR 97 concluded that SKB had identified many of the judgements made in developing and implementing the assessment and modelling approaches, but that a more formal documentation of the assumptions involved would add to the clarity and transparency of the use of judgements. Similarly, explicit acknowledgement of the basis for making judgements about the treatment of FEPs would improve confidence in the assessment. There are a number of tools that can be useful in justifying the judgements made in an assessment. The review concluded that more use of dialogue with stakeholders, peer review and expert elicitation could all be of value in SKB's assessment programme. Recently introduced regulations in Sweden have established an individual risk criterion for the long-term performance of repositories. SKB has previously identified 'pessimistic' and 'reasonable' values for a number of model parameters, and used these in a range of deterministic calculations to calculate dose and to illustrate system performance. To allow for the calculation of risk, SKB introduced probabilistic analyses into the SR 97 assessment by assigning probabilities of 10

  6. Parallel computational in nuclear group constant calculation

    International Nuclear Information System (INIS)

    Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal

    2002-01-01

    In this paper a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is on the calculation of the collision matrix, which needs a large amount of computational time. The geometry treated here is a concentric cylinder. The calculation of the collision probability matrix is carried out using a semi-analytic method based on the Bickley-Naylor function. To accelerate the computation, several processors are used in parallel to solve the problem. Under LINUX we used PVM-based parallelization with C or Fortran, while under Windows we used socket programming with DELPHI or C++ Builder. The calculation results show the importance of an optimal weight for each processor when processors of different speeds are involved.
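
    The authors' PVM/C implementation is not available; the sketch below only shows the row-distribution idea in Python, with a stand-in Bessel kernel where the real method would evaluate Bickley-Naylor functions Ki_n.

```python
# Not the authors' code: a minimal sketch of distributing the rows of a
# (collision-probability-like) matrix across worker processes.
import numpy as np
from multiprocessing import Pool
from scipy.special import kn   # stand-in kernel only

N = 200
radii = np.linspace(0.1, 5.0, N)

def row(i):
    # Placeholder kernel value per matrix element; in the real method this
    # would be the collision probability between regions i and j.
    return [float(kn(1, abs(radii[i] - r) + 1e-3)) for r in radii]

if __name__ == "__main__":
    with Pool() as pool:                  # one worker per core by default
        matrix = np.array(pool.map(row, range(N), chunksize=10))
    print(matrix.shape)
```

    With processors of unequal speed, a static even split of rows leaves fast workers idle; weighting the chunk sizes by processor speed is the optimization the abstract refers to.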

  7. Neutron Flux Interpolation with Finite Element Method in the Nuclear Fuel Cell Calculation using Collision Probability Method

    International Nuclear Information System (INIS)

    Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti

    2010-01-01

    Nuclear reactor design and analysis of next-generation reactors require comprehensive computing which is best executed on high-performance computers. The flat flux (FF) approach is a common approach for solving an integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even though the neutron cross section is assumed to be equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region will differ depending on the chosen interpolation model. In this study, linear interpolation using the Finite Element Method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry, because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculation results are in good agreement with the results from the SRAC code. In this study, the effects of the mesh on k_eff and other parameters are also investigated.

  8. Improved Membership Probability for Moving Groups: Bayesian and Machine Learning Approaches

    Science.gov (United States)

    Lee, Jinhee; Song, Inseok

    2018-01-01

    Gravitationally unbound loose stellar associations (i.e., young nearby moving groups; hereafter moving groups) have been intensively explored because they are important in planet and disk formation studies, exoplanet imaging, and age calibration. Among the many efforts devoted to the search for moving group members, a Bayesian approach (e.g., using the code BANYAN) has become popular recently because of the many advantages it offers. However, the resulting membership probability needs to be adopted carefully because of its sensitive dependence on input models. In this study, we have developed an improved membership calculation tool focusing on the beta-Pic moving group. We made three improvements to the models used in BANYAN II: (1) updating the list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZUVW. Our improved tool can change membership probabilities by up to 70%. Membership probability is critical and must be better defined: for example, our code identifies only one third of the candidate members in SIMBAD that are believed to be kinematically associated with the beta-Pic moving group. Additionally, we performed a cluster analysis of young nearby stars using an unsupervised machine learning approach. As more moving groups and their members are identified, the complexity and ambiguity of the moving group configuration have increased. To clarify this issue, we analyzed ~4,000 X-ray bright young stellar candidates; here, we present the preliminary results. By re-identifying moving groups with the least human intervention, we expect to understand the composition of the solar neighborhood. Moreover, better-defined moving group membership will help us understand star formation and evolution in relatively low-density environments, especially for the low-mass stars which will be identified in the coming Gaia release.

  9. Integrated failure probability estimation based on structural integrity analysis and failure data: Natural gas pipeline case

    International Nuclear Information System (INIS)

    Dundulis, Gintautas; Žutautaitė, Inga; Janulionis, Remigijus; Ušpuras, Eugenijus; Rimkevičius, Sigitas; Eid, Mohamed

    2016-01-01

    In this paper, the authors present an approach as an overall framework for the estimation of the failure probability of pipelines based on: the results of the deterministic-probabilistic structural integrity analysis (taking into account loads, material properties, geometry, boundary conditions, crack size, and defected zone thickness), the corrosion rate, the number of defects and failure data (involved into the model via application of Bayesian method). The proposed approach is applied to estimate the failure probability of a selected part of the Lithuanian natural gas transmission network. The presented approach for the estimation of integrated failure probability is a combination of several different analyses allowing us to obtain: the critical crack's length and depth, the failure probability of the defected zone thickness, dependency of the failure probability on the age of the natural gas transmission pipeline. A model's uncertainty analysis and uncertainty propagation analysis are performed, as well. - Highlights: • Degradation mechanisms of natural gas transmission pipelines. • Fracture mechanic analysis of the pipe with crack. • Stress evaluation of the pipe with critical crack. • Deterministic-probabilistic structural integrity analysis of gas pipeline. • Integrated estimation of pipeline failure probability by Bayesian method.
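
    The paper's Bayesian model is more elaborate, but the basic move of merging a structural-analysis estimate with observed failure data can be illustrated with a conjugate Gamma-Poisson update (all numbers assumed):

```python
# Illustrative only: combine a failure-rate estimate from structural
# integrity analysis (used as a prior) with observed failure data,
# via a conjugate Gamma-Poisson update.
from scipy.stats import gamma

# Prior: structural analysis suggests ~2e-4 failures per km-year (assumed).
prior_mean = 2e-4
prior_alpha = 2.0
prior_beta = prior_alpha / prior_mean        # Gamma(alpha, rate=beta)

# Observed data: 3 failures over 10,000 km-years of operation (assumed).
failures, exposure = 3, 1.0e4

post_alpha = prior_alpha + failures
post_beta = prior_beta + exposure

posterior = gamma(post_alpha, scale=1.0 / post_beta)
print(f"posterior mean rate : {posterior.mean():.2e} per km-year")
print(f"90% credible bound  : {posterior.ppf(0.95):.2e} per km-year")
```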

  10. CHRONIC MEDICAL CONDITIONS AND REPRODUCIBILITY OF SELF-REPORTED AGE AT MENOPAUSE AMONG COMMUNITY DWELLING WOMEN

    Science.gov (United States)

    de Vries, Heather F.; Northington, Gina M.; Kaye, Elise M.; Bogner, Hillary R.

    2011-01-01

    OBJECTIVE To examine the association between chronic medical conditions and reproducibility of self-reported age at menopause among community-dwelling women. METHODS Age at menopause was assessed in a population-based longitudinal survey of 240 women twice, in 1993 and 2004. Women who recalled age at menopause in 2004 within one year or less of the age at menopause recalled in 1993 (concordant) were compared with women who did not recall age at menopause in 2004 within 1 year of the age at menopause recalled in 1993 (discordant). Type of menopause (surgical or natural) and chronic medical conditions were assessed by self-report. RESULTS One hundred and forty-three women (59.6%) reported surgical menopause and 97 (40.4%) reported natural menopause. In all, 130 (54.2%) of the women recalled age at menopause in 2004 within one year or less of the recalled age at menopause in 1994, while 110 (45.8%) did not. Among women with surgical menopause, women with three or more medical conditions were less likely to have concordant recall of age at menopause than women with fewer than three chronic medical conditions (adjusted odds ratio (OR) = 0.36, 95% confidence interval (CI) [0.15, 0.91]) in multivariate models controlling for potentially influential characteristics including cognition and years from menopause. CONCLUSIONS Among women who underwent surgical menopause, the presence of three or more medical conditions is associated with decreased reproducibility of self-reported age at menopause. PMID:21971208

  11. Chronic medical conditions and reproducibility of self-reported age at menopause among community-dwelling women.

    Science.gov (United States)

    de Vries, Heather F; Northington, Gina M; Kaye, Elise M; Bogner, Hillary R

    2011-12-01

    The aim of this study was to examine the association between chronic medical conditions and reproducibility of self-reported age at menopause among community-dwelling women. Age at menopause was assessed in a population-based longitudinal survey of 240 women twice, in 1993 and 2004. Women who recalled age at menopause in 2004 within 1 year or less of age at menopause recalled in 1993 (concordant) were compared with women who did not recall age at menopause in 2004 within 1 year of age at menopause recalled in 1993 (discordant). Type of menopause (surgical or natural) and chronic medical conditions were assessed by self-report. One hundred forty-three women (59.6%) reported surgical menopause, and 97 (40.4%) reported natural menopause. In all, 130 (54.2%) women recalled age at menopause in 2004 within 1 year or less of recalled age at menopause in 1994, whereas 110 (45.8%) women did not recall age at menopause in 2004 within 1 year or less of recalled age at menopause in 1994. Among the women with surgical menopause, the women with three or more medical conditions were less likely to have concordant recall of age at menopause than the women with less than three chronic medical conditions (adjusted odds ratio, 0.36; 95% CI, 0.15-0.91) in multivariate models controlling for potentially influential characteristics including cognition and years since menopause. Among women who underwent surgical menopause, the presence of three or more medical conditions is associated with decreased reproducibility of self-reported age at menopause.

  12. ZUT, Resonance Integrals in Resolved Region at Various Temperature, Escape Probability Calculation

    International Nuclear Information System (INIS)

    Kuncir, G.F.

    1984-01-01

    1 - Nature of physical problem solved: ZUT computes resonance integrals from resonance parameters for a wide variety of temperatures, compositions, and geometries for the resolved resonances. 2 - Method of solution: The form used permits specification of escape probability as a function of the lump dimension and the mean free path. The absorber term may be treated by the integral method, the narrow resonance or the infinite mass approximation. Moderator terms may be represented either by the full integral method (IM) or the asymptotic (NR) form

  13. Probability of Neutralization Estimation for APR1400 Physical Protection System Design Effectiveness Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Lim, Heoksoon; Na, Janghwan; Chi, Moongoo [Korea Hydro and Nuclear Power Co. Ltd. Central Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    This study focuses on the development of a new design process compatible with international standards such as those suggested by the IAEA and NRC. Evaluation of the design effectiveness was found to be one of the areas to improve: if a design does not meet a certain level of effectiveness, it should be re-designed accordingly. The effectiveness can be calculated as a combination of the probability of interruption and the probability of neutralization. System Analysis of Vulnerability to Intrusion (SAVI) was developed by Sandia National Laboratories for that purpose. With SNL's timely-detection methodology, SAVI has been used by U.S. nuclear utilities to meet the NRC requirements for PPS design effectiveness evaluation. For the SAVI calculation, the probability of neutralization is a vital input element that must be supplied. This paper describes the elements to consider for neutralization, the probability estimation methodology, and the estimation for the APR1400 PPS design effectiveness evaluation process. Markov chain and Monte Carlo simulation methods are often used as simple numerical calculations to estimate P_N; the results from the two methods are not always identical, even for the same situation. P_N values for the APR1400 evaluation were calculated based on the Markov chain method and modified to be applicable to a guards/adversaries ratio-based analysis.
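
    The engagement model used for the APR1400 evaluation is not public; the sketch below shows, on a deliberately crude one-on-one engagement model with an invented win probability, how the same P_N can be obtained exactly from an absorbing Markov chain and approximately from Monte Carlo, and why the two need not coincide.

```python
# Illustrative sketch (not the paper's model): neutralization probability
# for g guards vs a adversaries as an absorbing Markov chain of one-on-one
# engagements, each won by the guard side with probability P_WIN (assumed).
from functools import lru_cache
import random

P_WIN = 0.6  # single-engagement guard win probability, illustrative

@lru_cache(maxsize=None)
def p_neutralize(g, a):
    # State (g, a): absorbing when either side is depleted.
    if a == 0:
        return 1.0                     # all adversaries neutralized
    if g == 0:
        return 0.0                     # guards depleted
    return (P_WIN * p_neutralize(g, a - 1)
            + (1 - P_WIN) * p_neutralize(g - 1, a))

def mc_estimate(g, a, trials=100_000):
    wins = 0
    for _ in range(trials):
        gg, aa = g, a
        while gg and aa:
            if random.random() < P_WIN:
                aa -= 1
            else:
                gg -= 1
        wins += aa == 0
    return wins / trials

print(p_neutralize(5, 3))   # exact Markov-chain value
print(mc_estimate(5, 3))    # Monte Carlo estimate, close but not identical
```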

  14. Thermodynamic calculations of AuxAg1−x – Fluid equilibria and their applications for ore-forming conditions

    International Nuclear Information System (INIS)

    Liang, Y.; Hoshino, K.

    2015-01-01

    Highlights: • Solubilities of Au–Ag solid solutions are calculated over a wide range of conditions. • Ratios of dissolved Au and Ag depend only on pH at intermediate pH. • Fluid conditions for high gold finenesses have been examined. - Abstract: Concentrations of dissolved gold and silver species in hydrothermal fluids equilibrated with Au–Ag solid solutions have been calculated over a wide range of conditions on the well-known fO2–pH diagrams. Ratios of the total concentrations of dissolved gold and silver species (∑Au/∑Ag) are higher at higher pH and lower fO2. The ratios are constant at very low and very high pH conditions, where the major dissolved species of both gold and silver are chloride complexes and thio complexes, respectively, while the ratios depend practically only on pH at intermediate pH conditions, where Au(HS)2− and AgCl2− are the major species. The calculated results indicate that solid solutions of high gold fineness may precipitate from fluids with low ratios of the total concentrations of dissolved gold and silver species when the conditions are (1) low pH and/or (2) high concentration ratios of dissolved chlorine and sulfur and/or (3) high temperatures

  15. Survival and compound nucleus probability of super heavy element Z = 117

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Sridhar, K.N.

    2017-01-01

    As a part of a systematic study for predicting the most suitable projectile-target combinations for heavy-ion fusion experiments in the synthesis of 289-297Ts, we have calculated the transmission probability (T_l), compound nucleus formation probability (P_CN) and survival probability (P_sur) of possible projectile-target combinations. We have also studied the fusion cross section, survival cross section and fission cross sections for different projectile-target combinations of 289-297Ts. These theoretical parameters are required before the synthesis of the super heavy element. The calculated probabilities and cross sections show that the production of isotopes of the super heavy element with Z = 117 is strongly dependent on the reaction systems. The most probable reactions to synthesize the super heavy nuclei 289-297Ts are worked out and listed explicitly. We have also studied the variation of P_CN and P_sur with the mass number of the projectile and target nuclei. This work is useful in the synthesis of the super heavy element Z = 117. (orig.)

  16. A technique of including the effect of aging of passive components in probabilistic risk assessments

    International Nuclear Information System (INIS)

    Phillips, J.H.; Weidenhamer, G.H.

    1992-01-01

    The probabilistic risk assessments (PRAs) being developed at most nuclear power plants to calculate the risk of core damage generally focus on the possible failure of active components; the possible failure of passive components is given little consideration. We are developing methods for selecting risk-significant passive components and including them in PRAs. These methods provide effective ways to prioritize passive components for inspection, and where inspection reveals aging damage, mitigation or repair can be employed to reduce the likelihood of component failure. We demonstrated a method by selecting a weld in the auxiliary feedwater (AFW) system, basing our selection on expert judgement of the likelihood of failure and on an estimate of the consequence of component failure to plant safety. We then modified and used the Piping Reliability Analysis Including Seismic Events (PRAISE) computer code to perform a probabilistic structural analysis to calculate the probability that crack growth due to aging would cause the weld to fail. The PRAISE code was modified to include the effects of design material properties changing with age and of changing stress cycles. The calculation included the effects of mechanical loads and thermal transients typical of the service loads for this piping design, and the effects of thermal cycling caused by a leaking check valve. This particular calculation showed little change in the (already low) component failure probability and plant risk over 48 years of service; however, sensitivity studies showed that if the probability of component failure is high, the effect on plant risk is significant. The success of this demonstration shows that this method could be applied to nuclear power plants. The demonstration also showed that the method is too involved (PRAISE takes a long time to perform the calculation and the input information is extensive) for handling a large number of passive components, and therefore simpler methods are needed.

  17. Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.

    Science.gov (United States)

    Moers, Cornelia; Meyer, Antje; Janse, Esther

    2017-06-01

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups (younger children, 8-12 years; adolescents, 12-18 years; and older Dutch speakers, 62-95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
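
    As a concrete illustration of the TP definition (not the study's corpus pipeline), forward and backward TPs can be estimated from bigram counts in a toy corpus:

```python
# Forward and backward transitional probabilities from bigram counts.
from collections import Counter

corpus = "the cat sat on the mat and the cat slept".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def forward_tp(w1, w2):
    # P(w2 | w1): how predictable a word is given its left neighbour.
    return bigrams[(w1, w2)] / unigrams[w1]

def backward_tp(w1, w2):
    # P(w1 | w2): how predictable a word is given its right neighbour.
    return bigrams[(w1, w2)] / unigrams[w2]

print(forward_tp("the", "cat"))   # 2/3
print(backward_tp("the", "cat"))  # 2/2 = 1.0
```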

  18. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    International Nuclear Information System (INIS)

    Shafii, Mohammad Ali; Meidianti, Rahma; Wildian,; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-01-01

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated using a non-flat flux approach. This means that the neutron flux is allowed to differ from point to point within the nuclear fuel cell, following a quadratic flux distribution. The result is presented here in the form of a quadratic flux that gives a better representation of the real conditions in the cell calculation and serves as a starting point for application in computational calculations

  19. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian,, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)

    2014-09-30

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated using a non-flat flux approach. This means that the neutron flux is allowed to differ from point to point within the nuclear fuel cell, following a quadratic flux distribution. The result is presented here in the form of a quadratic flux that gives a better representation of the real conditions in the cell calculation and serves as a starting point for application in computational calculations.

  20. Probability of fracture and life extension estimate of the high-flux isotope reactor vessel

    International Nuclear Information System (INIS)

    Chang, S.J.

    1998-01-01

    The state of vessel steel embrittlement as a result of neutron irradiation can be measured by its increase in ductile-brittle transition temperature (DBTT) for fracture, often denoted by RT_NDT for carbon steel. This transition temperature can be calibrated by the drop-weight test and, sometimes, by the Charpy impact test. The life extension for the high-flux isotope reactor (HFIR) vessel is calculated by using the method of fracture mechanics, incorporating the effect of the DBTT change. The failure probability of the HFIR vessel is limited, as is the life of the vessel, by the reactor core melt probability of 10^-4. The operating safety of the reactor is ensured by a periodic hydrostatic pressure test (hydrotest). The hydrotest is performed in order to determine a safe vessel static pressure. The fracture probability as a result of the hydrostatic pressure test is calculated and is used to determine the life of the vessel. Failure to perform the hydrotest imposes a limit on the life of the vessel. Conventional methods of fracture probability calculation, such as those used by the NRC-sponsored PRAISE code and the FAVOR code developed in this laboratory, are based on Monte Carlo simulation and require heavy computation. An alternative method of fracture probability calculation by direct probability integration is developed in this paper. The present approach offers a simple and expedient way to obtain numerical results without losing any generality. In this paper, numerical results on (1) the probability of vessel fracture, (2) the hydrotest time interval, and (3) the hydrotest pressure as a result of the DBTT increase are obtained.
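
    The paper's integrand (which involves the irradiation-shifted RT_NDT) is not reproduced here; the sketch below only shows the direct-integration idea on a generic stress-strength formulation with assumed normal distributions, checked against Monte Carlo.

```python
# Illustrative stress-strength sketch: probability of fracture as
# P(K_applied > K_Ic), by direct numerical integration of the probability
# integral, checked against Monte Carlo. All distributions are assumed.
import numpy as np
from scipy import stats
from scipy.integrate import quad

k_app = stats.norm(loc=40.0, scale=6.0)   # applied K, MPa*sqrt(m)
k_ic = stats.norm(loc=70.0, scale=10.0)   # toughness; embrittlement lowers it

# Direct integration: P_f = integral of f_app(k) * P(K_Ic < k) dk
p_direct, _ = quad(lambda k: k_app.pdf(k) * k_ic.cdf(k), 0.0, 150.0)

# Monte Carlo check.
rng = np.random.default_rng(0)
n = 1_000_000
p_mc = np.mean(k_ic.rvs(n, random_state=rng) < k_app.rvs(n, random_state=rng))

print(f"direct integration : {p_direct:.4e}")
print(f"Monte Carlo        : {p_mc:.4e}")
```

    The direct integral needs only a one-dimensional quadrature, which is the computational saving over simulation that the abstract points to.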

  1. Laboratory Measurements of Biomass Cook-stove Emissions Aged in an Oxidation Flow Reactor: Influence of Combustion and Aging Conditions on Aerosols

    Science.gov (United States)

    Grieshop, A. P.; Reece, S. M.; Sinha, A.; Wathore, R.

    2016-12-01

    Combustion in rudimentary and improved cook-stoves used by billions in developing countries can be a regionally dominant contributor to black carbon (BC), primary organic aerosols (POA) and precursors for secondary organic aerosol (SOA). Recent studies suggest that SOA formed during photo-oxidation of primary emissions from biomass burning may make an important contribution to its atmospheric impacts. However, the extent to which stove type and operating conditions affect the amount, composition and characteristics of SOA formed from the aging of cookstove emissions is still largely undetermined. Here we present results from experiments with a field-portable oxidation flow reactor (F-OFR) designed to assess aging of cook-stove emissions in both laboratory and field settings. Laboratory test results are used to compare the quantity and properties of fresh and aged emissions from a traditional open fire and two alternative stove designs operated on the standard and alternate testing protocols. Diluted cookstove emissions were exposed to a range of oxidant concentrations in the F-OFR. Primary emissions were aged both on-line, to study the influence of combustion variability, and sampled from batched emissions in a smog chamber to examine different aging conditions. Data from real-time particle- and gas-phase instruments and integrated filter samples were collected upstream and downstream of the OFR. The properties of primary emissions vary strongly with stove type and combustion conditions (e.g. smoldering versus flaming). Experiments aging diluted biomass emissions from distinct phases of stove operation (smoldering and flaming) showed that peak SOA production for both phases occurred between 3 and 6 equivalent days of aging, with slightly greater production observed in flaming-phase emissions. Changing combustion conditions had a stronger influence than aging on POA+SOA 'emission factors'. Aerosol Chemical Speciation Monitor data show a substantial evolution of aerosol

  2. Probability of collective excited state decay

    International Nuclear Information System (INIS)

    Manykin, Eh.A.; Ozhovan, M.I.; Poluehktov, P.P.

    1987-01-01

    Decay mechanisms of the condensed excited state formed of highly excited (Rydberg) atoms are considered, i.e. the stability of the so-called Rydberg substance is analyzed. It is shown that Auger recombination and radiative transitions are the basic processes. The corresponding probabilities are calculated and compared. It is ascertained that the 'Rydberg substance' possesses a macroscopic lifetime (several seconds) and in a sense is metastable

  3. The reduced transition probabilities for excited states of rare-earths and actinide even-even nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Ghumman, S. S. [Department of Physics, Sant Longowal Institute of Engineering and Technology (Deemed University), Longowal, Sangrur-148106, Punjab, India s-ghumman@yahoo.com (India)

    2015-08-28

    Theoretical B(E2) ratios have been calculated with the DF, DR and Krutov models. A simple method based on the work of Arima and Iachello is used to calculate the reduced transition probabilities within the SU(3) limit of the IBA-I framework. The reduced E2 transition probabilities from the second excited states of rare-earth and actinide even-even nuclei, calculated from experimental energies and intensities from recent data, have been found to compare better with those calculated on the Krutov model and the SU(3) limit of the IBA than with the DR and DF models.

  4. 75 FR 70092 - Special Conditions: Bombardier Inc. Model CL-600-2E25 Airplane, Interaction of Systems and...

    Science.gov (United States)

    2010-11-17

    ... factors of safety) for failure conditions, as a function of system reliability. The Administrator... in the flight manual (speed limitations or avoidance of severe weather conditions, for example). (3... failure-annunciation-system reliability must be included in probability calculations for paragraph (d)(2...

  5. Technical evaluation on high aging, and performance conditions on long-term conservation program

    International Nuclear Information System (INIS)

    Yamashita, Atsushi

    2001-01-01

    In order to ensure the safety and reliable operation of nuclear power plants, maintenance activities based on preventive maintenance are performed at every plant. These include monitoring of operating conditions, patrol inspections and periodic tests of important systems and equipment by operators during plant operation, as well as condition monitoring by maintenance workers; when abnormal conditions are found, a detailed survey is performed and adequate countermeasures, such as repair or replacement, are adopted. In addition, equipment for nuclear power generation is subject to periodic legal examinations and independent inspections. As a result of these maintenance activities, even a plant some 30 years into operation must be assumed to age with elapsing time, even if no indication of age-related deterioration has been recognized so far. Therefore, as a concrete countermeasure, technical evaluations of ageing phenomena were carried out under the assumption of long-term operation of plants with long operating histories, in order to investigate how the obtained results should be reflected in present maintenance activities. This paper describes the efforts on ageing countermeasures and the status of long-term maintenance at the Tsuruga Unit No. 1 Nuclear Power Station. (G.K.)

  6. Overview of input parameters for calculation of the probability of a brittle fracture of the reactor pressure vessel

    International Nuclear Information System (INIS)

    Horacek, L.

    1994-12-01

    The parameters are summarized for a calculation of the probability of brittle fracture of the WWER-440 reactor pressure vessel (RPV). The parameters were selected for 2 basic approaches, viz., one based on the Monte Carlo method and the other on the FORM and SORM methods (First and Second Order Reliability Methods). The approaches were represented by US computer codes VISA-II and OCA-P and by the German ZERBERUS code. The philosophy of the deterministic and probabilistic aspects of the VISA-II code is outlined, and the differences between the US and Czech PWR's are discussed in this context. Briefly described is the partial approach to the evaluation of the WWER type RPV's based on the assessment of their resistance to brittle fracture by fracture mechanics tools and by using the FORM and SORM methods. Attention is paid to the input data for the WWER modification of the VISA-II code. The data are categorized with respect to randomness, i.e. to the stochastic or deterministic nature of their behavior. 18 tabs., 14 refs

  7. Preliminary topical report on comparison reactor disassembly calculations

    International Nuclear Information System (INIS)

    McLaughlin, T.P.

    1975-11-01

    Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs: FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident

  8. Using the Reliability Theory for Assessing the Decision Confidence Probability for Comparative Life Cycle Assessments.

    Science.gov (United States)

    Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2016-03-01

    A comparative decision-making process is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another option. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called the FORM method. The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors, which correspond to a sensitivity analysis in relation to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while also reducing the computational time.
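
    A minimal sketch of the Monte Carlo baseline (the FORM approximation itself is not reproduced here), with assumed lognormal impact distributions:

```python
# Decision confidence probability P(impact_A < impact_B) for two options
# whose LCA impacts are uncertain; the distributions are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed lognormal impact distributions (e.g., kg CO2-eq per unit).
impact_a = rng.lognormal(mean=np.log(10.0), sigma=0.20, size=n)
impact_b = rng.lognormal(mean=np.log(11.0), sigma=0.25, size=n)

confidence = np.mean(impact_a < impact_b)
print(f"P(option A has smaller impact) = {confidence:.3f}")
```

    FORM would instead approximate the limit-state surface g = impact_B - impact_A = 0 near its most probable point, which needs far fewer model evaluations and yields the importance factors as a by-product.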

  9. Reliability calculation of cracked components using probabilistic fracture mechanics and a Markovian approach

    International Nuclear Information System (INIS)

    Schmidt, T.

    1988-01-01

    The numerical reliability calculation of cracked structural components under cyclic fatigue stress can be done with the help of models of probabilistic fracture mechanics. An alternative to the Monte Carlo simulation method is examined; the alternative method is based on the description of failure processes with the help of a Markov process. The Markov method is traced back directly to the stochastic parameters of a two-dimensional fracture mechanics model, the effects of inspections and repairs also being considered. The probability of failure and the expected failure frequency can be determined as time functions from the transition and conditional probabilities of the original or derived Markov process. For the concrete calculation, an approximating Markov chain is designed which, under certain conditions, is capable of giving a sufficient approximation of the original Markov process and the reliability characteristics determined by it. The application of the developed MARKOV program code reveals sufficient agreement with the Monte Carlo reference results. The starting point of the investigation was the 'Deutsche Risikostudie B (DWR)' ('German Risk Study B (DWR)'), specifically the reliability of the main coolant line. (orig./HP) [de

  10. Development of a clinical prediction model to calculate patient life expectancy: the measure of actuarial life expectancy (MALE).

    Science.gov (United States)

    Clarke, M G; Kennedy, K P; MacDonagh, R P

    2009-01-01

    To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers. This incorporated statistical spreadsheet and database access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE; 5-, 10-, and 15-year survival probability; in addition to generic UK population LE. Nineteen medical conditions, which impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice, were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program design represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
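
    The MALE program's actuarial tables are proprietary; the sketch below only shows the life-table arithmetic that turns annual mortality rates into survival probabilities and a life expectancy, with a crude comorbidity hazard multiplier (all rates invented).

```python
# Life-table sketch: survival probabilities and life expectancy from
# annual mortality rates q_x; rates and multiplier are illustrative.
def survival_curve(qx, hazard_multiplier=1.0):
    """Cumulative probability of surviving each successive year."""
    surv, p = [], 1.0
    for q in qx:
        p *= 1.0 - min(q * hazard_multiplier, 1.0)
        surv.append(p)
    return surv

def life_expectancy(qx, hazard_multiplier=1.0):
    # Curtate life expectancy: the sum of the annual survival probabilities.
    return sum(survival_curve(qx, hazard_multiplier))

# Toy mortality rates from age 70 upward, rising with age.
qx = [0.02 * 1.09 ** t for t in range(35)]

print(f"LE, no comorbidity       : {life_expectancy(qx):.1f} years")
print(f"LE, comorbidity (2x risk): {life_expectancy(qx, 2.0):.1f} years")
print(f"10-year survival         : {survival_curve(qx)[9]:.2f}")
```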

  11. Susceptibility and resilience to memory aging stereotypes: education matters more than age.

    Science.gov (United States)

    Andreoletti, Carrie; Lachman, Margie E

    2004-01-01

    The authors examined whether the memory performance of young, middle-aged, and older adults would be influenced by stereotype versus counterstereotype information about age differences on a memory task. One hundred forty-nine adults from a probability sample were randomly assigned to a control group or to age-stereotype conditions. As predicted, counterstereotype information was related to higher recall compared to stereotype and control groups. This was true across all age groups, but only for those with more education. Both stereotype and counterstereotype information were related to lower recall compared to the control group across age groups for those with lower education. Results suggest those with more education are more resilient when faced with negative age stereotypes about memory and respond positively to counterstereotype information. In contrast, those with less education show greater susceptibility to the detrimental effects of age stereotypes and respond negatively to both stereotype and counterstereotype information about memory aging.

  12. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  13. ACCEPTABILITY EVALUATION FOR USING ICRP TISSUE WEIGHTING FACTORS TO CALCULATE EFFECTIVE DOSE VALUE FOR SEPARATE GENDER-AGE GROUPS OF RUSSIAN FEDERATION

    Directory of Open Access Journals (Sweden)

    L. V. Repin

    2013-01-01

    Full Text Available This article describes radiation risk factors for several gender-age population groups according to Russian statistical and medical-demographic data, and evaluates the lethality rate for separate nosologic forms of malignant neoplasms based on Russian cancer registries, following the method of the International Agency for Research on Cancer. Relative damage factors are calculated for the gender-age groups under consideration. The tissue weighting factors recommended by the ICRP for calculating effective doses are compared with the relative damage factors calculated by the ICRP for the nominal population and with similar factors calculated in this work for separate population cohorts in the Russian Federation. The significance of the differences and the feasibility of using tissue weighting factors adapted to the Russian population in assessing population risks in cohorts of different gender-age compositions have been assessed.

  14. Probability of Survival Decision Aid (PSDA)

    National Research Council Canada - National Science Library

    Xu, Xiaojiang; Amin, Mitesh; Santee, William R

    2008-01-01

    A Probability of Survival Decision Aid (PSDA) is developed to predict survival time for hypothermia and dehydration during prolonged exposure at sea in both air and water for a wide range of environmental conditions...

  15. Calculation of radiation and pair production probabilities at arbitrary incidence angles to crystal planes

    International Nuclear Information System (INIS)

    Tikhonirov, V.V.

    1993-01-01

    The results of calculations of the intensity and polarization of radiation from channeled and unchanneled e± are presented. The Fourier transformation (FT) is used to calculate the numerous matrix elements. The calculations for channeled e+ showed that the spectral intensity rapidly approaches its value calculated in the approximation of the self-consistent field (ASCF) as the photon energy grows. In the case of 150 GeV unchanneled e− in Ge at T = 293 K, the ASCF gives a significantly higher value compared to the FT. 4 refs., 3 figs.

  16. Flux-probability distributions from the master equation for radiation transport in stochastic media

    International Nuclear Information System (INIS)

    Franke, Brian C.; Prinja, Anil K.

    2011-01-01

    We present numerical investigations into the accuracy of approximations in the master equation for radiation transport in discrete binary random media. Our solutions of the master equation yield probability distributions of particle flux at each element of phase space. We employ the Levermore-Pomraning interface closure and evaluate the effectiveness of closures for the joint conditional flux distribution for estimating scattering integrals. We propose a parameterized model for this joint-pdf closure, varying between correlation neglect and a full-correlation model. The closure is evaluated for a variety of parameter settings. Comparisons are made with benchmark results obtained through suites of fixed-geometry realizations of random media in rod problems. All calculations are performed using Monte Carlo techniques. Accuracy of the approximations in the master equation is assessed by examining the probability distributions for reflection and transmission and by evaluating the moments of the pdfs. The results suggest the correlation-neglect setting in our model performs best and shows improved agreement in the atomic-mix limit. (author)

  17. On Farmer's line, probability density functions, and overall risk

    International Nuclear Information System (INIS)

    Munera, H.A.; Yadigaroglu, G.

    1986-01-01

    Limit lines used to define quantitative probabilistic safety goals can be categorized according to whether they are based on discrete pairs of event sequences and associated probabilities, on probability density functions (pdf's), or on complementary cumulative density functions (CCDFs). In particular, the concept of the well-known Farmer's line and its subsequent reinterpretations is clarified. It is shown that Farmer's lines are pdf's and, therefore, the overall risk (defined as the expected value of the pdf) that they represent can be easily calculated. It is also shown that the area under Farmer's line is proportional to probability, while the areas under CCDFs are generally proportional to expected value

  18. Random phenomena fundamentals of probability and statistics for engineers

    CERN Document Server

    Ogunnaike, Babatunde A

    2009-01-01

    Prelude; Approach Philosophy; Four Basic Principles; I Foundations: Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework; II Probability: Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack; III Distributions; Ide...

  19. On the probability of the discovery of comets and the reality of the concentration of perihelia

    International Nuclear Information System (INIS)

    Radzievskij, V.V.

    1979-01-01

    The high probability of the discovery of comets is discussed from the point of view of the Golechek visibility conditions. The Golechek function is calculated for 482 comets with period P > 1000 years, selected from the Marsden catalogue. A new empirical formula is obtained for the probability of comet discovery, depending on the Golechek function and on the bending of orbits. It is shown that the observed concentration of the perihelion longitudes λ_π of comet orbits at λ_π = 270 deg cannot be a consequence of a selection effect. It is concluded that the observed concentration of comet orbit perihelia is real and may be considered one of the most important cosmogonic characteristics. A hypothesis of comet origin cannot be considered complete without an explanation of this characteristic

  20. Bremsstrahlung emission probability in the α decay of 210Po

    International Nuclear Information System (INIS)

    Boie, Hans-Hermann

    2009-01-01

    A high-statistics measurement of the bremsstrahlung emitted in the α decay of 210Po has been performed. The measured differential emission probabilities, which could be followed up to γ-energies of ≈ 500 keV, allow for the first time a serious test of various model calculations of the bremsstrahlung-accompanied α decay. It is shown that corrections to the α-γ angular correlation due to the interference between the electric dipole and quadrupole amplitudes and due to the relativistic character of the process have to be taken into account. With the experimentally derived angular correlation, the measured energy-differential bremsstrahlung emission probabilities show excellent agreement with the fully quantum mechanical calculation. (orig.)

  1. Effect of Age, Hair Type and Body Condition Score on Body ...

    African Journals Online (AJOL)

    The study was conducted to determine the influence of age, hair type and body condition score on body weight and body conformation traits using 62 Yankasa rams. The ages of the rams were categorized into three; 12-18, 19-24 and 25-36 months. The hair types which were determined through touching and feeling were ...

  2. Incidents in nuclear research reactor examined by deterministic probability and probabilistic safety analysis

    International Nuclear Information System (INIS)

    Lopes, Valdir Maciel

    2010-01-01

    This study aims to evaluate the potential risks posed by incidents in nuclear research reactors. For its development, two databases of the International Atomic Energy Agency (IAEA) were used: the Incident Report System for Research Reactors and the Research Reactor Data Base. For this type of assessment, Probabilistic Safety Analysis (PSA), within a confidence level of 90%, and Deterministic Probability Analysis (DPA) were used. The probability calculations for the PSA followed the theory and equations of IAEA TECDOC-636. The calculations were implemented in the program Scilab version 5.1.1, freely available and executable on Windows and Linux platforms; a specific routine was developed within Scilab 5.1.1 to obtain the probability results for two distributions, Fisher and chi-square, both at the 90% confidence level. Using the Sordi equations and the Origin 6.0 program, the maximum admissible doses required to satisfy the risk limits established by the International Commission on Radiological Protection (ICRP) were obtained; these maximum doses were also obtained graphically (figure 1) from the plot of calculated probabilities versus maximum admissible doses. It was found that the reliability of the probability results is related to the operational experience (reactor-years and fractions thereof), and that the larger this experience is, the greater the confidence in the outcome. Finally, a list of suggested future work to complement this paper is given. (author)
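
    TECDOC-636's treatment is more detailed, but the basic link between operational experience and confidence can be illustrated with the standard chi-square bound on a Poisson incident rate (numbers assumed):

```python
# 90% upper confidence bound on an incident rate from operational
# experience, using the standard chi-square relation for Poisson counts.
from scipy.stats import chi2

incidents = 2          # observed incidents, assumed
reactor_years = 150.0  # accumulated operational experience, assumed

# One-sided 90% upper bound on the rate: chi2.ppf(0.90, 2*(n+1)) / (2*T)
rate_upper = chi2.ppf(0.90, 2 * (incidents + 1)) / (2.0 * reactor_years)
rate_point = incidents / reactor_years

print(f"point estimate : {rate_point:.4f} per reactor-year")
print(f"90% upper bound: {rate_upper:.4f} per reactor-year")
```

    Increasing the reactor-years drives the upper bound toward the point estimate, which is the abstract's observation that reliability grows with operational experience.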

  3. Reliability of structures by using probability and fatigue theories

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Kim, Dong Hyeok; Park, Yeon Chang

    2008-01-01

    Methodologies to calculate the failure probability and to estimate the reliability of fatigue-loaded structures are developed. Their applicability is evaluated with the help of the fatigue crack growth models suggested by Paris and Walker. The probability theories utilized are the FORM (first-order reliability method), the SORM (second-order reliability method) and MCS (Monte Carlo simulation). It is found that the failure probability decreases with an increase of the design fatigue life and the applied minimum stress, and with a decrease of the initial edge crack size, the applied maximum stress and the slope of the Paris equation. Furthermore, according to the sensitivity analysis of the random variables, the slope of the Paris equation affects the failure probability most strongly among the random variables in both the Paris and the Walker models.
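
    As an illustration of the MCS route, the sketch below propagates random initial crack sizes and stress ranges through a closed-form integration of the Paris law and counts the fraction of samples whose fatigue life falls short of the design life. All distributions and material constants are hypothetical placeholders, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def cycles_to_failure(a0, ac, C, m, dsigma, Y=1.12):
    """Closed-form integration of the Paris law da/dN = C*(dK)^m,
    with dK = Y*dsigma*sqrt(pi*a), valid for m != 2
    (units: a in m, dsigma in MPa, C in m/cycle per (MPa*sqrt(m))^m)."""
    k = C * (Y * dsigma * np.sqrt(np.pi)) ** m
    e = 1.0 - m / 2.0
    return (ac ** e - a0 ** e) / (k * e)

# hypothetical random variables for the Monte Carlo simulation
n = 100_000
a0 = rng.lognormal(np.log(5e-4), 0.2, n)   # initial edge crack size [m]
dsigma = rng.normal(100.0, 10.0, n)        # stress range [MPa]
C, m, ac = 1e-11, 3.0, 0.02                # Paris constants, critical size [m]

Nf = cycles_to_failure(a0, ac, C, m, dsigma)
design_life = 5e5                          # design fatigue life [cycles]
print("failure probability ~", np.mean(Nf < design_life))
```

    Because the Paris slope m enters as an exponent, small changes in it move the estimated life by orders of magnitude, which is consistent with the sensitivity result quoted above.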

  4. Measurement of the resonance escape probability

    International Nuclear Information System (INIS)

    Anthony, J.P.; Bacher, P.; Lheureux, L.; Moreau, J.; Schmitt, A.P.

    1957-01-01

    The average cadmium ratio in natural uranium rods has been measured using natural uranium disks of equal diameter. These values, correlated with independent measurements of the lattice buckling, enabled us to calculate values of the resonance escape probability for the G1 reactor under one or the other of two definitions. Measurements were performed on 26 mm and 32 mm rods, giving the following values for the resonance escape probability p: 0.8976 ± 0.005 and 0.912 ± 0.006 (d. 26 mm), 0.8627 ± 0.009 and 0.884 ± 0.01 (d. 32 mm). The influence of either definition on the lattice parameters is discussed, leading to values of the effective integral. Similar experiments have been performed with thorium rods. (author)

  5. Heating calculation features at self-start of large asynchronous motor

    Science.gov (United States)

    Shevchenko, A. A.; Temlyakova, Z. S.; Grechkin, V. V.; Vilberger, M. E.

    2017-10-01

    The article proposes a method for optimizing the incremental heating calculation in the active volume of a large asynchronous motor for certain kinds of load characteristics. The incremental heating calculation is motivated by the need to determine the aging level of the insulation and to predict the decrease in the electric machine's service life. The method is based on automating the calculation of heating when simulating the self-starting process of the motor after an AC voltage dip is cleared.

  6. Quantum probabilities of composite events in quantum measurements with multimode states

    International Nuclear Information System (INIS)

    Yukalov, V I; Sornette, D

    2013-01-01

    The problem of defining quantum probabilities of composite events is considered. This problem is of great importance for the theory of quantum measurements and for quantum decision theory, which is a part of measurement theory. We show that the Lüders probability of consecutive measurements is a transition probability between two quantum states and that this probability cannot be treated as a quantum extension of the classical conditional probability. The Wigner distribution is shown to be a weighted transition probability that cannot be accepted as a quantum extension of the classical joint probability. We suggest the definition of quantum joint probabilities by introducing composite events in multichannel measurements. The notion of measurements under uncertainty is defined. We demonstrate that the necessary condition for mode interference is the entanglement of the composite prospect together with the entanglement of the composite statistical state. As an illustration, we consider an example of a quantum game. Special attention is paid to the application of the approach to systems with multimode states, such as atoms, molecules, quantum dots, or trapped Bose-condensed atoms with several coherent modes. (paper)
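
    A minimal two-level (qubit) illustration of the Lüders transition probability discussed here can be written in a few lines; the density matrix and measurement axes below are arbitrary choices for demonstration:

```python
import numpy as np

# Lueders rule for consecutive projective measurements:
# p(B|A) = Tr(P_B P_A rho P_A P_B) / Tr(P_A rho P_A)
rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)  # hypothetical state
P_A = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0| (sigma_z outcome)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
P_B = np.outer(plus, plus.conj())                        # |+><+| (sigma_x outcome)

post = P_A @ rho @ P_A               # unnormalized post-measurement state
p_luders = (np.trace(P_B @ post @ P_B) / np.trace(post)).real
print(p_luders)  # 0.5 = |<+|0>|^2: a transition probability between the
                 # states |0> and |+>, independent of rho, illustrating why
                 # it is not a classical conditional probability
```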

  7. Calculated and experimental definition of neutron-physical and temperature conditions of material testing in the SM reactor

    International Nuclear Information System (INIS)

    Toporova, V.G.; Pimenov, V.V.

    2004-01-01

    Full text: Reactor material science is one of the main scientific directions of RIAR activities. In particular, a wide range of materials and products is tested under irradiation in the SM reactor facility (RF SM). To meet the technical specification for an experiment, the test conditions are chosen in advance: at a minimum, the space-energy distribution of neutrons, the heating rate in the materials under test, and the temperature conditions of irradiation are important. Up-to-date software and nuclear data libraries allow the modeling of neutron-material interaction processes in considerable detail, so that true neutron distributions can be obtained by calculation. Following extensive verification work, a calculation model developed on the basis of the MCU applied software package (option MCU-4/SM22) and the analogue Monte Carlo method is widely used at RIAR. The MCU geometric module makes it possible to model the SM core and reflector in three-dimensional geometry with sufficient accuracy and to describe all elements of the channel structure and of the irradiation device with specimens. The calculation model of RF SM is tested against the results of activation experiments performed in its critical assembly, whose geometric parameters and structural materials correspond completely to the prototype. The difference between the calculated and experimental values is less than 2.5%. The possibilities of calculated estimation of the operating temperature conditions of absorbing elements under irradiation should be considered separately. As the calculations and their analysis show, to define the fuel column temperature correctly one needs reliable data on the thermal-physical parameters of the materials, especially ceramic ones such as dysprosium titanate or boron carbide. This is very important for boron carbide absorbing elements for practically all of their operation parameters (such as gas release, swelling

  8. Domestic wells have high probability of pumping septic tank leachate

    Science.gov (United States)

    Bremer, J. E.; Harter, T.

    2012-08-01

    Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
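
    The headline quantity, the chance that a well's upgradient source area overlaps at least one drainfield, can be approximated with a Boolean (Poisson) intersection model. The sketch below is a simplified stand-in for the paper's detailed flow-and-transport analysis, with made-up geometry and density values:

```python
import numpy as np

def overlap_probability(density_per_km2, source_len_m, source_width_m,
                        field_len_m, field_width_m):
    """Boolean (Poisson) model: probability that a well's upgradient
    source area intersects at least one septic drainfield. Dilate the
    source area by the drainfield dimensions, then
    P = 1 - exp(-density * dilated_area)."""
    area_km2 = ((source_len_m + field_len_m) *
                (source_width_m + field_width_m)) / 1e6
    return 1.0 - np.exp(-density_per_km2 * area_km2)

# hypothetical numbers: 40 systems/km^2, a 500 m x 20 m source area,
# and 20 m x 10 m drainfields
print(overlap_probability(40, 500, 20, 20, 10))  # ~0.46
```

    The exponent grows with septic system density and with the source-area length (itself controlled by hydraulic conductivity), mirroring the dependencies reported in the study.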

  9. Domestic wells have high probability of pumping septic tank leachate

    Directory of Open Access Journals (Sweden)

    J. E. Bremer

    2012-08-01

    Full Text Available Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25–30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).

  10. Probability theory and statistical applications a profound treatise for self-study

    CERN Document Server

    Zörnig, Peter

    2016-01-01

    This accessible and easy-to-read book provides many examples to illustrate diverse topics in probability and statistics, from initial concepts up to advanced calculations. Special attention is devoted, e.g., to independence of events, inequalities in probability, and functions of random variables. The book is directed to students of mathematics, statistics, engineering, and other quantitative sciences.

  11. Conditional Genealogies and the Age of a Neutral Mutant

    DEFF Research Database (Denmark)

    Wiuf, Carsten; Donelly, P

    1999-01-01

    This paper is concerned with the structure of the genealogy of a sample in which it is observed that some subset of chromosomes carries a particular mutation, assumed to have arisen uniquely in the history of the population. A rigorous theoretical study of this conditional genealogy is given using coalescent methods. Particular results include the mean, variance, and density of the age of the mutation conditional on its frequency in the sample. Most of the development relates to populations of constant size, but we discuss the extension to populations which have grown exponentially to their present...

  12. Monte Carlo calculation of the total probability for gamma-Ray interaction in toluene

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Garcia-Torano, E.

    1983-01-01

    Interaction and absorption probabilities for gamma rays with energies between 1 and 1000 keV have been computed and tabulated, assuming a toluene-based scintillator solution. Both point sources and homogeneously dispersed radioactive material have been considered. The tables may be applied to cylinders with radii between 0.25 cm and 1.25 cm and heights between 0.20 cm and 4.07 cm. (Author) 26 refs
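
    For the point-source case, the underlying Monte Carlo estimate can be sketched in a few lines: sample isotropic emission directions from a source at the cylinder centre, compute the chord length to the surface, and average the interaction probability along each chord. The attenuation coefficient below is a placeholder; the cylinder dimensions are the largest ones quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)

def interaction_probability(mu_cm, radius_cm, half_height_cm, n=200_000):
    """Monte Carlo estimate of the probability that a photon emitted
    isotropically from a point source at the cylinder centre interacts
    before escaping: P = E[1 - exp(-mu * d)], with d the distance to
    the cylinder surface along the sampled direction (the azimuthal
    angle drops out by symmetry)."""
    cos_t = rng.uniform(-1.0, 1.0, n)              # isotropic polar angle
    sin_t = np.sqrt(1.0 - cos_t**2)
    d_side = radius_cm / np.maximum(sin_t, 1e-12)  # exit through side wall
    d_cap = half_height_cm / np.maximum(np.abs(cos_t), 1e-12)  # top/bottom
    d = np.minimum(d_side, d_cap)
    return float(np.mean(1.0 - np.exp(-mu_cm * d)))

# placeholder attenuation coefficient [1/cm]; largest tabulated vial size
print(interaction_probability(mu_cm=0.1, radius_cm=1.25, half_height_cm=2.035))
```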

  13. Tools and techniques for ageing predictions in nuclear reactors through condition monitoring

    International Nuclear Information System (INIS)

    Verma, R.M.P.

    1994-01-01

    Operating nuclear reactors beyond their design-predicted life is gaining importance because of huge replacement and decommissioning costs. Experience shows, however, that nuclear plant safety and reliability may decline in the later years of plant life due to ageing degradation. Ageing of nuclear plant components, structures and systems, if unmitigated, reduces the safety margins provided in the design and thus increases risks to public health and safety. These safety margins must be monitored throughout plant service life, including any extended life. Condition monitoring of nuclear reactor components, equipment and systems can be done to study the effect of ageing, the status of safety margins, and the effect of corrective and mitigating actions taken. The tools and techniques of condition monitoring are also important in failure trending, predictive maintenance, evaluation of scheduled maintenance, mitigation of ageing, life extension and reliability studies. (author). 1 fig., 1 annexure

  14. Crosslinking of SAVY-4000 O-rings as a Function of Aging Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Van Buskirk, Caleb Griffith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-08

    SAVY-4000 containers were developed as a part of DOE M 441.1-1 to protect workers who handle stored nuclear material from exposure due to loss of containment. The SAVY-4000 is comprised of three parts: a lid, a container, and a cross-linked fluoropolymer O-ring. Degradation of the O-ring during use could limit the lifetime of the SAVY-4000. In order to quantify the chemical changes of the O-ring over time, the molecular weight between crosslinks was determined as a function of aging conditions using a swelling technique. Because the O-ring is a cross-linked polymer, it absorbs solvent into its matrix without dissolving, and the relative amount of solvent uptake can be related to the degree of crosslinking using an equation developed by Paul Flory and John Rehner Jr. This method was used to analyze O-ring samples aged under thermal and ionizing-radiation conditions. It was found that under the harsher thermal aging conditions, in the absence of ionizing radiation, the average molecular weight between crosslinks decreased, indicating a rise in crosslinks, which may be attributable to advanced aging with no ionizing radiation present. Conversely, in the presence of ionizing radiation the material was found to have a higher level of cross-linking with age. This information could be used to help predict the lifetime of the O-rings in SAVY-4000 containers under service conditions.
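
    The Flory-Rehner relation referred to here maps an equilibrium swelling measurement to the average molecular weight between crosslinks. A minimal sketch, with placeholder values for the polymer-solvent pair (the report's actual parameters are not given in this record):

```python
import numpy as np

def flory_rehner_mc(phi, chi, rho_polymer, v_solvent):
    """Average molecular weight between crosslinks from an equilibrium
    swelling measurement, via the Flory-Rehner equation for a
    tetrafunctional network (hence the phi/2 term).
    phi: polymer volume fraction in the swollen gel
    chi: polymer-solvent interaction parameter
    rho_polymer: polymer density [g/cm^3]
    v_solvent: solvent molar volume [cm^3/mol]"""
    numerator = -rho_polymer * v_solvent * (phi ** (1.0 / 3.0) - phi / 2.0)
    denominator = np.log(1.0 - phi) + phi + chi * phi ** 2
    return numerator / denominator  # [g/mol]

# hypothetical values for a fluoropolymer/solvent pair
print(flory_rehner_mc(phi=0.35, chi=0.45, rho_polymer=1.8, v_solvent=100.0))
```

    A lower result indicates more crosslinks per unit mass, which is how the aged O-ring samples are compared across conditions.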

  15. K-shell ionization probability in energetic nearly symmetric heavy-ion collisions

    International Nuclear Information System (INIS)

    Tserruya, I.; Schmidt-Boecking, H.; Schuch, R.

    1977-01-01

    Impact-parameter-dependent K x-ray emission probabilities for the projectile and target atoms have been measured in 35 MeV Cl on Cl, Cl on Ti and Cl on Ni collisions. The sum of the projectile and target K-shell ionization probabilities is taken as a measure of the total 2pσ ionization probability. The 2pπ-2pσ rotational coupling model is in clear disagreement with the present results. On the other hand, the sum of probabilities is reproduced, both in shape and in absolute magnitude, by the statistical model for inner-shell ionization. The K-shell ionization probability of the higher-Z collision partner is well described by this model, including the 2pσ-1sσ vacancy-sharing probability calculated as a function of the impact parameter. (author)

  16. Effects of artificial aging conditions on yttria-stabilized zirconia implant abutments.

    Science.gov (United States)

    Basílio, Mariana de Almeida; Cardoso, Kátia Vieira; Antonio, Selma Gutierrez; Rizkalla, Amin Sami; Santos Junior, Gildo Coelho; Arioli Filho, João Neudenir

    2016-08-01

    Most ceramic abutments are fabricated from yttria-stabilized tetragonal zirconia (Y-TZP). However, Y-TZP undergoes hydrothermal degradation, a process that is not well understood. The purpose of this in vitro study was to assess the effects of artificial aging conditions on the fracture load, phase stability, and surface microstructure of a Y-TZP abutment. Thirty-two prefabricated Y-TZP abutments were screwed and tightened down to external hexagon implants and divided into 4 groups (n = 8): C, control; MC, mechanical cycling (1×10^6 cycles; 10 Hz); AUT, autoclaving (134°C; 5 hours; 0.2 MPa); and TC, thermal cycling (10^4 cycles; 5°C/55°C). A single-load-to-fracture test was performed at a crosshead speed of 0.5 mm/min to assess the assembly's resistance to fracture (ISO Norm 14801). X-ray diffraction (XRD) analysis was applied to observe and quantify the tetragonal-monoclinic (t-m) phase transformation. Representative abutments were examined with high-resolution scanning electron microscopy (SEM) to observe the surface characteristics of the abutments. Load-to-fracture test results (N) were compared by ANOVA and the Tukey test (α=.05). XRD measurements revealed the monoclinic phase in some abutments after each aging condition. All the aging conditions reduced the fracture load significantly (P<.05).

  17. The risk of a major nuclear accident: calculation and perception of probabilities

    International Nuclear Information System (INIS)

    Leveque, Francois

    2013-07-01

    The accident at Fukushima Daiichi, Japan, occurred on 11 March 2011. This nuclear disaster, the third on such a scale, left a lasting mark in the minds of hundreds of millions of people. Much as with Three Mile Island or Chernobyl, yet another place will be permanently associated with a nuclear power plant which went out of control. Fukushima Daiichi revived the issue of the hazards of civil nuclear power, stirring up all the associated passion and emotion. The whole of this paper is devoted to the risk of a major nuclear accident. By this we mean a failure initiating core meltdown, a situation in which the fuel rods melt and mix with the metal in their cladding. Such accidents are classified as at least level 5 on the International Nuclear Event Scale. The Three Mile Island accident, which occurred in 1979 in the United States, reached this level of severity. The explosion of reactor 4 at the Chernobyl plant in Ukraine in 1986 and the recent accident in Japan were classified as level 7, the highest grade on this logarithmic scale. The main difference between the top two levels and level 5 relates to a significant or major release of radioactive material to the environment. In the event of a level-5 accident, damage is restricted to the inside of the plant, whereas in the case of level-7 accidents huge areas of land, above or below the surface, and/or sea may be contaminated. Before the meltdown of reactors 1, 2 and 3 at Fukushima Daiichi, eight major accidents affecting nuclear power plants had occurred worldwide. This is a high figure compared with the one calculated by the experts: observations in the field do not appear to fit the results of the probabilistic models of nuclear accidents produced since the 1970s. Oddly enough, the number of major accidents is closer to the risk as perceived by the general public. In general we tend to overestimate any risk relating to rare, fearsome accidents. What are we to make of this divergence? How are we to reconcile

  18. Changes in opalescence and fluorescence properties of resin composites after accelerated aging.

    Science.gov (United States)

    Lee, Yong-Keun; Lu, Huan; Powers, John M

    2006-07-01

    Opalescence and fluorescence properties, and the correlated translucency and masking effect, of resin composites may change after aging. The objective of this study was to determine the changes in the opalescence and fluorescence properties of resin composites after accelerated aging for 150 kJ/m². Changes in translucency and masking effect were also determined. Color and spectral distribution of seven resin composites (A2 shade, 1-mm thick) were measured in the reflectance and transmittance modes under ultraviolet (UV)-included and UV-excluded conditions. The opalescence parameter (OP) was calculated as the difference in the yellow-blue (Δb*) and red-green (Δa*) coordinates between the reflected and transmitted colors under UV-included and UV-excluded conditions. For the fluorescence evaluation, the color differences (FL-Ref and FL-Trans) caused by the inclusion or exclusion of the UV component of the standard illuminant D65 in the reflectance and transmittance modes were calculated. Under UV-included and UV-excluded conditions, the translucency parameter (TP) was calculated, and the masking effect (ME) was calculated as the color difference between a specimen over a black tile and the black tile itself. Repeated-measures 2-way analysis of variance at the significance level of 0.05 was performed for the values before and after aging. OP values under UV-included and UV-excluded conditions did not change significantly after aging. FL-Ref, FL-Trans, TP and ME values under UV-included and UV-excluded conditions changed significantly after aging (P<.05). Opalescence of resin composites did not change, but fluorescence was no longer detected, after accelerated aging for 150 kJ/m². Translucency and masking effect changed significantly after aging.

  19. Enhanced taurine release in cell-damaging conditions in the developing and ageing mouse hippocampus.

    Science.gov (United States)

    Saransaari, P; Oja, S S

    1997-08-01

    Taurine has been shown to be essential for neuronal development and survival in the central nervous system. The release of preloaded [3H]taurine was studied in hippocampal slices from seven-day-, three-month- and 18-22-month-old mice in cell-damaging conditions. The slices were superfused in hypoxic, hypoglycemic and ischemic conditions and exposed to free radicals and oxidative stress. The release of taurine was greatly enhanced in the above conditions in all age groups, except in oxidative stress. The release was large in ischemia, particularly in the hippocampus of aged mice. Potassium stimulation was still able to release taurine in cell-damaging conditions in immature mice, whereas in adult and aged animals the release was so substantial that this additional stimulus failed to work. Taurine release was partially Ca2+-dependent in all cases. The massive release of the inhibitory amino acid taurine in ischemic conditions could act neuroprotectively, counteracting in several ways the effects of simultaneous release of excitatory amino acids. This protection could be of great importance in developing brain tissue, while also having an effect in aged brains.

  20. Performance in eyeblink conditioning is age and sex dependent.

    Directory of Open Access Journals (Sweden)

    Karolina Löwgren

    Full Text Available A growing body of evidence suggests that the cerebellum is involved in both cognition and language. Abnormal cerebellar development may contribute to neurodevelopmental disorders such as attention deficit hyperactivity disorder (ADHD, autism, fetal alcohol syndrome, dyslexia, and specific language impairment. Performance in eyeblink conditioning, which depends on the cerebellum, can potentially be used to clarify the neural mechanisms underlying the cerebellar dysfunction in disorders like these. However, we must first understand how the performance develops in children who do not have a disorder. In this study we assessed the performance in eyeblink conditioning in 42 typically developing children between 6 and 11 years old as well as in 26 adults. Older children produced more conditioned eyeblink responses than younger children and adults produced more than children. In addition, females produced more conditioned eyeblink responses than males among both children and adults. These results highlight the importance of considering the influence of age and sex on the performance when studying eyeblink conditioning as a measure of cerebellar development.

  1. Estimated Probability of Traumatic Abdominal Injury During an International Space Station Mission

    Science.gov (United States)

    Lewandowski, Beth E.; Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G., Jr.; McRae, Michael P.

    2013-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Abdominal Injury Module (AIM). The AIM calculates an incidence rate of traumatic abdominal injury per person-year of spaceflight on the International Space Station (ISS). The AIM was built so that the probability of traumatic abdominal injury during one year on ISS could be predicted. This result will be incorporated into the IMM Abdominal Injury Clinical Finding Form and used within the parent IMM model.

  2. Time Dependence of Collision Probabilities During Satellite Conjunctions

    Science.gov (United States)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis (CARA) team has recently implemented updated software to calculate the probability of collision (P_c) for Earth-orbiting satellites. The algorithm can employ complex dynamical models for orbital motion, and account for the effects of non-linear trajectories as well as both position and velocity uncertainties. This “3D P_c” method entails computing a 3-dimensional numerical integral for each estimated probability. Our analysis indicates that the 3D method provides several new insights over the traditional “2D P_c” method, even when approximating the orbital motion using the relatively simple Keplerian two-body dynamical model. First, the formulation provides the means to estimate variations in the time derivative of the collision probability, or the probability rate, R_c. For close-proximity satellites, such as those orbiting in formations or clusters, R_c variations can show multiple peaks that repeat or blend with one another, providing insight into the ongoing temporal distribution of risk. For single, isolated conjunctions, R_c analysis provides the means to identify and bound the times of peak collision risk. Additionally, analysis of multiple actual archived conjunctions demonstrates that the commonly used “2D P_c” approximation can occasionally provide inaccurate estimates. These include cases in which the 2D method yields negligibly small probabilities (e.g., P_c < 10^-10), but the 3D estimates are sufficiently large to prompt increased monitoring or collision mitigation (e.g., P_c ≥ 10^-5). Finally, the archive analysis indicates that a relatively efficient calculation can be used to identify which conjunctions will have negligibly small probabilities. This small-P_c screening test can significantly speed the overall risk analysis computation for large numbers of conjunctions.
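
    For contrast with the 3D approach, the conventional 2D P_c reduces to integrating the projected, combined position-uncertainty Gaussian over the hard-body circle in the encounter plane. A minimal sketch of that baseline calculation, assuming an uncorrelated covariance already rotated into the encounter frame (all numbers are hypothetical):

```python
import numpy as np
from scipy import integrate

def pc_2d(xm, ym, sx, sy, hbr):
    """Conventional 2D collision probability: integral of an uncorrelated
    bivariate Gaussian (combined position uncertainty projected onto the
    encounter plane) over the hard-body circle of radius hbr.
    (xm, ym): miss-distance components; (sx, sy): standard deviations."""
    def integrand(y, x):
        return (np.exp(-0.5 * (((x - xm) / sx) ** 2 + ((y - ym) / sy) ** 2))
                / (2.0 * np.pi * sx * sy))
    val, _ = integrate.dblquad(
        integrand,
        -hbr, hbr,                              # x limits
        lambda x: -np.sqrt(hbr**2 - x**2),      # y limits trace the circle
        lambda x: np.sqrt(hbr**2 - x**2))
    return val

# hypothetical conjunction: 300 m miss distance, 100/200 m sigmas, 20 m HBR
print(pc_2d(300.0, 0.0, 100.0, 200.0, 20.0))
```

    The 3D method replaces this single snapshot integral with a time-resolved volume integral, which is what makes the probability rate R_c accessible.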

  3. Multiple-event probability in general-relativistic quantum mechanics

    International Nuclear Information System (INIS)

    Hellmann, Frank; Mondragon, Mauricio; Perez, Alejandro; Rovelli, Carlo

    2007-01-01

    We discuss the definition of quantum probability in the context of 'timeless' general-relativistic quantum mechanics. In particular, we study the probability of sequences of events, or multievent probability. In conventional quantum mechanics this can be obtained by means of the 'wave function collapse' algorithm. We first point out certain difficulties of some natural definitions of multievent probability, including the conditional probability widely considered in the literature. We then observe that multievent probability can be reduced to single-event probability by taking into account the quantum nature of the measuring apparatus. In fact, by exploiting the von Neumann freedom of moving the quantum/classical boundary, one can always trade a sequence of noncommuting quantum measurements at different times for an ensemble of simultaneous commuting measurements on the joint system+apparatus system. This observation permits a formulation of quantum theory based only on single-event probability, where the results of the wave function collapse algorithm can nevertheless be recovered. The discussion also bears on the nature of the quantum collapse.

  4. Risk factors of delay proportional probability in diphtheria-tetanus-pertussis vaccination of Iranian children; Life table approach analysis

    Directory of Open Access Journals (Sweden)

    Mohsen Mokhtari

    2015-01-01

    Full Text Available Despite the success of the Expanded Program on Immunization in increasing vaccination coverage among children worldwide, the timeliness and scheduling of vaccination remain a challenge for public health. This study aimed to identify the factors related to delayed diphtheria-tetanus-pertussis (DTP) vaccination using a life table approach. A historical cohort study was conducted in the poor areas of five large Iranian cities. In total, 3610 children aged 24-47 months who had a documented vaccination card were enrolled. The time of vaccination for the third dose of the DTP vaccine was calculated. Life table survival analysis was used to calculate the proportional probability of vaccination at each time, and the Wilcoxon test was used to compare the proportional probability of delayed vaccination across the studied factors. The overall median delay for DTP3 was 38.52 days. The Wilcoxon test showed that city, nationality, parental education level, birth order and living in rural areas were related to a high probability of delayed DTP3 vaccination (P < 0.05). Distance from the capital and a high concentration of immigrants of low socioeconomic class at the city borders lead to prolonged delays in DTP vaccination. Special attention to these areas is needed to raise parental knowledge and to facilitate access to health care services.
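
    The life-table (actuarial) computation used here, an interval-wise conditional probability of vaccination among children still unvaccinated at the start of each interval, can be sketched as follows. The synthetic delays are drawn from an exponential distribution scaled so its median matches the reported 38.52 days; they are purely illustrative:

```python
import numpy as np

def life_table(delay_days, interval=7, max_day=84):
    """Actuarial life table for vaccination delay: for each interval,
    the conditional probability q of being vaccinated given still
    unvaccinated at its start, and the cumulative proportion still
    unvaccinated afterwards."""
    edges = np.arange(0, max_day + interval, interval)
    at_risk = len(delay_days)
    remaining = 1.0
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        events = int(np.sum((delay_days >= lo) & (delay_days < hi)))
        q = events / at_risk if at_risk > 0 else 0.0
        remaining *= (1.0 - q)
        rows.append((lo, hi, at_risk, events, round(q, 3), round(remaining, 3)))
        at_risk -= events
    return rows

# synthetic delays; scale chosen so the median matches 38.52 days
rng = np.random.default_rng(2)
delays = rng.exponential(38.52 / np.log(2), 500)
for row in life_table(delays)[:4]:
    print(row)
```

    Group-wise comparisons of such tables (e.g., by city or parental education) are what the Wilcoxon test in the study operates on.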

  5. Quantum Zeno and anti-Zeno effects measured by transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenxian, E-mail: wxzhang@whu.edu.cn [School of Physics and Technology, Wuhan University, Wuhan, Hubei 430072 (China); Department of Optical Science and Engineering, Fudan University, Shanghai 200433 (China); CEMS, RIKEN, Saitama 351-0198 (Japan); Kavli Institute for Theoretical Physics China, CAS, Beijing 100190 (China); Kofman, A.G. [CEMS, RIKEN, Saitama 351-0198 (Japan); Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1040 (United States); Zhuang, Jun [Department of Optical Science and Engineering, Fudan University, Shanghai 200433 (China); You, J.Q. [Beijing Computational Science Research Center, Beijing 10084 (China); Department of Physics, Fudan University, Shanghai 200433 (China); CEMS, RIKEN, Saitama 351-0198 (Japan); Nori, Franco [CEMS, RIKEN, Saitama 351-0198 (Japan); Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1040 (United States)

    2013-10-30

    Using numerical calculations, we compare the transition probabilities of many spins in random magnetic fields, subject to either frequent projective measurements, frequent phase modulations, or a mix of modulations and measurements. For various distribution functions, we find the transition probability under frequent modulations is suppressed most if the pulse delay is short and the evolution time is larger than a critical value. Furthermore, decay freezing occurs only under frequent modulations as the pulse delay approaches zero. In the large pulse-delay region, however, the transition probabilities under frequent modulations are highest among the three control methods.

  6. Initial conditions for slow-roll inflation in a random Gaussian landscape

    Energy Technology Data Exchange (ETDEWEB)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu [Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155 (United States)

    2017-07-01

    In the landscape perspective, our Universe begins with a quantum tunneling from an eternally-inflating parent vacuum, followed by a period of slow-roll inflation. We investigate the tunneling process and calculate the probability distribution for the initial conditions and for the number of e-folds of slow-roll inflation, modeling the landscape by a small-field one-dimensional random Gaussian potential. We find that such a landscape is fully consistent with observations, but the probability for future detection of spatial curvature is rather low, P ∼ 10^-3.

  7. Probability & Perception: The Representativeness Heuristic in Action

    Science.gov (United States)

    Lu, Yun; Vasko, Francis J.; Drummond, Trevor J.; Vasko, Lisa E.

    2014-01-01

    If the prospective students of probability lack a background in mathematical proofs, hands-on classroom activities may work well to help them to learn to analyze problems correctly. For example, students may physically roll a die twice to count and compare the frequency of the sequences. Tools such as graphing calculators or Microsoft Excel®…

  8. Stereotypes Associated With Age-related Conditions and Assistive Device Use in Canadian Media.

    Science.gov (United States)

    Fraser, Sarah Anne; Kenyon, Virginia; Lagacé, Martine; Wittich, Walter; Southall, Kenneth Edmund

    2016-12-01

    Newspapers are an important source of information. The discourses within the media can influence public attitudes and support or discourage stereotypical portrayals of older individuals. This study critically examined discourses within a Canadian newspaper in terms of stereotypical depictions of age-related health conditions and assistive technology devices (ATDs). Four years (2009-2013) of Globe and Mail articles were searched for terms relevant to the research question. A total of 65 articles were retained, and a critical discourse analysis (CDA) of the texts was conducted. The articles were coded for stereotypes associated with age-related health conditions and ATDs, consequences of the stereotyping, and context (overall setting or background) of the discourse. The primary code list included 4 contexts, 13 stereotypes, and 9 consequences of stereotyping. CDA revealed discourses relating to (a) maintaining autonomy in a stereotypical world, (b) ATDs as obstacles in employment, (c) barriers to help seeking for age-related conditions, and (d) people in power setting the stage for discrimination. Our findings indicate that discourses in the Canadian media include stereotypes associated with age-related health conditions. Further, depictions of health conditions and ATDs may exacerbate existing stereotypes about older individuals, limit the options available to them, lead to a reduction in help seeking, and lower ATD use. Education about the realities of age-related health changes and ATDs is needed in order to diminish stereotypes and encourage ATD uptake and use.

  9. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    Science.gov (United States)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model matches both the experimental data and is consistent with classical probability theory.

  10. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs of a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

  11. Internal Medicine residents use heuristics to estimate disease probability

    OpenAIRE

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Background: Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method: We randomized 55 In...

  12. Upper Bounds for Ruin Probability with Stochastic Investment Return

    Institute of Scientific and Technical Information of China (English)

    ZHANG Lihong

    2005-01-01

    Risk models with stochastic investment return are widely used in practice, as well as in more challenging research fields. Risk theory is mainly concerned with the ruin probability, and a tight bound on the ruin probability is the most useful in practice. This paper presents a discrete-time risk model with stochastic investment return. Conditional expectation properties and martingale inequalities are used to obtain both exponential and non-exponential upper bounds for the ruin probability.
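
    As a point of reference for such exponential bounds, the classical (no-investment) Lundberg bound ψ(u) ≤ e^(-Ru) can be computed by solving for the adjustment coefficient R. The sketch below uses exponential claims, for which R has the known closed form θ/(1+θ), so the numerical root can be checked; it is an analogue of the paper's bounds, not its discrete-time investment model:

```python
import numpy as np
from scipy.optimize import brentq

def adjustment_coefficient(c, lam, mgf, r_max):
    """Classical risk model: solve lam + c*r = lam*M_X(r) for the
    adjustment coefficient R > 0; Lundberg's inequality then gives
    psi(u) <= exp(-R*u) for initial surplus u."""
    f = lambda r: lam * mgf(r) - lam - c * r
    return brentq(f, 1e-9, r_max)

# exponential claims with mean 1: M_X(r) = 1/(1-r) for r < 1
lam = 1.0                              # claim arrival rate
c = 1.25 * lam                         # premium rate with 25% loading
R = adjustment_coefficient(c, lam, lambda r: 1.0 / (1.0 - r), 0.999)
u = 10.0
print(R, np.exp(-R * u))               # R = theta/(1+theta) = 0.2 here
```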

  13. Calculation system for physical analysis of boiling water reactors

    International Nuclear Information System (INIS)

    Bouveret, F.

    2001-01-01

    Although Boiling Water Reactors generate a quarter of worldwide nuclear electricity, they have received little study in France; a certain interest in these reactors is now emerging. The aim of the work presented here is therefore to contribute to the determination of a core calculation methodology with CEA (Commissariat a l'Energie Atomique) codes. Vapour production in the reactor core entails great differences in technological options from the pressurised water reactor; we analyse the main physical phenomena of BWRs and offer solutions that take them into account. BWR fuel assembly heterogeneity causes steep thermal flux gradients. The two-dimensional collision probability method with exact boundary conditions makes it possible to calculate the flux in BWR fuel assemblies accurately using the APOLLO-2 lattice code, but induces a very long calculation time. We therefore determine a new methodology based on a two-level flux calculation. Void fraction variations in assemblies involve big spectrum changes that have to be considered in the core calculation; we suggest using a void history parameter to generate the cross-section libraries for the core calculation. The core calculation code also has to calculate the depletion of the main isotope concentrations. A core calculation associating neutronics and thermal-hydraulics codes highlights points that still have to be studied, the most important of which is taking the control blade into account in the different calculation stages. (author)

  14. A Novel Adaptive Conditional Probability-Based Predicting Model for User’s Personality Traits

    Directory of Open Access Journals (Sweden)

    Mengmeng Wang

    2015-01-01

    Full Text Available With the pervasive increase in social media use, the explosion of user-generated data provides a potentially very rich source of information, which plays an important role in helping online researchers understand users' behaviors deeply. Since personality traits are the driving force of a user's behavior, in this paper, along with social network features, we first extract linguistic features, emotional statistical features, and topic features from users' Facebook status updates, and then quantify the importance of the features via the Kendall correlation coefficient. On the basis of the weighted features and dynamically updated thresholds of personality traits, we deploy a novel adaptive conditional probability-based predicting model, which takes prior knowledge of the correlations between personality traits into account, to predict a user's Big Five personality traits. In the experimental work, we explore the existence of correlations between personality traits, which provides theoretical support for the proposed method. Moreover, on the same Facebook dataset, our method achieves an F1-measure of 80.6% when taking correlations between personality traits into account, an impressive improvement of 5.8% over other approaches.

  15. Wire system aging assessment and condition monitoring (WASCO)

    International Nuclear Information System (INIS)

    Fantoni, P.F.; Nordlund, A.

    2006-04-01

    Nuclear facilities rely on electrical wire systems to perform a variety of functions for successful operation. Many of these functions directly support the safe operation of the facility; therefore, the continued reliability of wire systems, even as they age, is critical. Condition Monitoring (CM) of installed wire systems is an important part of any aging program, both during the first 40 years of the qualified life and even more in anticipation of the license renewal for a nuclear power plant. This report describes a method for wire system condition monitoring, developed at the Halden Reactor Project, which is based on Frequency Domain Reflectometry. This method resulted in the development of a system called LIRA (LIne Resonance Analysis), which can be used on-line to detect any local or global changes in the cable electrical parameters as a consequence of insulation faults or degradation. LIRA is composed of a signal generator, a signal analyser and a simulator that can be used to simulate several failure/degradation scenarios and assess the accuracy and sensitivity of the LIRA system. Chapter 5 of this report describes an complementary approach based on positron measurement techniques, used widely in defect physics due to the high sensitivity to micro defects, in particular open volume defects. This report describes in details these methodologies, the results of field experiments and the proposed future work. (au)

  16. Wire system aging assessment and condition monitoring (WASCO)

    Energy Technology Data Exchange (ETDEWEB)

    Fantoni, P.F. [Institutt for energiteknikk (Norway); Nordlund, A. [Chalmers Univ. of Technology (Sweden)

    2006-04-15

    Nuclear facilities rely on electrical wire systems to perform a variety of functions for successful operation. Many of these functions directly support the safe operation of the facility; therefore, the continued reliability of wire systems, even as they age, is critical. Condition Monitoring (CM) of installed wire systems is an important part of any aging program, both during the first 40 years of the qualified life and even more in anticipation of the license renewal for a nuclear power plant. This report describes a method for wire system condition monitoring, developed at the Halden Reactor Project, which is based on Frequency Domain Reflectometry. This method resulted in the development of a system called LIRA (LIne Resonance Analysis), which can be used on-line to detect any local or global changes in the cable electrical parameters as a consequence of insulation faults or degradation. LIRA is composed of a signal generator, a signal analyser and a simulator that can be used to simulate several failure/degradation scenarios and assess the accuracy and sensitivity of the LIRA system. Chapter 5 of this report describes an complementary approach based on positron measurement techniques, used widely in defect physics due to the high sensitivity to micro defects, in particular open volume defects. This report describes in details these methodologies, the results of field experiments and the proposed future work. (au)

  17. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to address the multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm, subset simulation based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered, and the probability of functional failure was estimated with the subset simulation method. The numerical results demonstrate that subset simulation has high computing efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
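
    The product-of-conditional-probabilities idea can be sketched compactly. The version below uses a plain random-walk Metropolis move for the conditional sampling (the literature typically uses a component-wise "modified Metropolis" scheme) and a toy linear limit state whose exact failure probability (about 7e-5) is known, so the estimate can be checked; none of this reproduces the AP1000 model itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def subset_simulation(g, dim, n=2000, p0=0.1, max_levels=10):
    """Estimate P[g(X) <= 0], X ~ N(0, I), by subset simulation: the
    small failure probability is built as a product of conditional
    probabilities p0 over intermediate levels {g <= thresh}."""
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    p_f = 1.0
    for _ in range(max_levels):
        idx = np.argsort(gx)
        n_seed = int(p0 * n)
        thresh = gx[idx[n_seed - 1]]          # p0-quantile of g
        if thresh <= 0:                        # failure domain reached
            return p_f * np.mean(gx <= 0)
        p_f *= p0
        seeds, gseeds = x[idx[:n_seed]], gx[idx[:n_seed]]
        x, gx = [], []                         # repopulate the level by MCMC
        for xi, gi in zip(seeds, gseeds):
            for _ in range(int(1 / p0)):
                cand = xi + 0.8 * rng.standard_normal(dim)
                # Metropolis acceptance for the standard normal target...
                ratio = np.exp(0.5 * (xi @ xi - cand @ cand))
                if rng.random() < min(1.0, ratio):
                    gc = g(cand)
                    if gc <= thresh:           # ...restricted to the level
                        xi, gi = cand, gc
                x.append(xi.copy()); gx.append(gi)
        x, gx = np.array(x), np.array(gx)
    return p_f * np.mean(gx <= 0)

# toy limit state: failure when the sum of 10 standard normals exceeds 12
print(subset_simulation(lambda z: 12.0 - z.sum(), dim=10))
```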

  18. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre-optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
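
    As a reminder of how the first of these techniques works, the sketch below evaluates the Chernoff bound P[Y ≥ t] ≤ exp(-sup_s(st - K(s))) for a Gaussian decision statistic, where the exact error probability is available for comparison; the signal and noise parameters are placeholders rather than values from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_bound(cgf, threshold):
    """Chernoff bound P[Y >= t] <= exp(-sup_s (s*t - K(s))), with K the
    cumulant generating function of the decision statistic Y."""
    obj = lambda s: -(s * threshold - cgf(s))
    res = minimize_scalar(obj, bounds=(1e-6, 50.0), method='bounded')
    return np.exp(obj(res.x))

# '0' sent: Y ~ N(mu0, sigma^2); an error occurs if Y crosses threshold t
mu0, sigma, t = 0.0, 1.0, 3.0
K = lambda s: mu0 * s + 0.5 * (sigma * s) ** 2   # Gaussian CGF
print(chernoff_bound(K, t))        # exp(-t^2/2) ~ 1.11e-2
print('exact Q(3) ~ 1.35e-3')      # the bound is valid but loose
```

    The looseness of the bound relative to the exact tail value is exactly why the characteristic-function technique can predict a higher receiver sensitivity.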

  19. Left passage probability of Schramm-Loewner Evolution

    Science.gov (United States)

    Najafi, M. N.

    2013-06-01

    SLE(κ,ρ⃗) is a variant of Schramm-Loewner Evolution (SLE) which describes curves that are not conformally invariant, but are self-similar due to the presence of some other preferred points on the boundary. In this paper we study the left passage probability (LPP) of SLE(κ,ρ⃗) within a field-theoretical framework and find the differential equation governing this probability. This equation is numerically solved for the special case κ=2 and h_ρ=0, in which h_ρ is the conformal weight of the boundary condition changing (bcc) operator. It may be related to the loop-erased random walk (LERW) and the Abelian sandpile model (ASM) with a sink on its boundary. For a curve which starts from ξ0 and is conditioned by a change of boundary conditions at x0, we find that this probability depends significantly on the factor x0-ξ0. We also present the perturbative general solution for large x0. As a prototype, we apply this formalism to SLE(κ,κ-6), which governs the curves that start from and end on the real axis.

  20. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Full Text Available Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to the limitations of the measurement information, material parameters, load, geometry, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertain transition between a qualitative concept and its quantitative description. An improved algorithm for the cloud probability distribution density, based on a backward cloud generator, was then proposed. It effectively converts parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such a qualitative description is expressed by the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm is feasible; with it, the changing regularity of the piezometric tube's water level can be revealed, and seepage damage in the dam body can be detected.
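
    The standard one-dimensional backward cloud generator recovers {Ex, En, He} from a sample of cloud drops using the first absolute central moment and the sample variance. A minimal sketch (the synthetic "water level" series is invented for illustration; the paper's improved algorithm adds refinements beyond this baseline):

```python
import numpy as np

def backward_cloud(samples):
    """Baseline 1-D backward cloud generator: estimate the cloud
    numerical characteristics {Ex, En, He} from observed cloud drops."""
    ex = np.mean(samples)                                  # expectation Ex
    # entropy En from the first-order absolute central moment
    en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(samples - ex))
    s2 = np.var(samples, ddof=1)
    he = np.sqrt(max(s2 - en ** 2, 0.0))                   # hyper-entropy He
    return ex, en, he

# hypothetical piezometric-tube water levels [m], with extra jitter
rng = np.random.default_rng(4)
levels = rng.normal(52.3, 0.8, 365) + rng.normal(0.0, 0.1, 365)
print(backward_cloud(levels))
```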

  1. Calculation of the Wave Conditions in Nissum Bredning

    DEFF Research Database (Denmark)

    Svendsen, Rasmus; Frigaard, Peter

    For the purpose of determining the optimal position in Nissum Bredning for placement of the Wave Dragon, the wave energy flux in Nissum Bredning has been calculated. It has not been possible to retrieve satisfactory measured wave data for Nissum Bredning; therefore the calculations are based on the SPM...

  2. PROCOPE, Collision Probability in Pin Clusters and Infinite Rod Lattices

    International Nuclear Information System (INIS)

    Amyot, L.; Daolio, C.; Benoist, P.

    1984-01-01

    1 - Nature of physical problem solved: calculation of directional collision probabilities in pin clusters and infinite rod lattices. 2 - Method of solution: a) Gauss integration of analytical expressions for collision probabilities; b) alternatively, an approximate closed expression (not involving integrals) may be used for pin-to-pin interactions. 3 - Restrictions on the complexity of the problem: the number of fuel pins must be smaller than 62; the maximum number of symmetry groups is 300

  3. The probability of containment failure by direct containment heating in Zion

    International Nuclear Information System (INIS)

    Pilch, M.M.; Yan, H.; Theofanous, T.G.

    1994-12-01

    This report is the first step in the resolution of the Direct Containment Heating (DCH) issue for the Zion Nuclear Power Plant using the Risk Oriented Accident Analysis Methodology (ROAAM). This report includes the definition of a probabilistic framework that decomposes the DCH problem into three probability density functions that reflect the most uncertain initial conditions (UO2 mass, zirconium oxidation fraction, and steel mass). Uncertainties in the initial conditions are significant, but our quantification approach is based on establishing reasonable bounds that are not unnecessarily conservative. To this end, we also make use of the ROAAM ideas of enveloping scenarios and 'splintering'. Two causal relations (CRs) are used in this framework: CR1 is a model that calculates the peak pressure in the containment as a function of the initial conditions, and CR2 is a model that returns the frequency of containment failure as a function of pressure within the containment. Uncertainty in CR1 is accounted for by the use of two independently developed phenomenological models, the Convection Limited Containment Heating (CLCH) model and the Two-Cell Equilibrium (TCE) model, and by probabilistically distributing the key parameter in both, which is the ratio of the melt entrainment time to the system blowdown time constant. The two phenomenological models have been compared with an extensive database, including recent integral simulations at two different physical scales. The containment load distributions do not intersect the containment strength (fragility) curve in any significant way, resulting in containment failure probabilities of less than 10^-3 for all scenarios considered. Sensitivity analyses did not show any areas of large sensitivity.
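
    The load-versus-fragility comparison at the end amounts to averaging a fragility curve over the sampled load distribution. A minimal sketch with an assumed lognormal fragility and invented load statistics (the report's actual curves are not reproduced here):

```python
import numpy as np
from scipy import stats

def failure_probability(load_samples, fragility_median, fragility_beta):
    """Convolve a sampled containment-load distribution with a lognormal
    fragility curve: p_f = E[ F_fragility(load) ]."""
    frag = stats.lognorm(s=fragility_beta, scale=fragility_median)
    return np.mean(frag.cdf(load_samples))

# hypothetical: DCH peak pressures lognormal about 0.45 MPa (20% spread),
# fragility median 1.0 MPa with logarithmic standard deviation 0.2
rng = np.random.default_rng(5)
loads = rng.lognormal(np.log(0.45), 0.2, 100_000)
print(failure_probability(loads, 1.0, 0.2))   # small p_f: curves barely overlap
```

    When the load and fragility distributions barely overlap, as the report finds, this expectation collapses to a very small number, which is the quantitative content of "the distributions do not intersect in any significant way".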

  4. Conditioned pain modulation (CPM) in children and adolescents: Effects of sex and age

    Science.gov (United States)

    Tsao, Jennie C. I.; Seidman, Laura C.; Evans, Subhadra; Lung, Kirsten C.; Zeltzer, Lonnie K.; Naliboff, Bruce D.

    2013-01-01

    Conditioned pain modulation (CPM) refers to the diminution of perceived pain intensity for a test stimulus following application of a conditioning stimulus to a remote area of the body, and is thought to reflect the descending inhibition of nociceptive signals. Studying CPM in children may inform interventions to enhance central pain inhibition within a developmental framework. We assessed CPM in 133 healthy children (mean age = 13 years; 52.6% girls) and tested the effects of sex and age. Participants were exposed to four trials of a pressure test stimulus before, during, and after the application of a cold water conditioning stimulus. CPM was documented by a reduction in pressure pain ratings during cold water administration. Older children (12–17 years) exhibited greater CPM than younger (8–11 years) children. No sex differences in CPM were found. Lower heart rate variability (HRV) at baseline and after pain induction was associated with less CPM controlling for child age. The findings of greater CPM in the older age cohort suggest a developmental improvement in central pain inhibitory mechanisms. The results highlight the need to examine developmental and contributory factors in central pain inhibitory mechanisms in children to guide effective, age appropriate, pain interventions. PMID:23541066

  5. Structure of states and reduced probabilities of electromagnetic transitions in 169Yb

    International Nuclear Information System (INIS)

    Bonch-Osmolovskaya, N.A.; Morozov, V.A.; Khudajberdyev, Eh.N.

    1988-01-01

    The effect of accounting for the Pauli principle on the structure and energy of the nonrotational states of the deformed nucleus 169Yb, as well as on the reduced probabilities B(E2) of E2 transitions, is studied within the framework of the quasiparticle-phonon model (QPM). The amplitudes of state mixing due to the Coriolis interaction, and the reduced probabilities of gamma transitions within the nonadiabatic rotation model, are also calculated. The results are compared with calculations made within the QPM that include the Coriolis interaction but exclude the Pauli principle from the wave functions of the states. It is shown that to describe correctly both the level structure and the reduced probabilities B(E2) it is necessary to include all types of interaction: the interaction of quasiparticles with phonons, with the Pauli principle taken into account in the wave functions of the states, and the Coriolis interaction. At present no uniform theoretical approach exists.

  6. On the properties of collision probability integrals in annular geometry-II evaluation

    International Nuclear Information System (INIS)

    Milgram, M.S.; Sly, K.N.

    1979-02-01

    To calculate neutron flux distributions in infinitely long annular regions, the inner-outer and outer-outer transmission probabilities p^(io) and p^(oo) are required. Efficient algorithms for the computation of these probabilities as functions of two variables (the ratio of inner to outer radii κ, and the cross-section Σ) are given, with accuracy of the order of 10^-5. (author)

  7. Influence of age on postural sway during different dual-task conditions.

    Directory of Open Access Journals (Sweden)

    Marco Bergamin

    2014-10-01

    Dual-task performance assessments, in which a postural task is performed in parallel with a competing secondary task, are growing in importance for geriatricians because they are associated with fall-risk prediction in older adults. This study evaluated postural stability during different dual-task conditions involving visual (SMBT), verbal (CBAT), and cognitive (MAT) secondary tasks, in comparison with the standard Romberg's eyes-open position (OE). These conditions were investigated in a sample of young adults and a group of older healthy subjects to examine a potential interaction between the type of secondary task and age status. To compare the groups across the four conditions, a within-between mixed-model ANOVA was applied. A stabilometric platform was used to measure center of pressure velocity (CoPV), sway area (SA), and antero-posterior (AP) and medio-lateral (ML) oscillations as extents of postural sway. Tests of within-subjects effects indicated that the four conditions influenced static balance for CoPV (p<0.001) and SA (p<0.001). Post-hoc analyses indicated that the CBAT task induced the worst balance condition on CoPV, with significantly worse scores than the OE (-11.4%; p<0.05), SMBT (-17.8%; p<0.01), and MAT (-17.8%; p<0.01) conditions; the largest SA was found in OE, and it was statistically larger than in SMBT (-27.0%; p<0.01) and MAT (-23.1%; p<0.01). The between-subjects analysis indicated generally lower balance control in the group of elderly subjects (CoPV p<0.001, SA p<0.002), while the mixed-model ANOVA did not detect any interaction effect between type of secondary task and group in any parameter (CoPV p=0.154, SA p=0.125). Postural stability during dual-task assessment was also found to decline with advancing age; however, no interactions between aging and type of secondary task were found. Overall, these results indicated that the secondary task that most influenced the length of the sway path, as measured by postural stability, was the simple verbal task (CBAT).

  8. Grit-mediated frictional ignition of a polymer-bonded explosive during oblique impacts: Probability calculations for safety engineering

    International Nuclear Information System (INIS)

    Heatwole, Eric; Parker, Gary; Holmes, Matt; Dickson, Peter

    2015-01-01

    Frictional heating of high-melting-point grit particles during oblique impacts of consolidated explosives is considered to be the major source of ignition in accidents involving dropped explosives. It has been shown in other work that the lower of the melting points of two frictionally interacting surfaces caps the maximum temperature reached, which provides a simple way to mitigate the danger in facilities by installing surfaces with melting points below the ignition temperature of the explosive. However, a recent series of skid-testing experiments has shown that ignition can occur on low-melting-point surfaces with a high concentration of grit particles, most likely due to a grit–grit collision mechanism. For risk-based safety engineering purposes, the authors present a method to estimate the probability of grit contact and/or grit–grit collision during an oblique impact. These expressions are applied to potentially high-consequence oblique impact scenarios in order to give the probability of striking one or more grit particles (for high-melting-point surfaces), or the probability of one or more grit–grit collisions occurring (for low-melting-point surfaces). The probability depends on a variety of factors, many of which can be controlled to achieve acceptable risk levels for safe explosives handling operations.
    Highlights:
    • Unexpectedly, grit-mediated ignition of a PBX occurred on low-melting-point surfaces.
    • On high-melting-point surfaces, frictional heating is due to a grit–surface interaction.
    • For low-melting-point surfaces, the heating mechanism is grit–grit collisions.
    • A method for estimating the probability of ignition is presented for both surface types.
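    The "one or more grit events" probability lends itself to a Poisson counting model. The sketch below is a hypothetical illustration rather than the authors' published expressions: it assumes grit particles are Poisson-distributed over the surface with a known areal density, and that the area swept during the skid can be estimated.

```python
import math

def p_at_least_one(grit_per_m2: float, swept_area_m2: float) -> float:
    """P(>= 1 grit event) for a Poisson field of grit particles.

    grit_per_m2  : areal density of grit particles (assumed known)
    swept_area_m2: contact area swept by the explosive during the skid
    """
    lam = grit_per_m2 * swept_area_m2   # expected number of grit events
    return 1.0 - math.exp(-lam)

# Example: 200 particles/m^2 swept over a 5 cm x 2 cm contact track.
print(p_at_least_one(200.0, 0.05 * 0.02))   # ~0.18
```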

  9. Targeting the probability versus cost of feared outcomes in public speaking anxiety.

    Science.gov (United States)

    Nelson, Elizabeth A; Deacon, Brett J; Lickel, James J; Sy, Jennifer T

    2010-04-01

    Cognitive-behavioral theory suggests that social phobia is maintained, in part, by overestimates of the probability and cost of negative social events. Indeed, empirically supported cognitive-behavioral treatments directly target these cognitive biases through the use of in vivo exposure or behavioral experiments. While cognitive-behavioral theories and treatment protocols emphasize the importance of targeting probability and cost biases in the reduction of social anxiety, few studies have examined specific techniques for reducing probability and cost bias, and thus the relative efficacy of exposure to the probability versus cost of negative social events is unknown. In the present study, 37 undergraduates with high public speaking anxiety were randomly assigned to a single-session intervention designed to reduce either the perceived probability or the perceived cost of negative outcomes associated with public speaking. Compared to participants in the probability treatment condition, those in the cost treatment condition demonstrated significantly greater improvement on measures of public speaking anxiety and cost estimates for negative social events. The superior efficacy of the cost treatment condition was mediated by greater treatment-related changes in social cost estimates. The clinical implications of these findings are discussed. Published by Elsevier Ltd.

  10. The transition probability and the probability for the left-most particle's position of the q-totally asymmetric zero range process

    Energy Technology Data Exchange (ETDEWEB)

    Korhonen, Marko [Department of Mathematics and Statistics, University of Helsinki, FIN-00014 (Finland); Lee, Eunghyun [Centre de Recherches Mathématiques (CRM), Université de Montréal, Quebec H3C 3J7 (Canada)

    2014-01-15

    We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz, and the resulting model is the q-boson model of Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) of Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula for the transition probability of the q-TAZRP via the Bethe ansatz. Using the transition probability, we find the probability distribution of the left-most particle's position at time t. To do so, we establish a new identity corresponding to the identity for the asymmetric simple exclusion process obtained by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by a contour integral of a determinant.

  11. Bremsstrahlung emission probability in the {alpha} decay of {sup 210}Po

    Energy Technology Data Exchange (ETDEWEB)

    Boie, Hans-Hermann

    2009-06-03

    A high-statistics measurement of bremsstrahlung emitted in the α decay of 210Po has been performed. The measured differential emission probabilities, which could be followed up to γ-energies of ∼500 keV, allow for the first time for a serious test of various model calculations of the bremsstrahlung-accompanied α decay. It is shown that corrections to the α–γ angular correlation due to the interference between the electric dipole and quadrupole amplitudes and due to the relativistic character of the process have to be taken into account. With the experimentally derived angular correlation, the measured energy-differential bremsstrahlung emission probabilities show excellent agreement with the fully quantum mechanical calculation. (orig.)

  12. Wigner function and the probability representation of quantum states

    Directory of Open Access Journals (Sweden)

    Man’ko Margarita A.

    2014-01-01

    The relation of the Wigner function to the fair probability distribution called the tomographic distribution, or quantum tomogram, associated with the quantum state is reviewed. The connection of the tomographic picture of quantum mechanics with the integral Radon transform of the Wigner quasidistribution is discussed. The Wigner–Moyal equation for the Wigner function is presented in the form of a kinetic equation for the tomographic probability distribution, both in quantum mechanics and in the classical limit of the Liouville equation. The calculation of moments of physical observables in terms of integrals with the state tomographic probability distributions is constructed, having the standard form of averaging in probability theory. New uncertainty relations for position and momentum are written in terms of optical tomograms suitable for direct experimental check. Some recent experiments on checking the uncertainty relations, including the entropic uncertainty relations, are discussed.
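    The tomogram–Wigner connection described above can be written compactly. The display below is a sketch of the standard optical tomography relation; normalization conventions for the factor of 2π vary between papers, and this is not necessarily the notation used in the article itself.

```latex
% Optical tomogram w(X, theta) as the Radon transform of the Wigner function:
w(X,\theta) \;=\; \iint W(q,p)\,
  \delta\!\bigl(X - q\cos\theta - p\sin\theta\bigr)\,\frac{dq\,dp}{2\pi}
```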

  13. Application of Probability Calculations to the Study of the Permissible Step and Touch Potentials to Ensure Personnel Safety

    International Nuclear Information System (INIS)

    Eisawy, E.A.

    2011-01-01

    The aim of this paper is to develop a practical method to evaluate the actual step and touch potential distributions in order to determine the risk of failure of a grounding system. The failure probability, indicating the safety level of the grounding system, is related to both the applied (stress) and withstand (strength) step or touch potentials. The probability distributions of the applied step and touch potentials, as well as of the corresponding withstand step and touch potentials, which represent the capability of the human body to resist stress potentials, are presented. These two distributions are used to evaluate the failure probability of the grounding system, defined as the probability that the applied potential exceeds the withstand potential. The method treats the resistance of the human body, the foot contact resistance, and the fault clearing time as independent random variables, rather than as the fixed values used in previous analyses, when determining the safety requirements for a given grounding system.
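    The failure probability described here is a classic stress-strength interference calculation: P(failure) = P(applied potential > withstand potential). A minimal Monte Carlo sketch follows, assuming hypothetical lognormal and normal distributions for the stress and strength potentials; the actual distributions would come from the fault, body-resistance, and clearing-time models in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000

# Hypothetical distributions (placeholders, not the paper's fitted models):
applied_v   = rng.lognormal(mean=np.log(150.0), sigma=0.6, size=N)  # stress: applied touch potential, V
withstand_v = rng.normal(loc=600.0, scale=120.0, size=N)            # strength: tolerable touch potential, V

# Failure occurs whenever the applied potential exceeds the withstand potential.
p_failure = np.mean(applied_v > withstand_v)
print(f"Estimated grounding-system failure probability: {p_failure:.2e}")
```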

  14. Failure probability estimate of type 304 stainless steel piping

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Mehta, H.S.; Ranganath, S.

    1989-01-01

    The primary source of in-service degradation of the SRS production reactor process water piping is intergranular stress corrosion cracking (IGSCC). IGSCC has occurred in a limited number of weld heat affected zones, areas known to be susceptible to IGSCC. A model has been developed to combine crack growth rates, crack size distributions, in-service examination reliability estimates and other considerations to estimate the pipe large-break frequency. This frequency estimates the probability that an IGSCC crack will initiate, escape detection by ultrasonic (UT) examination, and grow to instability prior to extending through-wall and being detected by the sensitive leak detection system. These events are combined as the product of four factors: (1) the probability that a given weld heat affected zone contains IGSCC; (2) the conditional probability, given the presence of IGSCC, that the cracking will escape detection during UT examination; (3) the conditional probability, given a crack escapes detection by UT, that it will not grow through-wall and be detected by leakage; (4) the conditional probability, given a crack is not detected by leakage, that it grows to instability prior to the next UT exam. These four factors estimate the occurrence of several conditions that must coexist in order for a crack to lead to a large break of the process water piping. When evaluated for the SRS production reactors, they produce an extremely low break frequency. The objective of this paper is to present the assumptions, methodology, results and conclusions of a probabilistic evaluation for the direct failure of the primary coolant piping resulting from normal operation and seismic loads. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break
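    The four-factor decomposition multiplies straightforwardly. The sketch below shows the structure with hypothetical per-weld probabilities (the SRS-specific values are not given in this summary), summed over welds to give a plant-level large-break frequency.

```python
# Hypothetical per-weld factor values, for illustration only.
p_igscc       = 1e-2   # (1) P(weld HAZ contains IGSCC)
p_miss_ut     = 1e-1   # (2) P(crack escapes UT detection | IGSCC present)
p_no_leak     = 1e-1   # (3) P(not detected by leakage | missed by UT)
p_instability = 1e-2   # (4) P(crack grows to instability before next UT exam)

n_welds = 200          # number of susceptible weld heat-affected zones (assumed)

# Per-weld and plant-level large-break frequency (per inspection interval).
per_weld = p_igscc * p_miss_ut * p_no_leak * p_instability
plant_level = n_welds * per_weld
print(f"per weld: {per_weld:.1e}, plant: {plant_level:.1e} per interval")
```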

  15. A framework to assess diagnosis error probabilities in the advanced MCR

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Kim, Jong Hyun [Chosun University, Gwangju (Korea, Republic of); Jang, Inseok; Park, Jinkyun [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The Institute of Nuclear Power Operations (INPO)'s operating experience database revealed that about 48% of the total events in world NPPs over the two-year period 2010–2011 were caused by human error. The purpose of human reliability analysis (HRA) methods is to evaluate the potential for, and mechanisms of, human errors that may affect plant safety. Accordingly, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. Many researchers have asserted that procedures, alarms, and displays are critical factors affecting operators' generic activities, especially diagnosis activities. However, none of these HRA methods was explicitly designed to deal with digital systems, and SCHEME (Soft Control Human error Evaluation MEthod) considers only the probability of soft control execution errors in the advanced MCR. The necessity of developing HRA methods for the various conditions of NPPs has therefore been raised. In this research, a framework to estimate diagnosis error probabilities in the advanced MCR is suggested. The assessment framework consists of three steps. The first step is to investigate diagnosis errors and calculate their probabilities. The second step is to quantitatively estimate the weightings of performance shaping factors (PSFs) in the advanced MCR. The third step is to suggest an updated TRC model for assessing nominal diagnosis error probabilities. Additionally, the proposed framework was applied using full-scope simulation: experiments conducted in a domestic full-scope simulator and in HAMMLAB were used as data sources. In total, eighteen tasks were analyzed and twenty-three crews participated.
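    A time-reliability correlation (TRC) of the kind the third step updates maps the available diagnosis time to a non-response probability. The sketch below assumes a generic lognormal time-reliability curve with an aggregate PSF multiplier; the actual curve shape, parameters, and PSF weightings of the proposed framework are not specified in this summary and are stand-ins here.

```python
from scipy.stats import lognorm

def diagnosis_error_prob(t_available_s: float, median_s: float = 300.0,
                         sigma: float = 0.8, psf_multiplier: float = 1.0) -> float:
    """P(diagnosis not completed within t_available), lognormal TRC (assumed form).

    median_s       : median crew diagnosis time (hypothetical)
    sigma          : lognormal shape parameter (hypothetical)
    psf_multiplier : aggregate PSF weighting applied to the nominal probability
    """
    p_nominal = lognorm.sf(t_available_s, s=sigma, scale=median_s)  # survival function
    return min(1.0, p_nominal * psf_multiplier)

print(diagnosis_error_prob(600.0))                      # nominal conditions
print(diagnosis_error_prob(600.0, psf_multiplier=5.0))  # degraded PSFs
```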

  16. Approximation of ruin probabilities via Erlangized scale mixtures

    DEFF Research Database (Denmark)

    Peralta, Oscar; Rojas-Nandayapa, Leonardo; Xie, Wangyue

    2018-01-01

    In this paper, we extend an existing scheme for numerically calculating the probability of ruin of a classical Cramér–Lundberg reserve process having absolutely continuous but otherwise general claim size distributions. We employ a dense class of distributions that we denominate Erlangized scale mixtures and provide a simple methodology for constructing a sequence of distributions of the form Π⋆G with the purpose of approximating the integrated tail distribution of the claim sizes. We then adapt a recent result which delivers an explicit expression for the probability of ruin in the case that the claim size distribution is modeled as an Erlangized scale mixture. We provide simplified expressions for the approximation of the probability of ruin and construct explicit bounds for the error of approximation. We complement our results with a classical example where the claim sizes are heavy-tailed.
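    For orientation, ruin in the classical Cramér–Lundberg model can also be estimated by brute-force simulation (the paper's contribution is an analytic approximation, not this). A minimal finite-horizon Monte Carlo sketch, assuming exponential claims and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def ruin_prob_mc(u=10.0, c=1.2, lam=1.0, mu=1.0, horizon=200.0, n_paths=5_000):
    """Finite-horizon ruin probability for a Cramér–Lundberg surplus process.

    u: initial reserve, c: premium rate, lam: Poisson claim intensity,
    mu: mean of the (here exponential) claim sizes.  All values illustrative.
    """
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)      # next claim arrival time
            if t > horizon:
                break
            claims += rng.exponential(mu)        # exponential claim size
            if u + c * t - claims < 0.0:         # surplus below zero: ruin
                ruined += 1
                break
    return ruined / n_paths

# For exponential claims the infinite-horizon ruin probability is known in
# closed form: psi(u) = exp(-theta*u / ((1+theta)*mu)) / (1+theta), with
# safety loading theta = c/(lam*mu) - 1; useful as a sanity check (~0.157).
theta = 1.2 / (1.0 * 1.0) - 1.0
print(ruin_prob_mc(), np.exp(-theta * 10.0 / ((1 + theta) * 1.0)) / (1 + theta))
```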

  17. Future southcentral US wildfire probability due to climate change

    Science.gov (United States)

    Stambaugh, Michael C.; Guyette, Richard P.; Stroh, Esther D.; Struckhoff, Matthew A.; Whittier, Joanna B.

    2018-01-01

    Globally, changing fire regimes due to climate are one of the greatest threats to ecosystems and society. In this paper, we present projections of future fire probability for the southcentral USA using downscaled climate projections and the Physical Chemistry Fire Frequency Model (PC2FM). Future fire probability is projected to both increase and decrease across the study region of Oklahoma, New Mexico, and Texas. Among all end-of-century projections, changes in fire probability (CFPs) range from −51% to +240%. The greatest absolute increases in fire probability are shown for areas within the range of approximately 75 to 160 cm mean annual precipitation (MAP), regardless of climate model. Although fire is likely to become more frequent across the southcentral USA, spatial patterns may remain similar unless significant increases in precipitation occur, whereby more extensive areas with increased fire probability are predicted. Perhaps one of the most important results is the illumination of climate changes where the fire probability response (+, −) may deviate (i.e., tipping points). Fire regimes of southcentral US ecosystems occur in a geographic transition zone from reactant- to reaction-limited conditions, potentially making them uniquely responsive to different scenarios of temperature and precipitation change. Identification and description of these conditions may help anticipate fire regime changes that will affect human health, agriculture, species conservation, and nutrient and water cycling.

  18. Leisure as a resource for successful aging by older adults with chronic health conditions.

    Science.gov (United States)

    Hutchinson, Susan L; Nimrod, Galit

    2012-01-01

    Drawing on the model of Selective Optimization with Compensation (SOC) (Baltes & Baltes, 1990), the purpose of this article is to examine leisure-related goals of older adults with chronic conditions and the strategies they use to not only successfully manage their chronic health conditions but live well with them. Semi-structured in-person interviews were conducted with 18 community-dwelling older adults (nine males, nine females, ages 58-87 years) with a variety of chronic conditions. Inductive and deductive within and cross-case thematic analyses resulted in descriptions of changes and continuity in participants' leisure participation following the onset of their chronic condition and construction of four themes: drawing on existing resources for continued involvement, setting leisure-based goals, using strategies to get more out of life, and more than managing: living a life of meaning. Implications for promoting successful aging are discussed, specifically the benefits of incorporating information and skill-building to help older adults recognize that leisure can be a resource for healthy aging and self-managing their chronic health condition.

  19. Posterior probability of linkage and maximal lod score.

    Science.gov (United States)

    Génin, E; Martinez, M; Clerget-Darpoux, F

    1995-01-01

    To detect linkage between a trait and a marker, Morton (1955) proposed to calculate the lod score z(θ1) at a given value θ1 of the recombination fraction. If z(θ1) reaches +3, linkage is concluded. In practice, however, lod scores are calculated for different values of the recombination fraction between 0 and 0.5, and the test is based on the maximum value of the lod score, Zmax. The impact of this deviation on the probability that linkage does not in fact exist when linkage is concluded is documented here. This posterior probability of no linkage can be derived using Bayes' theorem. It is less than 5% when the lod score at a predetermined θ1 is used for the test. But for a Zmax of +3, we show that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a simple one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3: given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may then vary.
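    The Bayes computation behind the fixed-θ figure is compact. The sketch below reproduces it under one stated assumption: prior odds of linkage of 1:50, a conventional figure for two random loci. It covers only the single-θ case; the larger 16.4% and 33% figures arise when the lod is maximized over θ, which this simple calculation does not capture.

```python
def posterior_no_linkage(lod: float, prior_odds: float = 1 / 50) -> float:
    """P(no linkage | lod), for a lod score at a single predetermined theta.

    The lod score is log10 of the likelihood ratio, so the posterior odds of
    linkage are the prior odds multiplied by 10**lod.  The 1:50 prior odds
    are a conventional figure for two random loci, assumed here.
    """
    posterior_odds = prior_odds * 10.0 ** lod
    return 1.0 / (1.0 + posterior_odds)

for z in (2.5, 3.0):
    print(f"lod = {z}: P(no linkage) = {posterior_no_linkage(z):.3f}")
# lod = 3.0 gives ~0.048, matching the 'less than 5%' fixed-theta result.
```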

  20. Quantum probability and quantum decision-making.

    Science.gov (United States)

    Yukalov, V I; Sornette, D

    2016-01-13

    A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and also for non-commuting observables in addition to commutative observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).