WorldWideScience

Sample records for regulation probability method

  1. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  2. Imprecise Probability Methods for Weapons UQ

    Energy Technology Data Exchange (ETDEWEB)

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-13

Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.

  3. Monte Carlo methods to calculate impact probabilities

    Science.gov (United States)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement among all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  4. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, and most of this time is spent on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve the method are discussed.
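
    The interface-and-reinjection idea is close in spirit to splitting schemes such as forward flux sampling. The sketch below is not the authors' algorithm but a minimal splitting estimate for a one-dimensional overdamped double-well system: starting from points on the first interface, it multiplies the interface-to-interface crossing probabilities, reinjecting successful crossing points as the next ensemble. The potential, interface positions, noise strength, and trial counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    # Double-well potential V(x) = x**4/4 - x**2/2: attractor near x = -1, exit at x = +1
    return -(x**3 - x)

def propagate(x, lam_next, lam_back, dt=0.01, D=0.05, max_steps=50_000):
    """Integrate overdamped Langevin dynamics until x crosses lam_next (success)
    or falls back below lam_back (failure)."""
    for _ in range(max_steps):
        x += drift(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        if x >= lam_next:
            return True, x
        if x <= lam_back:
            return False, x
    return False, x

interfaces = [-0.8, -0.5, -0.2, 0.1, 0.4, 0.7, 1.0]   # between attractor and exit
lam_back = -0.95                                      # counts as a return to the attractor
n_trials = 200

# Harvest starting points on the first interface by direct simulation from the attractor
starts = []
while len(starts) < n_trials:
    ok, x_hit = propagate(-1.0, interfaces[0], -10.0)  # effectively cannot fall back here
    if ok:
        starts.append(x_hit)

p_total = 1.0
for i in range(len(interfaces) - 1):
    hits = [propagate(x0, interfaces[i + 1], lam_back) for x0 in starts]
    successes = [x for ok, x in hits if ok]
    p_i = len(successes) / len(starts)
    p_total *= p_i
    if not successes:        # nothing crossed; add interfaces or trials in a real study
        break
    # Reinjection: resample the successful crossing points to rebuild the ensemble
    starts = [successes[rng.integers(len(successes))] for _ in range(n_trials)]

print(f"Estimated probability of reaching x = 1 from the first interface "
      f"before returning: {p_total:.2e}")
```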

  5. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs of a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

  6. Further comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1997-05-23

    The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).

  7. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

The collision probability method widely used in solving the problems of neutron transport in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones and taking into account the anisotropy of scattering greatly increase the scope of the calculations. In order to reduce the calculation time, it is suggested that the transmission probability method be used for flux calculations in one-dimensional cylindrical geometry, taking the scattering anisotropy into account. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method allows the angular and spatial dependences of the neutron distributions to be represented completely without increasing the scope of the calculations. The method is especially effective in solving multi-group problems.

  8. Jump probabilities in the non-Markovian quantum jump method

    International Nuclear Information System (INIS)

    Haerkoenen, Kari

    2010-01-01

    The dynamics of a non-Markovian open quantum system described by a general time-local master equation is studied. The propagation of the density operator is constructed in terms of two processes: (i) deterministic evolution and (ii) evolution of a probability density functional in the projective Hilbert space. The analysis provides a derivation for the jump probabilities used in the recently developed non-Markovian quantum jump (NMQJ) method (Piilo et al 2008 Phys. Rev. Lett. 100 180402).

  9. Bayesian maximum posterior probability method for interpreting plutonium urinalysis data

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.

    1996-01-01

    A new internal dosimetry code for interpreting urinalysis data in terms of radionuclide intakes is described for the case of plutonium. The mathematical method is to maximise the Bayesian posterior probability using an entropy function as the prior probability distribution. A software package (MEMSYS) developed for image reconstruction is used. Some advantages of the new code are that it ensures positive calculated dose, it smooths out fluctuating data, and it provides an estimate of the propagated uncertainty in the calculated doses. (author)
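
    As a rough illustration of the maximise-posterior-with-entropy-prior idea (not the MEMSYS package or the code described above), the sketch below maximises a Gaussian log-likelihood plus a Skilling-type entropy term over a non-negative vector; the response matrix, noise level, and regularisation weight are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Assumed linear response: measurements y = A @ x + Gaussian noise, with x >= 0
# (e.g. intake amounts mapped to expected urinalysis results by a response matrix A).
n_meas, n_x = 12, 5
A = np.abs(rng.normal(size=(n_meas, n_x)))
x_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])
sigma = 0.2
y = A @ x_true + sigma * rng.normal(size=n_meas)

alpha = 0.5   # weight of the entropy prior (illustrative)
m = 0.5       # default level in the entropy expression

def neg_log_posterior(x):
    x = np.maximum(x, 1e-12)                               # keep the estimate positive
    chi2 = np.sum((y - A @ x) ** 2) / (2.0 * sigma**2)     # Gaussian log-likelihood part
    entropy = np.sum(x - m - x * np.log(x / m))            # Skilling-type entropy prior
    return chi2 - alpha * entropy

res = minimize(neg_log_posterior, x0=np.full(n_x, m),
               bounds=[(1e-12, None)] * n_x, method="L-BFGS-B")
print("MAP estimate :", np.round(res.x, 3))
print("true values  :", x_true)
```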

  10. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is such an often-quoted example. The mix of information theory, statistics and computing technology has proved to be very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  11. Calculating the albedo characteristics by the method of transmission probabilities

    International Nuclear Information System (INIS)

    Lukhvich, A.A.; Rakhno, I.L.; Rubin, I.E.

    1983-01-01

The possibility of using the method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones is studied. The transmission probabilities method is a numerical method for solving the transport equation in integral form. All calculations have been conducted in a one-group approximation for planes and rods with different optical thicknesses and capture-to-scattering ratios. The above calculations for plane and cylindrical geometries have shown that the numerical method of transmission probabilities can be used for calculating the albedo characteristics of homogeneous and heterogeneous zones with high accuracy. In this case the computer time consumption is minimal even for cylindrical geometry, if the interpolated calculation of characteristics is used for the neutrons of the first path.

  12. COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY

    Directory of Open Access Journals (Sweden)

    V. L. Adzhienko

    2014-01-01

The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict a financial crisis of a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.

  13. Thermal disadvantage factor calculation by the multiregion collision probability method

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2004-01-01

A multi-region collision probability formulation that is capable of applying the white boundary condition directly is presented and applied to thermal neutron transport problems. The disadvantage factors computed are compared with their counterparts calculated by S_N methods with both direct and indirect application of the white boundary condition. The results of the ABH and collision probability methods with indirect application of the white boundary condition are also considered, and comparisons with benchmark Monte Carlo results are carried out. The studies show that the proposed formulation is capable of calculating the thermal disadvantage factor with sufficient accuracy without resorting to the fictitious scattering outer shell approximation associated with the indirect application of the white boundary condition in collision probability solutions.

  14. The method of modular characteristic direction probabilities in MPACT

    Energy Technology Data Exchange (ETDEWEB)

Liu, Z. [School of Nuclear Science and Technology, Xi'an Jiaotong University, No. 28 Xianning west road, Xi'an, Shaanxi 710049 (China); Kochunas, B.; Collins, B.; Downar, T. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2200 Bonisteel, Ann Arbor, MI 48109 (United States); Wu, H. [School of Nuclear Science and Technology, Xi'an Jiaotong University, No. 28 Xianning west road, Xi'an, Shaanxi 710049 (China)

    2013-07-01

The method of characteristic direction probabilities (CDP) is based on a modular ray tracing technique which combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC). This past year CDP was implemented in the transport code MPACT for 2-D and 3-D transport calculations. By coupling only the fine mesh regions passed by the characteristic rays in a particular direction, the scale of the probabilities matrix is much smaller compared to the CPM. At the same time, the CDP has the same capacity as the MOC for dealing with complicated geometries, because the same modular ray tracing techniques are used. Results from the C5G7 benchmark problems are given for different cases to show the accuracy and efficiency of the CDP compared to MOC. For the cases examined, the CDP and MOC methods were seen to differ in k_eff by about 1-20 pcm, and the computational efficiency of the CDP appears to be better than the MOC for some problems. However, in other problems, particularly when the CDP matrices have to be recomputed from changing cross sections, the CDP does not perform as well. This indicates an area of future work. (authors)

  15. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that conducts multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
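
    A minimal sketch of the thermodynamic (power-posterior) integration idea on a conjugate normal model whose marginal likelihood is known exactly for comparison. The model, sample size, and temperature ladder are illustrative, and the power posteriors are sampled exactly only because the example is conjugate; in general each heating coefficient requires its own MCMC run.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Conjugate normal model: y_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2);
# chosen because the marginal likelihood (model evidence) is known in closed form.
sigma, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(0.8, sigma, size=25)
n, ybar = len(y), y.mean()

log_Z_exact = (-0.5 * n * np.log(2 * np.pi * sigma**2)
               - np.sum((y - ybar) ** 2) / (2 * sigma**2)
               + 0.5 * np.log(2 * np.pi * sigma**2 / n)
               + norm.logpdf(ybar, mu0, np.sqrt(sigma**2 / n + tau0**2)))

def mean_log_like(theta):
    # Average of the full-data log-likelihood over a vector of theta draws
    return np.sum(norm.logpdf(y[:, None], theta[None, :], sigma), axis=0).mean()

# Thermodynamic integration: log Z = integral_0^1 E_beta[ log L(theta) ] d(beta),
# where theta ~ power posterior p_beta proportional to L(theta)^beta * prior(theta).
betas = np.linspace(0.0, 1.0, 21)
expectations = []
for beta in betas:
    prec = beta * n / sigma**2 + 1.0 / tau0**2                    # power posterior is normal
    mean = (beta * n * ybar / sigma**2 + mu0 / tau0**2) / prec    # here, so sample it exactly;
    theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=5000)      # in general run MCMC per beta
    expectations.append(mean_log_like(theta))

log_Z_ti = np.trapz(expectations, betas)
print(f"exact log evidence       : {log_Z_exact:.3f}")
print(f"thermodynamic integration: {log_Z_ti:.3f}")
```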

  16. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
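
    A small sketch of the maximum entropy method of moments itself (the Bayesian extension discussed above is not reproduced): the Lagrange multipliers of a density of the form exp(-Σ_k λ_k x^k) on [0, 1] are found by minimizing the convex dual, with an arbitrary Gaussian bump standing in for the unknown density.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy density on [0, 1] constrained to match the first K moments of a target.
# The maxent form is p(x) proportional to exp(-sum_k lambda_k * x**k); the multipliers
# are found by minimizing the convex dual  log Z(lambda) + sum_k lambda_k * mu_k.
x = np.linspace(0.0, 1.0, 2001)
target = np.exp(-0.5 * ((x - 0.3) / 0.1) ** 2)      # an arbitrary "true" density
target /= np.trapz(target, x)
K = 4
mu = np.array([np.trapz(target * x**k, x) for k in range(1, K + 1)])

def dual(lam):
    logp = -sum(lam[k] * x ** (k + 1) for k in range(K))
    m = logp.max()
    logZ = m + np.log(np.trapz(np.exp(logp - m), x))   # stable evaluation of log Z
    return logZ + lam @ mu

lam = minimize(dual, np.zeros(K), method="BFGS").x
p = np.exp(-sum(lam[k] * x ** (k + 1) for k in range(K)))
p /= np.trapz(p, x)

print("target moments   :", np.round(mu, 4))
print("recovered moments:", np.round([np.trapz(p * x**k, x) for k in range(1, K + 1)], 4))
```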

  17. METHOD OF FOREST FIRES PROBABILITY ASSESSMENT WITH POISSON LAW

    Directory of Open Access Journals (Sweden)

    A. S. Plotnikova

    2016-01-01

The article describes a method for forest fire burn probability estimation based on the Poisson distribution. The λ parameter is assumed to be the mean daily number of fires detected for each Forest Fire Danger Index class within a specific period of time. Thus, λ was calculated separately for the spring, summer and autumn seasons. Multi-annual daily Forest Fire Danger Index values together with an EO-derived hot spot map were the input data for the statistical analysis. The major result of the study is the generation of a database of forest fire burn probabilities. Results were validated against EO daily data on forest fires detected over Irkutsk oblast in 2013. The daily weighted average probability was shown to be linked with the daily number of detected forest fires. Meanwhile, a number of fires were found to have developed when the estimated probability was low. A possible explanation of this phenomenon is provided.
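
    The core of the Poisson model is one line: with λ the mean daily number of fires for a danger class, the daily burn probability is P(N ≥ 1) = 1 − exp(−λ). A tiny sketch, with made-up λ values rather than the study's database:

```python
import math

# Mean daily number of detected fires (lambda) per Forest Fire Danger Index class;
# the values below are invented for illustration, not taken from the study.
lam_by_danger_class = {1: 0.02, 2: 0.08, 3: 0.25, 4: 0.7, 5: 1.6}

def prob_at_least_one_fire(danger_class, days=1):
    """P(N >= 1) for a Poisson count with mean lambda * days."""
    lam = lam_by_danger_class[danger_class] * days
    return 1.0 - math.exp(-lam)

for cls in sorted(lam_by_danger_class):
    print(f"class {cls}: daily burn probability = {prob_at_least_one_fire(cls):.3f}")
```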

  18. Comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1996-07-01

In this paper the classical sequential probability ratio testing method (SPRT) is reconsidered. Every individual boundary crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. integrating these individual decisions. The procedure is recommended for use when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach a final conclusion with a high confidence level. (Author).
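
    For reference, the classical binary SPRT that both this comment and the original paper build on can be sketched in a few lines, using Wald's approximate boundaries A = (1 − β)/α and B = β/(1 − α); the Bayesian belief-updating layer proposed by Racz is not reproduced, and the hypotheses and error rates below are illustrative.

```python
import math
import random

# Binary SPRT for a Bernoulli parameter: H0: p = p0 versus H1: p = p1 (> p0),
# with nominal error rates alpha (false alarm) and beta (missed detection).
p0, p1, alpha, beta = 0.1, 0.3, 0.05, 0.05
upper = math.log((1 - beta) / alpha)        # accept H1 when the LLR exceeds this
lower = math.log(beta / (1 - alpha))        # accept H0 when the LLR falls below this

def sprt(observations):
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", len(observations)

random.seed(3)
sample = [random.random() < 0.3 for _ in range(200)]   # data actually generated under H1
print(sprt(sample))
```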

  19. Development of Thresholds and Exceedance Probabilities for Influent Water Quality to Meet Drinking Water Regulations

    Science.gov (United States)

    Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.

    2017-12-01

Drinking water treatment utilities (DWTU) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter as measured by total organic carbon (TOC), and physical removal of pathogenic microorganisms is achieved by filtration and monitored by turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment-specific data (i.e. streamflow) and have difficulty predicting them under non-stationary climate. A modelling framework was developed to assist DWTUs with assessing their risk of future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that only requires water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecasted using widely available data via statistical methods, where temperature, precipitation, Palmer Drought Severity Index, and NDVI with various lags were shown to be important
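
    A hedged sketch of the extreme value (peaks-over-threshold) step only: a generalized Pareto distribution is fitted to exceedances of a high threshold and used to estimate the probability of exceeding a treatment-derived TOC limit. The synthetic TOC record and the 6 mg/L limit are invented; they do not come from the study's utilities.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)

# Synthetic daily influent TOC record (mg/L); a real study would use utility data.
toc = rng.lognormal(mean=1.0, sigma=0.35, size=3650)

u = np.quantile(toc, 0.95)               # peaks-over-threshold modelling threshold
exceed = toc[toc > u] - u
rate = exceed.size / toc.size            # daily probability of exceeding u

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0)
shape, _, scale = genpareto.fit(exceed, floc=0)

toc_limit = 6.0                          # illustrative treatment-derived TOC threshold (mg/L)
if toc_limit > u:
    p_daily = rate * genpareto.sf(toc_limit - u, shape, loc=0, scale=scale)
else:
    p_daily = float(np.mean(toc > toc_limit))
p_annual = 1.0 - (1.0 - p_daily) ** 365  # chance of at least one exceedance per year

print(f"threshold u = {u:.2f} mg/L, GPD shape = {shape:.2f}, scale = {scale:.2f}")
print(f"daily P(TOC > {toc_limit} mg/L) = {p_daily:.2e}, annual = {p_annual:.3f}")
```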

  20. The application of probability methods for safeguards purposes

    International Nuclear Information System (INIS)

    Rumyantsev, A.N.

    1976-01-01

    The authors consider possible ways of applying probability methods to solve problems involved in accounting for nuclear materials. The increase in the flow of nuclear materials subject to IAEA safeguards makes it necessary to increase the accuracy of determination of the actual quantities of nuclear materials at all stages of their processing and use. It is proposed that the IAEA's automated system of accounting for nuclear materials, based on accounting information for each material balance zone and the results of random experimental checks performed by IAEA inspectors, be supplemented with mathematical models of the flow of nuclear materials in each balance zone based on the data supplied for each facility in the balance zone when it was placed under safeguards. The statistical error in determining the material balance and the material unaccounted for can be considerably reduced in this way even if the experimental control methods are retained. (author)

  1. Neutron transport by collision probability method in complicated geometries

    International Nuclear Information System (INIS)

    Constantin, Marin

    2000-01-01

For the first-flight collision probability (FFCP) method, memory requirements and execution time increase rapidly with the number of discrete regions. Generally, the use of the method is restricted to the cell/supercell level. However, the amazing developments in both computer hardware and computer architecture allow a real extension of the problem domain and a more detailed treatment of the geometry. Two ways are discussed in the paper: the direct design of new codes and the improvement of the mainframe old versions. The author's experience is focused on the performance improvement of the 3D integral transport code PIJXYZ (from an old version to a modern one) and on the design and development of the 2D transport code CP2D in recent years. In the first case an optimization process was performed before the parallelization. In the second, a modular design and the newest techniques (factorization of the geometry, the macrobands method, the mobile set of chords, the automatic calculation of the integration error, optimal algorithms for the innermost programming level, the mixed method for the tracking process and CP calculation, etc.) were adopted. In both cases the parallelization uses a PC network system. Some short examples of CP2D and PIJXYZ calculations are presented: the reactivity void effect in typical CANDU cells using a multistratified coolant model, a problem of some adjacent fuel assemblies, and a 3D simulation of CANDU reactivity devices. (author)

  2. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
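
    A minimal comparison of plain MC and LHS on a two-variable linear limit state whose failure probability is known in closed form; this is a stand-alone illustration of the two sampling methods, not NESSUS itself, and the distributions are invented.

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(5)

# Toy limit state: failure when load S exceeds resistance R, with
# R ~ N(10, 1) and S ~ N(6, 1.5); the exact failure probability is known.
def g(u):                                  # u holds independent standard normal variables
    R = 10.0 + 1.0 * u[:, 0]
    S = 6.0 + 1.5 * u[:, 1]
    return R - S

p_exact = norm.cdf(-(10.0 - 6.0) / np.sqrt(1.0**2 + 1.5**2))
n = 20_000

# Plain Monte Carlo
u_mc = rng.standard_normal((n, 2))
p_mc = np.mean(g(u_mc) < 0.0)

# Latin hypercube sampling mapped to standard normal space
u_lhs = norm.ppf(qmc.LatinHypercube(d=2, seed=5).random(n))
p_lhs = np.mean(g(u_lhs) < 0.0)

print(f"exact: {p_exact:.4f}   MC: {p_mc:.4f}   LHS: {p_lhs:.4f}")
```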

  3. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

The probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6 - 6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is big enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of the PDFs, it is clearly indicated that the role played by the time delay is more important than that of the embedding dimension in the reconstruction. The results demonstrate that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
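
    A sketch of the general recipe (delay embedding, PDF of pairwise distances, correlation-sum dimension estimate) on a synthetic quasi-periodic signal rather than real RR intervals; the embedding dimension, delay, and thinning factor are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)

# Surrogate "RR interval" series: a noisy quasi-periodic signal stands in for real data.
t = np.arange(4000)
x = np.sin(0.11 * t) + 0.5 * np.sin(0.053 * t + 1.0) + 0.05 * rng.standard_normal(t.size)

def delay_embed(series, dim, tau):
    n = series.size - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

dim, tau = 6, 8
points = delay_embed(x, dim, tau)[::4]            # thin the cloud to keep it small

# Pairwise distances between phase points in the reconstructed attractor
diff = points[:, None, :] - points[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(len(points), k=1)]

# PDF of the distances (normalized histogram) and the correlation sum C(r)
pdf, edges = np.histogram(dist, bins=60, density=True)
r = np.logspace(np.log10(dist[dist > 0].min()), np.log10(dist.max()), 25)
C = np.array([(dist < ri).mean() for ri in r])

# Correlation dimension ~ slope of log C(r) versus log r in the scaling region
mask = (C > 1e-3) & (C < 0.3)
slope = np.polyfit(np.log(r[mask]), np.log(C[mask]), 1)[0]
print(f"PDF of pairwise distances peaks near r = {edges[np.argmax(pdf)]:.2f}")
print(f"estimated correlation dimension = {slope:.2f}")
```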

  4. Analytic methods in applied probability in memory of Fridrikh Karpelevich

    CERN Document Server

    Suhov, Yu M

    2002-01-01

    This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable

  5. Methods for estimating drought streamflow probabilities for Virginia streams

    Science.gov (United States)

    Austin, Samuel H.

    2014-01-01

Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
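
    A toy version of the approach: a maximum-likelihood logistic fit of a summer drought indicator on (log) winter streamflow, with synthetic data in place of the report's gauge records and none of its published coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic record: mean winter flow (cfs) and whether the following summer flow
# dropped below a drought threshold.  A real application would use gauge data.
winter_flow = rng.lognormal(mean=4.0, sigma=0.5, size=300)
p_true = 1.0 / (1.0 + np.exp(0.08 * (winter_flow - 40.0)))   # low winter flow -> likely drought
summer_drought = rng.random(300) < p_true

X = np.log(winter_flow).reshape(-1, 1)
model = LogisticRegression(C=1e6)        # large C ~ essentially unpenalized maximum likelihood
model.fit(X, summer_drought)

for q in (20, 40, 80):
    p = model.predict_proba(np.log([[q]]))[0, 1]
    print(f"winter flow {q:>3} cfs -> P(summer drought) = {p:.2f}")
```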

  6. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

Estimation of the extreme response and failure probability of structures subjected to ultimate design loads is essential for the structural design of wind turbines according to the new standard IEC61400-1. This task is the focus of the present paper, by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines is, for illustrative purposes, investigated for given mean wind speeds and turbulence levels through the scheme of the extreme value distribution, instead of the approximate schemes of fitted distributions currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...

  7. Critical review of the probability of causation method

    International Nuclear Information System (INIS)

    Cox, L.A. Jr.; Fiksel, J.R.

    1985-01-01

    In a more controversial report than the others in the study, the authors use one scientific discipline to review the work of another discipline. Their proposal recognizes the imprecision that develops in moving from group to individual interpretations of causal effects by substituting the term assigned share for probability of causation. The authors conclude that the use of a formula will not provide reliable measures of risk attribution in individual cases. The gap between scientific certainty and assigning shares of responsibility must be filled by subjective value judgments supplied by the scientists. 22 references, 2 figures, 4 tables

  8. Discrete probability models and methods probability on graphs and trees, Markov chains and random fields, entropy and coding

    CERN Document Server

    Brémaud, Pierre

    2017-01-01

The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to the possible applications, in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.

  9. Hybridization of the probability perturbation method with gradient information

    DEFF Research Database (Denmark)

    Johansen, Kent; Caers, J.; Suzuki, S.

    2007-01-01

Geostatistically based history-matching methods make it possible to devise history-matching strategies that will honor geologic knowledge about the reservoir. However, the performance of these methods is known to be impeded by slow convergence rates resulting from the stochastic nature of the algorithm...

  10. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    Science.gov (United States)

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
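
    A small sketch of the basic POI statistic: the observed proportion of replicates identified, with a Wilson score interval as one reasonable uncertainty measure (the report's exact interval conventions may differ); the counts below are invented.

```python
from statistics import NormalDist

def poi_with_ci(identified, replicates, conf=0.95):
    """Probability of identification and a Wilson score confidence interval."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = identified / replicates
    denom = 1 + z**2 / replicates
    centre = (p + z**2 / (2 * replicates)) / denom
    half = z * ((p * (1 - p) + z**2 / (4 * replicates)) / replicates) ** 0.5 / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Target material should give POI near 1, nontarget material POI near 0.
for label, k, n in [("target material", 29, 30), ("nontarget material", 2, 30)]:
    poi, lo, hi = poi_with_ci(k, n)
    print(f"{label}: POI = {poi:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```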

  11. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
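
    A simplified sketch of the surrogate-plus-importance-sampling idea: a Gaussian process metamodel replaces the limit state, and importance sampling is performed with a Gaussian density centred at an approximate design point found on the surrogate, rather than the paper's Markov-chain kernel density. The limit state function and all settings are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.default_rng(8)

def g(u):       # limit state in standard normal space; g(u) <= 0 means failure
    return 3.0 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0) + 0.05 * u[:, 0] * u[:, 1]

# 1) Kriging (Gaussian process) metamodel fitted to a modest design of experiments
U_doe = rng.uniform(-5.0, 5.0, size=(80, 2))
gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=2.0), normalize_y=True)
gp.fit(U_doe, g(U_doe))

# 2) Approximate design point: surrogate-predicted failure sample closest to the origin
U_scan = rng.uniform(-6.0, 6.0, size=(20_000, 2))
fail_hat = gp.predict(U_scan) <= 0.0
u_star = U_scan[fail_hat][np.argmin(np.linalg.norm(U_scan[fail_hat], axis=1))]

# 3) Importance sampling on the surrogate with a density centred at the design point
h = multivariate_normal(mean=u_star, cov=np.eye(2))
f = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))
U_is = h.rvs(size=20_000, random_state=9)
w = f.pdf(U_is) / h.pdf(U_is)
p_f = np.mean(w * (gp.predict(U_is) <= 0.0))

print(f"importance-sampling estimate on the surrogate: {p_f:.2e}")
print(f"FORM-like reference for the linear part      : {norm.cdf(-3.0):.2e}")
```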

  12. Risk prediction, safety analysis and quantitative probability methods - a caveat

    International Nuclear Information System (INIS)

    Critchley, O.H.

    1976-01-01

Views are expressed on the use of quantitative techniques for the determination of value judgements in nuclear safety assessments, hazard evaluation, and risk prediction. Caution is urged when attempts are made to quantify value judgements in the field of nuclear safety. Criteria are given for the meaningful application of reliability methods, but doubts are expressed about their application to safety analysis, risk prediction and design guidance for experimental or prototype plant. Doubts are also expressed about some concomitant methods of population dose evaluation. The complexities of new designs of nuclear power plants make the problem of safety assessment more difficult, but some possible approaches are suggested as alternatives to the quantitative techniques criticized. (U.K.)

  13. Reliability analysis of reactor systems by applying probability method; Analiza pouzdanosti reaktorskih sistema primenom metoda verovatnoce

    Energy Technology Data Exchange (ETDEWEB)

    Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1974-12-15

The probability method chosen for analysing reactor system reliability is considered realistic since it is based on verified experimental data. In fact this is a statistical method. The probability method developed takes into account the probability distribution of permitted levels of relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general, and was used for the problem of thermal safety analysis of a reactor system. This analysis makes it possible to study the basic properties of the system under different operating conditions; expressed in the form of probabilities, the results show the reliability of the system as a whole as well as the reliability of each component.

  14. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  15. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
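
    A bare-bones subset simulation sketch in standard normal space with a linear limit state whose failure probability is known, using component-wise (modified Metropolis) conditional sampling; it is not the passive-system model of the paper, and all settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)

# Linear limit state in standard normal space; failure when g(u) <= 0.
# The exact failure probability is Phi(-beta), used here only as a check.
beta, dim = 4.0, 2
def g(u):
    return beta - u.sum(axis=1) / np.sqrt(dim)

n, p0 = 2000, 0.1                     # samples per level, conditional level probability
u = rng.standard_normal((n, dim))
gu = g(u)
p_f, level = 1.0, 0

while True:
    level += 1
    idx = np.argsort(gu)
    n_seed = int(p0 * n)
    b = max(gu[idx[n_seed - 1]], 0.0)  # intermediate threshold (0 once failure is common)
    p_f *= np.mean(gu <= b)
    if b == 0.0 or level > 10:
        break
    # Modified Metropolis: grow chains from the seeds, conditioned on {g <= b}
    chains_u, chains_g = [u[idx[:n_seed]]], [gu[idx[:n_seed]]]
    for _ in range(n // n_seed - 1):
        cur_u, cur_g = chains_u[-1], chains_g[-1]
        prop = cur_u + 0.8 * rng.standard_normal(cur_u.shape)
        ratio = np.exp(-0.5 * (prop**2 - cur_u**2))          # standard normal density ratio
        accept = rng.random(cur_u.shape) < np.minimum(1.0, ratio)
        cand = np.where(accept, prop, cur_u)
        cand_g = g(cand)
        ok = cand_g <= b                                      # reject moves leaving {g <= b}
        chains_u.append(np.where(ok[:, None], cand, cur_u))
        chains_g.append(np.where(ok, cand_g, cur_g))
    u, gu = np.vstack(chains_u), np.concatenate(chains_g)

print(f"subset simulation estimate: {p_f:.2e}   exact: {norm.cdf(-beta):.2e}")
```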

  16. Molecular machines regulating the release probability of synaptic vesicles at the active zone.

    Directory of Open Access Journals (Sweden)

Christoph Koerber

    2016-03-01

The fusion of synaptic vesicles (SVs) with the plasma membrane of the active zone (AZ) upon arrival of an action potential (AP) at the presynaptic compartment is a tightly regulated probabilistic process crucial for information transfer. The probability of a SV to release its transmitter content in response to an AP, termed release probability (Pr), is highly diverse both at the level of entire synapses and individual SVs at a given synapse. Differences in Pr exist between different types of synapses, between synapses of the same type, synapses originating from the same axon and even between different SV subpopulations within the same presynaptic terminal. The Pr of SVs at the AZ is set by a complex interplay of different presynaptic properties including the availability of release-ready SVs, the location of the SVs relative to the voltage-gated calcium channels (VGCCs) at the AZ, the magnitude of calcium influx upon arrival of the AP, the buffering of calcium ions as well as the identity and sensitivity of the calcium sensor. These properties are not only interconnected, but can also be regulated dynamically to match the requirements of activity patterns mediated by the synapse. Here, we review recent advances in identifying molecules and molecular machines taking part in the determination of vesicular Pr at the AZ.

  17. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    Science.gov (United States)

    2014-01-01

Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model can better reflect authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines analysis of the circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids using the traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism

  18. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  19. Method to Calculate Accurate Top Event Probability in a Seismic PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong Univ., Seoul (Korea, Republic of)

    2014-05-15

ACUBE (Advanced Cutset Upper Bound Estimator) calculates the top event probability and importance measures from cutsets by dividing the cutsets into major and minor groups depending on the cutset probability, where the cutsets with higher probability are included in the major group and the others in the minor group, and by converting the major cutsets into a Binary Decision Diagram (BDD). By applying the ACUBE algorithm to the seismic PSA cutsets, the accuracy of the top event probability and importance measures can be significantly improved. ACUBE works by dividing the cutsets into two groups (higher and lower cutset probability groups), calculating the top event probability and importance measures in each group, and combining the two results from the two groups. Here, ACUBE calculates the top event probability and importance measures of the higher cutset probability group exactly. On the other hand, ACUBE calculates these measures of the lower cutset probability group with an approximation such as MCUB. The ACUBE algorithm is useful for decreasing the conservatism that is caused by approximating the top event probability and importance measure calculations with given cutsets. This study shows that careful attention should be paid and an appropriate method provided in order to avoid significant overestimation in the top event probability calculation. Due to the strengths of ACUBE explained in this study, ACUBE has become a vital tool for calculating a more accurate CDF from the seismic PSA cutsets than the conventional probability calculation method.
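
    A toy illustration of the major/minor split: the few highest-probability cutsets are evaluated exactly by inclusion-exclusion (standing in for the BDD step), the rest with the minimal cut upper bound (MCUB), and the two results are combined; the basic event probabilities and the way the two groups are combined here are assumptions, not ACUBE's actual algorithm.

```python
from itertools import combinations
from functools import reduce

# Basic event probabilities (illustrative, not from a real seismic PSA)
p = {"A": 0.03, "B": 0.05, "C": 0.02, "D": 0.10, "E": 0.01, "F": 0.2}
cutsets = [{"A", "B"}, {"A", "C"}, {"B", "D"}, {"C", "E"}, {"D", "E", "F"}, {"A", "F"}]

def cutset_prob(cs):
    return reduce(lambda acc, e: acc * p[e], cs, 1.0)

def exact_union(css):
    """Inclusion-exclusion over a small set of cutsets (basic events independent)."""
    total = 0.0
    for k in range(1, len(css) + 1):
        for combo in combinations(css, k):
            union_events = set().union(*combo)
            total += (-1) ** (k + 1) * cutset_prob(union_events)
    return total

def mcub(css):
    """Minimal cut upper bound: 1 - prod(1 - P(cutset))."""
    prod = 1.0
    for cs in css:
        prod *= 1.0 - cutset_prob(cs)
    return 1.0 - prod

# Split cutsets by probability: treat the largest ones exactly, the rest approximately
ranked = sorted(cutsets, key=cutset_prob, reverse=True)
major, minor = ranked[:3], ranked[3:]
p_major = exact_union(major)                       # stands in for the BDD calculation
p_minor = mcub(minor)
p_top = 1.0 - (1.0 - p_major) * (1.0 - p_minor)    # assumed combination of the two groups

print(f"all-MCUB estimate: {mcub(cutsets):.5f}")
print(f"major group exact: {p_major:.5f}")
print(f"split estimate   : {p_top:.5f}")
```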

  20. Assessment procedure and probability determination methods of aircraft crash events in siting for nuclear power plants

    International Nuclear Information System (INIS)

    Zheng Qiyan; Zhang Lijun; Huang Weiqi; Yin Qingliao

    2010-01-01

The assessment procedure for aircraft crash events in siting for nuclear power plants, and the methods of probability determination in the two stages of preliminary screening and detailed evaluation, are introduced in this paper. In addition to general air traffic, airport operations and aircraft in corridors, the probability of aircraft crashes due to military operations in military airspaces is also considered here. (authors)

  1. Calculation of transition probabilities using the multiconfiguration Dirac-Fock method

    International Nuclear Information System (INIS)

    Kim, Yong Ki; Desclaux, Jean Paul; Indelicato, Paul

    1998-01-01

    The performance of the multiconfiguration Dirac-Fock (MCDF) method in calculating transition probabilities of atoms is reviewed. In general, the MCDF wave functions will lead to transition probabilities accurate to ∼ 10% or better for strong, electric-dipole allowed transitions for small atoms. However, it is more difficult to get reliable transition probabilities for weak transitions. Also, some MCDF wave functions for a specific J quantum number may not reduce to the appropriate L and S quantum numbers in the nonrelativistic limit. Transition probabilities calculated from such MCDF wave functions for nonrelativistically forbidden transitions are unreliable. Remedies for such cases are discussed

  2. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2003-01-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be

  3. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    Science.gov (United States)

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
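
    For orientation, the standard Bayesian predictive probability for a simple binomial endpoint (not the longitudinal closed forms derived in the paper) sums a beta-binomial predictive distribution over the future responders that would make the final analysis succeed; the prior, sample sizes, and thresholds below are illustrative.

```python
from scipy.stats import betabinom, beta

# Interim look at a single-arm binomial trial: x responders out of n enrolled so far,
# N planned in total, and a Beta(a, b) prior on the response rate.
a, b = 0.5, 0.5
n, x, N = 40, 14, 100
p0 = 0.25                      # null response rate
success_threshold = 0.90       # final success: P(response rate > p0 | all data) > 0.90

def final_success(total_resp):
    # Posterior probability that the response rate exceeds p0 at the final analysis
    return beta.sf(p0, a + total_resp, b + N - total_resp) > success_threshold

# Predictive probability of success: sum over future responders y ~ Beta-Binomial
m = N - n
pp = sum(betabinom.pmf(y, m, a + x, b + n - x)
         for y in range(m + 1) if final_success(x + y))
print(f"predictive probability of final success: {pp:.3f}")
```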

  4. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

In this paper, the combined method of response surface and importance sampling was applied to calculate the parameter failure probability of a thermodynamic system. A mathematical model was presented for the parameter failure of the physical process in the thermodynamic system, from which the combined arithmetic model of response surface and importance sampling was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, since it achieves satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)

  5. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  6. An adjusted probability method for the identification of sociometric status in classrooms

    NARCIS (Netherlands)

    García Bacete, F.J.; Cillessen, A.H.N.

    2017-01-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of

  7. DEVELOPMENT OF THE PROBABLY-GEOGRAPHICAL FORECAST METHOD FOR DANGEROUS WEATHER PHENOMENA

    Directory of Open Access Journals (Sweden)

    Elena S. Popova

    2015-12-01

This paper presents a scheme for the probably-geographical forecast method for dangerous weather phenomena. Two general stages in the realization of this method are discussed. It is emphasized that the method under development responds to topical questions of modern weather forecasting and its associated phenomena: the forecast is carried out for a specific point in space and a corresponding moment of time.

  8. 8th International Conference on Soft Methods in Probability and Statistics

    CERN Document Server

    Giordani, Paolo; Vantaggi, Barbara; Gagolewski, Marek; Gil, María; Grzegorzewski, Przemysław; Hryniewicz, Olgierd

    2017-01-01

This proceedings volume is a collection of peer-reviewed papers presented at the 8th International Conference on Soft Methods in Probability and Statistics (SMPS 2016), held in Rome (Italy). The book is dedicated to data science, which aims at developing automated methods to analyze massive amounts of data and to extract knowledge from them. It shows how data science employs various programming techniques and methods of data wrangling, data visualization, machine learning, probability and statistics. The soft methods proposed in this volume represent a collection of tools in these fields that can also be useful for data science.

  9. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
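
    A minimal comparison of probability estimates from a random forest and from logistic regression on synthetic data, scored with the Brier score; it only illustrates the predict-then-evaluate workflow, not the consistency theory reviewed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

# Synthetic dichotomous outcome with 10 covariates
X, y = make_classification(n_samples=3000, n_features=10, n_informative=5,
                           random_state=11)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=11)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=11),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]   # estimated P(y = 1 | x)
    print(f"{name:>20}: Brier score = {brier_score_loss(y_te, prob):.3f}")
```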

  10. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  11. Probability approaching method (PAM) and its application on fuel management optimization

    International Nuclear Information System (INIS)

    Liu, Z.; Hu, Y.; Shi, G.

    2004-01-01

    For the multi-cycle reloading optimization problem, a new solution scheme is presented. The multi-cycle problem is decoupled into a number of relatively independent mono-cycle issues; this non-linear programming problem with complex constraints is then solved by an advanced new algorithm, the probability approaching method (PAM), which is based on probability theory. Results on a simplified core model show the effectiveness of this new multi-cycle optimization scheme. (authors)

  12. An evaluation method for tornado missile strike probability with stochastic correction

    International Nuclear Information System (INIS)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo

    2017-01-01

    An efficient evaluation method for the probability of a tornado missile strike without using the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, QV(r), of a missile located at position r, where the local wind speed is V. In contrast, the annual exceedance probability of local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of tornado missile strike on a structure with the convolutional integration of product of QV(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm the validity, and to quantitatively verify the results for two extreme cases in which an object is located just in the vicinity of or far away from the structure

  13. An evaluation method for tornado missile strike probability with stochastic correction

    Energy Technology Data Exchange (ETDEWEB)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo [Nuclear Risk Research Center (External Natural Event Research Team), Central Research Institute of Electric Power Industry, Abiko (Japan)

    2017-03-15

    An efficient evaluation method for the probability of a tornado missile strike without using the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, QV(r), of a missile located at position r, where the local wind speed is V. In contrast, the annual exceedance probability of local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of tornado missile strike on a structure with the convolutional integration of product of QV(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm the validity, and to quantitatively verify the results for two extreme cases in which an object is located just in the vicinity of or far away from the structure.

  14. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    Science.gov (United States)

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
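
    A minimal sketch of one of the weight-construction approaches compared above (a normal conditional density with stabilized weights); the data-generating model, variable names, and effect sizes are illustrative assumptions, not the authors' simulation.

```python
# Hedged sketch: stabilized inverse probability weights for a continuous
# exposure, assuming a normal conditional exposure density (one of the
# approaches compared above). Data-generating details are invented.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
c = rng.normal(size=n)                       # a single confounder
a = 0.5 * c + rng.normal(size=n)             # continuous exposure
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * a + 0.4 * c))))  # binary outcome

# Denominator model: exposure given the confounder
den_fit = sm.OLS(a, sm.add_constant(c)).fit()
den = norm.pdf(a, loc=den_fit.fittedvalues, scale=np.sqrt(den_fit.scale))

# Numerator model: marginal exposure density (stabilization)
num = norm.pdf(a, loc=a.mean(), scale=a.std(ddof=1))

sw = num / den                               # stabilized weights
msm = sm.GLM(y, sm.add_constant(a), family=sm.families.Binomial(),
             freq_weights=sw).fit()
print("marginal OR per unit exposure:", np.exp(msm.params[1]))
```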

  15. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at the time t under the condition that it was down at time t'(t'<=t). The limitations for the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  16. Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin

    2011-01-01

    The probability neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment in Monte Carlo (MC) simulation and is validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracking or the delta-tracking method, large amounts of time are spent in finding out which cell a particle is located in. The traditional way is to search the cells one by one in a certain sequence defined previously. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM method optimizes the search sequence, i.e., the cells with larger probability are searched preferentially. The PNM method is implemented in the RMC code, and the numerical results show that considerable geometry-treatment time is saved in MC calculations for complicated systems; the method is especially effective in delta-tracking simulation. (author)
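
    A toy sketch of the underlying idea (not the RMC implementation): order the candidate cells by the empirical probability that particles are found in them, so high-probability cells are tested first. The cell predicates and tallies below are invented for illustration.

```python
# Hedged toy sketch of a probability-ordered cell search (not the RMC code):
# cells that particles enter more often are tested first, reducing the
# average number of containment tests. All geometry here is invented.
import random

cells = {
    "fuel":      lambda x, y: x * x + y * y < 0.4 ** 2,
    "cladding":  lambda x, y: 0.4 ** 2 <= x * x + y * y < 0.45 ** 2,
    "moderator": lambda x, y: x * x + y * y >= 0.45 ** 2,
}
entry_counts = {name: 1 for name in cells}   # running tallies -> probabilities

def locate(x, y):
    """Search cells in order of decreasing empirical entry probability."""
    for name in sorted(cells, key=entry_counts.get, reverse=True):
        if cells[name](x, y):
            entry_counts[name] += 1
            return name
    raise ValueError("point not in any cell")

random.seed(0)
for _ in range(10000):
    locate(random.uniform(-1, 1), random.uniform(-1, 1))
print({k: v / sum(entry_counts.values()) for k, v in entry_counts.items()})
```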

  17. The Most Probable Limit of Detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, E.R. van den

    Classical microbiological methods have nowadays unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industry, but the implementation in pharmaceutical industry is hampered by for instance stringent regulations on

  18. The most probable limit of detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, van den E.R.

    2010-01-01

    Classical microbiological methods have nowadays unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industry, but the implementation in pharmaceutical industry is hampered by for instance stringent regulations on

  19. Emission probability determination of {sup 133}Ba by the sum-peak method

    Energy Technology Data Exchange (ETDEWEB)

    Silva, R.L. da; Almeida, M.C.M. de; Delgado, J.U.; Poledna, R.; Araujo, M.T.F.; Trindade, O.L.; Veras, E.V. de; Santos, A.; Rangel, J.; Ferreira Filho, A.L., E-mail: ronaldo@ird.gov.br, E-mail: marcandida@yahoo.com.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2016-07-01

    The National Laboratory for Ionizing Radiation Metrology (LNMRI/IRD/CNEN) maintains several measurement methods in order to ensure low uncertainties in its results. Through gamma-ray spectrometry analysis by the absolute sum-peak method, the standardization of the {sup 133}Ba activity and the determination of its emission probabilities at different energies were performed with reduced uncertainties. The advantages of radionuclide calibration by an absolute method are accuracy, low uncertainties, and the fact that no radionuclide reference standards are required. {sup 133}Ba is used in research laboratories for the calibration of detectors in different work areas. The uncertainties for the activity and for the emission probability results are lower than 1%. (author)

  20. Multiregion, multigroup collision probability method with white boundary condition for light water reactor thermalization calculations

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2005-01-01

    A multiregion, multigroup collision probability method with white boundary condition is developed for thermalization calculations of light water moderated reactors. Hydrogen scatterings are treated by Nelkin's kernel while scatterings from other nuclei are assumed to obey the free-gas scattering kernel. The isotropic return (white) boundary condition is applied directly by using the appropriate collision probabilities. Comparisons with alternate numerical methods show the validity of the present formulation. Comparisons with some experimental results indicate that the present formulation is capable of calculating disadvantage factors which are closer to the experimental results than alternative methods

  1. The application of probability methods with a view to improving the quality of equipment

    International Nuclear Information System (INIS)

    Carnino, A.; Gachot, B.; Greppo, J.-F.; Guitton, J.

    1976-01-01

    After stating that reliability and availability can be considered as parameters allowing the quality of equipment to be estimated, the chief aspects of the use of probability methods in the field of quality are described. These methods are mainly applied at the design, operation and maintenance levels of the equipment, as well as at the compilation stage of the corresponding data. [fr]

  2. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

    The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of demands during the failure-free test period and T is the length of the failure-free test period. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
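
    A minimal sketch of the zero-failure situation described above, assuming a Poisson failure process with a Jeffreys gamma prior; the prior choice and the observation period are illustrative assumptions, not the authors' exact prescription.

```python
# Hedged sketch: posterior for a failure rate after T hours with zero failures,
# assuming a Poisson failure process and a Jeffreys Gamma(0.5, 0) prior.
# The prior and the numbers below are illustrative assumptions.
from scipy.stats import gamma

T = 10000.0                       # failure-free observation time (hours)
n_failures = 0
alpha0, beta0 = 0.5, 0.0          # Jeffreys prior Gamma(shape, rate)

alpha_post = alpha0 + n_failures
beta_post = beta0 + T             # posterior rate parameter

post_mean = alpha_post / beta_post           # compare with the 1/T rule of thumb
upper95 = gamma.ppf(0.95, a=alpha_post, scale=1.0 / beta_post)
print(f"posterior mean rate  = {post_mean:.2e} per hour")
print(f"95% upper bound rate = {upper95:.2e} per hour")
```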

  3. Decentralized method for utility regulation

    Energy Technology Data Exchange (ETDEWEB)

    Loeb, M. (North Carolina State Univ., Raleigh); Magat, W.A.

    1979-10-01

    A new institutional arrangement for regulating utilities is suggested that minimizes the costs of natural monopolies. A mixture of regulation and franchising, the plan draws on the advantages of each and eliminates many of the problems. The proposal allows utilities to set their own price on the basis of demand and marginal-cost projections. Subsidies are provided by the regulatory agency if there is a consumer surplus. The system encourages the utility to select a competitive price and to produce only the amount of service needed. Operating efficiency is encouraged by rewarding cost reductions and discouraging cost overstatement at the rate review. The regulatory agency would not need to take action to bring price and marginal costs into equality. The franchise sale can be made by competitive bidding, in which the bidders would capitalize part or all of the subsidy or the regulatory agency could recover the subsidy in a lump-sum tax on the utility.

  4. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted through pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
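
    As a generic illustration of importance sampling for a small failure probability (not the AP1000 model or the adaptive scheme above), the sketch below shifts the sampling density toward the failure region of a toy limit-state function; the limit state and both densities are assumptions.

```python
# Hedged sketch: importance sampling for a small failure probability on a toy
# limit-state function g(x) <= 0 (not the passive-system model above).
# The limit state, the nominal density, and the shifted density are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
g = lambda x: 4.0 - x              # failure when x >= 4 (rare under N(0,1))

n = 20000
# Importance density centered in the failure region
x = rng.normal(loc=4.0, scale=1.0, size=n)
weights = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 4.0, 1.0)   # likelihood ratio
p_fail = np.mean((g(x) <= 0) * weights)

print(f"IS estimate  : {p_fail:.3e}")
print(f"exact P(X>=4): {norm.sf(4.0):.3e}")
```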

  5. Using the probability method for multigroup calculations of reactor cells in a thermal energy range

    International Nuclear Information System (INIS)

    Rubin, I.E.; Pustoshilova, V.S.

    1984-01-01

    The possibility of using the transmission probability method with performance interpolation for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichments in the thermal energy range are given. High accuracy is obtained with low computer time consumption. The use of the transmission probability method is particularly reasonable in algorithms of programs compiled for computers with a significant reserve of internal memory.

  6. An Adjusted Probability Method for the Identification of Sociometric Status in Classrooms

    Directory of Open Access Journals (Sweden)

    Francisco J. García Bacete

    2017-10-01

    Full Text Available Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of each sociometric group, the sources of discrepant classifications between methods, the behavioral profiles of discrepant and consistent cases between methods, and age differences. Method: We compared the GB adjusted probability method with the standard score model proposed by Coie and Dodge (CD) and the probability score model proposed by Newcomb and Bukowski (NB). The GB method is an adaptation of the NB method: cutoff scores are derived from the distribution of raw liked-most and liked-least scores in each classroom instead of using the fixed and absolute scores of the NB method. The criteria for neglected status are also modified by the GB method. Participants were 569 children (45% girls) from 23 elementary school classrooms (13 Grades 1–2, 10 Grades 5–6). Results: We found agreement as well as differences between the three methods. The CD method yielded discrepancies in the classifications because of its dependence on z-scores and composite dimensions. The NB method was less optimal in the validation of the behavioral characteristics of the sociometric groups, because of its fixed cutoffs for identifying preferred, rejected, and controversial children, and because it does not differentiate between positive and negative nominations for neglected children. The GB method addressed some of the limitations of the other two methods. It improved the classification of neglected students, as well as of discrepant cases in the preferred, rejected, and controversial groups. Agreement between methods was higher with the oldest children. Conclusion: GB is a valid sociometric method, as evidenced by the behavior profiles of the sociometric status groups identified with this method.

  7. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    Full Text Available We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA so that they can be run on a NVIDIA graphics processing unit (GPU. From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
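
    A compact sketch of the Jacobi iteration for Ax = b, the first of the two methods compared above; the matrix, right-hand side, and tolerance are small illustrative choices, not matrices exported from a probabilistic model checker.

```python
# Hedged sketch: Jacobi iteration for Ax = b, as used to compute reachability
# probabilities. The matrix below is a small illustrative example only.
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10000):
    D = np.diag(A)                      # diagonal part
    R = A - np.diagflat(D)              # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 0.5, 0.25])
print("Jacobi solution:", jacobi(A, b))
print("NumPy solution :", np.linalg.solve(A, b))
```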

  8. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  9. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  10. Disadvantage factors for square lattice cells using a collision probability method

    International Nuclear Information System (INIS)

    Raghav, H.P.

    1976-01-01

    The flux distribution in an infinite square lattice consisting of cylindrical fuel rods and moderator is calculated by using a collision probability method. Neutrons are assumed to be monoenergetic and the sources as well as scattering are assumed to be isotropic. Carlvik's method for the calculation of collision probability is used. The important features of the method are that the square boundary is treated exactly and the contribution of the surrounding cells is calculated explicitly. The method is programmed in a computer code CELLC. This carries out integration by Simpson's rule. The convergence and accuracy of CELLC is assessed by computing disadvantage factors for the well-known Thie lattices and comparing the results with Monte Carlo and other integral transport theory methods used elsewhere. It is demonstrated that it is not correct to apply the white boundary condition in the Wigner Seitz Cell for low pitch and low cross sections. (orig.) [de

  11. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, and this fact appeals to researchers in the operations research community for potential application areas. However, the lack of a decision theory for belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used. However, when the plausibility transformation is used, they result in the same probability distribution. This result shows that the choice of the combination rule and the probability transformation method may have a significant effect on decision making, since it may change the decision alternative selected. This result is illustrated via an example of missile type identification.
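
    To make the two transformations concrete, here is a small sketch computing the pignistic and plausibility transforms of a basic mass assignment on a toy frame; the frame and mass values are invented and are not taken from the missile-identification example.

```python
# Hedged sketch: pignistic and plausibility transformations of a toy basic
# probability assignment (BPA). The frame and mass values are invented.
frame = {"A", "B", "C"}
bpa = {frozenset({"A"}): 0.4,
       frozenset({"A", "B"}): 0.3,
       frozenset(frame): 0.3}          # consonant (nested) focal elements

def pignistic(bpa, frame):
    """BetP(x) = sum over focal sets S containing x of m(S)/|S|."""
    return {x: sum(m / len(S) for S, m in bpa.items() if x in S) for x in frame}

def plausibility_transform(bpa, frame):
    """Normalize the singleton plausibilities Pl(x) = sum_{S: x in S} m(S)."""
    pl = {x: sum(m for S, m in bpa.items() if x in S) for x in frame}
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

print("pignistic   :", pignistic(bpa, frame))
print("plausibility:", plausibility_transform(bpa, frame))
```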

  12. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
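
    The following sketch shows the general shape of a LASSO-penalized logistic fit evaluated by repeated cross-validation, in the spirit of the NTCP comparison above; it does not reproduce the authors' xerostomia data or protocol, and the synthetic features merely stand in for dose-volume and clinical predictors.

```python
# Hedged sketch: LASSO-penalized logistic regression with repeated
# cross-validation, in the spirit of the NTCP modeling comparison above.
# The synthetic features and all settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)
lasso_logit = LogisticRegressionCV(penalty="l1", solver="liblinear",
                                   Cs=20, cv=5, max_iter=5000)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
auc = cross_val_score(lasso_logit, X, y, cv=cv, scoring="roc_auc")
print(f"repeated-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```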

  13. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    Science.gov (United States)

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
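
    As a hedged illustration of a POD-style response curve (not the validation procedure of the paper itself), the sketch below fits a logistic model of detection probability versus concentration to invented qualitative test results.

```python
# Hedged sketch: fitting a probability-of-detection (POD) curve, i.e. the
# probability of a positive qualitative result as a function of concentration.
# The test data below are invented for illustration.
import numpy as np
import statsmodels.api as sm

conc = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 12)        # spiked concentrations
rng = np.random.default_rng(0)
true_pod = 1 / (1 + np.exp(-(np.log(conc) - np.log(1.5)) / 0.4))
detected = rng.binomial(1, true_pod)                    # 1 = identified

X = sm.add_constant(np.log(conc))
fit = sm.GLM(detected, X, family=sm.families.Binomial()).fit()

# Estimated concentration at POD = 0.95 (often reported as an LOD-type value)
b0, b1 = fit.params
lod95 = np.exp((np.log(0.95 / 0.05) - b0) / b1)
print(f"estimated concentration with POD = 0.95: {lod95:.2f}")
```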

  14. A method for the estimation of the probability of damage due to earthquakes

    International Nuclear Information System (INIS)

    Alderson, M.A.H.G.

    1979-07-01

    The available information on seismicity within the United Kingdom has been combined with building damage data from the United States to produce a method of estimating the probability of damage to structures due to the occurrence of earthquakes. The analysis has been based on the use of site intensity as the major damage producing parameter. Data for structural, pipework and equipment items have been assumed and the overall probability of damage calculated as a function of the design level. Due account is taken of the uncertainties of the seismic data. (author)

  15. Calculating method on human error probabilities considering influence of management and organization

    International Nuclear Information System (INIS)

    Gao Jia; Huang Xiangrui; Shen Zupei

    1996-01-01

    This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level Influence Diagram (ID), which was originally only a tool for the construction and representation of decision trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a method for the quantitative assessment of the ID which can be applied to quantifying the probabilities of human errors in risk assessments, and especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach.

  16. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the

  17. Response and reliability analysis of nonlinear uncertain dynamical structures by the probability density evolution method

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Peng, Yongbo; Sichani, Mahdi Teimouri

    2016-01-01

    The paper deals with the response and reliability analysis of hysteretic or geometric nonlinear uncertain dynamical systems of arbitrary dimensionality driven by stochastic processes. The approach is based on the probability density evolution method proposed by Li and Chen (Stochastic dynamics...... of structures, 1st edn. Wiley, London, 2009; Probab Eng Mech 20(1):33–44, 2005), which circumvents the dimensional curse of traditional methods for the determination of non-stationary probability densities based on Markov process assumptions and the numerical solution of the related Fokker–Planck and Kolmogorov......–Feller equations. The main obstacle of the method is that a multi-dimensional convolution integral needs to be carried out over the sample space of a set of basic random variables, for which reason the number of these need to be relatively low. In order to handle this problem an approach is suggested, which...

  18. Estimation method for first excursion probability of secondary system with impact and friction using maximum response

    International Nuclear Information System (INIS)

    Shigeru Aoki

    2005-01-01

    The secondary system, such as piping, tanks and other mechanical equipment, is installed in the primary system, such as the building. Important secondary systems should be designed to maintain their function even if they are subjected to destructive earthquake excitations. The secondary system has many nonlinear characteristics. Impact and friction characteristics, which are observed in mechanical supports and joints, are common nonlinearities; in the form of impact dampers and friction dampers, they are also used for the reduction of seismic response. In this paper, analytical methods for the first excursion probability of the secondary system with impact and friction, subjected to earthquake excitation, are proposed. Using these methods, the effects of impact force, gap size and friction force on the first excursion probability are examined. When the tolerance level is normalized by the maximum response of the secondary system without impact or friction characteristics, the variation of the first excursion probability is very small for various values of the natural period. In order to examine the effectiveness of the proposed methods, the obtained results are compared with those obtained by the simulation method. Some estimation methods for the maximum response of the secondary system with nonlinear characteristics have been developed. (author)

  19. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    Science.gov (United States)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is also present in their probability tomography results. We therefore use some rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result which is used for extracting a priori information, and then incorporate this information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Synthetic magnetic examples with and without a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

  20. Concise method for evaluating the probability distribution of the marginal cost of power generation

    International Nuclear Information System (INIS)

    Zhang, S.H.; Li, Y.Z.

    2000-01-01

    In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)

  1. LAMP-B: a Fortran program set for the lattice cell analysis by collision probability method

    International Nuclear Information System (INIS)

    Tsuchihashi, Keiichiro

    1979-02-01

    Nature of physical problem solved: LAMP-B solves an integral transport equation by the collision probability method for a wide variety of lattice cell geometries: spherical, plane and cylindrical lattice cells; square and hexagonal arrays of pin rods; annular clusters and square clusters. LAMP-B produces homogenized constants for multi- and/or few-group diffusion theory programs. Method of solution: LAMP-B performs an exact numerical integration to obtain the collision probabilities. Restrictions on the complexity of the problem: Not more than 68 groups in the fast group calculation, and not more than 20 regions in the resonance integral calculation. Typical running time: It varies with the number of energy groups and the selection of the geometry. Unusual features of the program: Any constituent subprogram, or any combination of them, can be used, so that partial use of this program is possible. (author)

  2. A new method for evaluating the availability, reliability, and maintainability whatever may be the probability law

    International Nuclear Information System (INIS)

    Doyon, L.R.; CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette

    1975-01-01

    A simple method is presented for solving by computer any system model (availability, reliability, and maintainability) with intervals between failures and repair durations distributed according to any probability law, and for any maintenance policy. A matrix equation is obtained using Markov diagrams. An example is given with the solution by the APAFS program (Algorithme Pour l'Analyse de la Fiabilite des Systemes). [fr]
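
    A small sketch of the Markov-diagram idea (not the APAFS algorithm): build the transition-rate matrix of a single repairable component and solve for its steady-state availability; the failure and repair rates are invented.

```python
# Hedged sketch: steady-state availability of a single repairable component
# from a Markov transition-rate matrix (up/down states). This illustrates the
# matrix-equation idea only; rates are invented and it is not the APAFS code.
import numpy as np

lam = 1e-3     # failure rate (per hour)
mu = 1e-1      # repair rate (per hour)

# Generator matrix Q for states [up, down]; rows sum to zero.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Stationary distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"steady-state availability = {pi[0]:.6f}")
print(f"analytic mu/(lam+mu)      = {mu / (lam + mu):.6f}")
```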

  3. Optimization of radiation therapy, III: a method of assessing complication probabilities from dose-volume histograms

    International Nuclear Information System (INIS)

    Lyman, J.T.; Wolbarst, A.B.

    1987-01-01

    To predict the likelihood of success of a therapeutic strategy, one must be able to assess the effects of the treatment upon both diseased and healthy tissues. This paper proposes a method for determining the probability that a healthy organ that receives a non-uniform distribution of X-irradiation, heat, chemotherapy, or other agent will escape complications. Starting with any given dose distribution, a dose-cumulative-volume histogram for the organ is generated. This is then reduced by an interpolation scheme (involving the volume-weighting of complication probabilities) to a slightly different histogram that corresponds to the same overall likelihood of complications, but which contains one less step. The procedure is repeated, one step at a time, until there remains a final, single-step histogram, for which the complication probability can be determined. The formalism makes use of a complication response function C(D, V) which, for the given treatment schedule, represents the probability of complications arising when the fraction V of the organ receives dose D and the rest of the organ gets none. Although the data required to generate this function are sparse at present, it should be possible to obtain the necessary information from in vivo and clinical studies. Volume effects are taken explicitly into account in two ways: the precise shape of the patient's histogram is employed in the calculation, and the complication response function is a function of the volume
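
    For a flavor of dose-volume-histogram reduction in code, the sketch below uses the effective-volume scheme as a stand-in; it is not the interpolation scheme proposed in this paper, and the histogram values and the volume parameter are invented.

```python
# Hedged sketch: a DVH-reduction calculation. This uses an effective-volume
# scheme as a stand-in illustration of reducing a dose-volume histogram to a
# single equivalent step; it is NOT the paper's interpolation scheme.
# The histogram and the volume-effect parameter n are invented.
import numpy as np

dose = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # Gy, differential DVH bins
vol = np.array([0.30, 0.25, 0.20, 0.15, 0.10])    # fractional organ volumes
n = 0.5                                            # volume-effect parameter

d_ref = dose.max()
v_eff = np.sum(vol * (dose / d_ref) ** (1.0 / n))  # equivalent fraction at d_ref
print(f"effective volume at {d_ref:.0f} Gy: {v_eff:.3f}")
```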

  4. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude

  5. Deriving the probability of a linear opinion pooling method being superior to a set of alternatives

    International Nuclear Information System (INIS)

    Bolger, Donnacha; Houlding, Brett

    2017-01-01

    Linear opinion pools are a common method for combining a set of distinct opinions into a single succinct opinion, often to be used in a decision making task. In this paper we consider a method, termed the Plug-in approach, for determining the weights to be assigned in this linear pool, in a manner that can be deemed as rational in some sense, while incorporating multiple forms of learning over time into its process. The environment that we consider is one in which every source in the pool is herself a decision maker (DM), in contrast to the more common setting in which expert judgments are amalgamated for use by a single DM. We discuss a simulation study that was conducted to show the merits of our technique, and demonstrate how theoretical probabilistic arguments can be used to exactly quantify the probability of this technique being superior (in terms of a probability density metric) to a set of alternatives. Illustrations are given of simulated proportions converging to these true probabilities in a range of commonly used distributional cases. - Highlights: • A novel context for combination of expert opinion is provided. • A dynamic reliability assessment method is stated, justified by properties and a data study. • The theoretical grounding underlying the data-driven justification is explored. • We conclude with areas for expansion and further relevant research.
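
    A minimal sketch of a linear opinion pool itself; the weights here are fixed for illustration and are not derived by the Plug-in approach described in the paper.

```python
# Hedged sketch: a linear opinion pool combining expert probability
# distributions over the same discrete outcomes. The expert opinions and
# weights are invented; the paper's Plug-in weighting scheme is not reproduced.
import numpy as np

# Each row: one expert's probability distribution over three outcomes.
opinions = np.array([[0.7, 0.2, 0.1],
                     [0.5, 0.3, 0.2],
                     [0.6, 0.3, 0.1]])
weights = np.array([0.5, 0.3, 0.2])       # must be nonnegative and sum to 1

pooled = weights @ opinions               # convex combination of distributions
assert np.isclose(pooled.sum(), 1.0)
print("pooled distribution:", pooled)
```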

  6. Understanding common statistical methods, Part I: descriptive methods, probability, and continuous data.

    Science.gov (United States)

    Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A

    2011-01-01

    Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.

  7. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
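
    A hedged sketch of the best-performing strategy described above: fit a lognormal cumulative distribution to the cumulative proportion of propagules retrieved by each sampling time using non-linear least squares; the retention times and sampling intervals are simulated assumptions.

```python
# Hedged sketch: fitting a lognormal distribution to interval-censored
# retention times by least squares on the empirical cumulative distribution
# (the fitting strategy found to perform best above). Data are simulated.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

rng = np.random.default_rng(0)
true_times = lognorm.rvs(s=0.6, scale=8.0, size=300, random_state=rng)

# Observations are censored into pre-established sampling intervals (hours).
edges = np.array([0, 2, 4, 8, 12, 24, 48])
counts, _ = np.histogram(true_times, bins=edges)
cum_prop = np.cumsum(counts) / counts.sum()        # empirical CDF at upper edges

def lognorm_cdf(t, sigma, scale):
    return lognorm.cdf(t, s=sigma, scale=scale)

popt, _ = curve_fit(lognorm_cdf, edges[1:], cum_prop, p0=[1.0, 5.0])
print(f"fitted sigma = {popt[0]:.2f}, fitted scale (median) = {popt[1]:.2f}")
```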

  8. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5

  9. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)

  10. An Alternative Method to Compute the Bit Error Probability of Modulation Schemes Subject to Nakagami-m Fading

    Directory of Open Access Journals (Sweden)

    Madeiro Francisco

    2010-01-01

    Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
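
    As a quick numerical cross-check of the kind of result such closed-form expressions describe (not the paper's method itself), here is a Monte Carlo bit-error-rate simulation for BPSK over a Nakagami-m fading channel; the modulation choice, the value of m, and the SNR grid are assumptions.

```python
# Hedged sketch: Monte Carlo bit-error rate of BPSK over a Nakagami-m fading
# channel (a numerical cross-check, not the closed-form method of the paper).
# Modulation choice, m, and the SNR grid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, omega = 2.0, 1.0                 # Nakagami-m shape and spread parameters
n_bits = 200000

for snr_db in (0, 5, 10, 15):
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = 2 * bits - 1                                  # BPSK symbols +/-1
    h = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n_bits))  # fading gain
    noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n_bits)
    r = h * s + noise                                 # coherent reception
    ber = np.mean((r > 0).astype(int) != bits)
    print(f"SNR = {snr_db:2d} dB -> simulated BER = {ber:.4f}")
```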

  11. Survival chance in papillary thyroid cancer in Hungary: individual survival probability estimation using the Markov method

    International Nuclear Information System (INIS)

    Esik, Olga; Tusnady, Gabor; Daubner, Kornel; Nemeth, Gyoergy; Fuezy, Marton; Szentirmay, Zoltan

    1997-01-01

    Purpose: The typically benign, but occasionally rapidly fatal clinical course of papillary thyroid cancer has raised the need for individual survival probability estimation, to tailor the treatment strategy exclusively to a given patient. Materials and methods: A retrospective study was performed on 400 papillary thyroid cancer patients with a median follow-up time of 7.1 years to establish a clinical database for uni- and multivariate analysis of the prognostic factors related to survival (Kaplan-Meier product limit method and Cox regression). For a more precise prognosis estimation, the effect of the most important clinical events were then investigated on the basis of a Markov renewal model. The basic concept of this approach is that each patient has an individual disease course which (besides the initial clinical categories) is affected by special events, e.g. internal covariates (local/regional/distant relapses). On the supposition that these events and the cause-specific death are influenced by the same biological processes, the parameters of transient survival probability characterizing the speed of the course of the disease for each clinical event and their sequence were determined. The individual survival curves for each patient were calculated by using these parameters and the independent significant clinical variables selected from multivariate studies, summation of which resulted in a mean cause-specific survival function valid for the entire group. On the basis of this Markov model, prediction of the cause-specific survival probability is possible for extrastudy cases, if it is supposed that the clinical events occur within new patients in the same manner and with the similar probability as within the study population. Results: The patient's age, a distant metastasis at presentation, the extent of the surgical intervention, the primary tumor size and extent (pT), the external irradiation dosage and the degree of TSH suppression proved to be

  12. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    International Nuclear Information System (INIS)

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort with only a minor loss of accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann transport equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, while the capability of dealing with complicated geometries is preserved since the same ray-tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angle-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy.

  13. Improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1986-01-01

    An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. Expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for the integration of the one-energy-group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use an effective Gaussian integration over the other. By choosing a convenient distribution of space points in the fuel and moderator, the transport matrix calculation and the cell reaction rate integration were condensed. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for the calculation of the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)

  14. A prototype method for diagnosing high ice water content probability using satellite imager data

    Science.gov (United States)

    Yost, Christopher R.; Bedka, Kristopher M.; Minnis, Patrick; Nguyen, Louis; Strapp, J. Walter; Palikonda, Rabindra; Khlopenkov, Konstantin; Spangenberg, Douglas; Smith, William L., Jr.; Protat, Alain; Delanoe, Julien

    2018-03-01

    Recent studies have found that ingestion of high mass concentrations of ice particles in regions of deep convective storms, with radar reflectivity considered safe for aircraft penetration, can adversely impact aircraft engine performance. Previous aviation industry studies have used the term high ice water content (HIWC) to define such conditions. Three airborne field campaigns were conducted in 2014 and 2015 to better understand how HIWC is distributed in deep convection, both as a function of altitude and proximity to convective updraft regions, and to facilitate development of new methods for detecting HIWC conditions, in addition to many other research and regulatory goals. This paper describes a prototype method for detecting HIWC conditions using geostationary (GEO) satellite imager data coupled with in situ total water content (TWC) observations collected during the flight campaigns. Three satellite-derived parameters were determined to be most useful for determining HIWC probability: (1) the horizontal proximity of the aircraft to the nearest overshooting convective updraft or textured anvil cloud, (2) tropopause-relative infrared brightness temperature, and (3) daytime-only cloud optical depth. Statistical fits between collocated TWC and GEO satellite parameters were used to determine the membership functions for the fuzzy logic derivation of HIWC probability. The products were demonstrated using data from several campaign flights and validated using a subset of the satellite-aircraft collocation database. The daytime HIWC probability was found to agree quite well with TWC time trends and identified extreme TWC events with high probability. Discrimination of HIWC was more challenging at night with IR-only information. The products show the greatest capability for discriminating TWC ≥ 0.5 g m-3. Product validation remains challenging due to vertical TWC uncertainties and the typically coarse spatio-temporal resolution of the GEO data.

  15. Probability of identification (POI): a statistical model for the validation of qualitative botanical identification methods

    Science.gov (United States)

    A qualitative botanical identification method (BIM) is an analytical procedure which returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) mate...

  16. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with a gamma prior are used for failure rate estimation, and binomial data with a beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative, more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
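    As a hedged illustration of the prior-moment-matching idea behind approaches of this kind (not the PREB algorithm itself), the Python sketch below fits a gamma prior to plant-specific crude failure rates by matching its first two moments and then applies the conjugate gamma-Poisson update. The failure counts and exposure times are invented, and this naive moment matching ignores the Poisson sampling variability that a robust empirical Bayes treatment would correct for.

```python
import numpy as np

def gamma_prior_by_moment_matching(failures, exposure_times):
    """Fit a Gamma(alpha, beta) prior to plant-specific crude failure rates by
    matching its mean and variance to the sample mean and variance of those
    rates. Naive illustration only: the full PREB procedure also corrects for
    the Poisson sampling variability hidden in the crude rates."""
    x = np.asarray(failures, dtype=float)
    t = np.asarray(exposure_times, dtype=float)
    crude = x / t                                # crude rate of each unit/plant
    m, v = crude.mean(), crude.var(ddof=1)
    beta = m / v                                 # Gamma: mean = a/b, var = a/b**2
    alpha = m * beta
    return alpha, beta

def posterior_mean_rates(failures, exposure_times, alpha, beta):
    """Conjugate gamma-Poisson update: plant-specific posterior mean rates."""
    return (alpha + np.asarray(failures, dtype=float)) / \
           (beta + np.asarray(exposure_times, dtype=float))

# Hypothetical failure counts and exposure times (years) for four units.
x, t = [0, 1, 3, 2], [10.0, 12.5, 20.0, 15.0]
a, b = gamma_prior_by_moment_matching(x, t)
print("prior shape/rate:", a, b)
print("posterior mean rates:", posterior_mean_rates(x, t, a, b))
```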

  17. Transmission probability method for solving neutron transport equation in three-dimensional triangular-z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Liu Guoming [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)], E-mail: gmliusy@gmail.com; Wu Hongchun; Cao Liangzhi [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2008-09-15

    This paper presents a transmission probability method (TPM) to solve the neutron transport equation in three-dimensional triangular-z geometry. The source within the mesh is assumed to be spatially uniform and isotropic. At the mesh surface, the constant and the simplified P1 approximations are invoked for the anisotropic angular flux distribution. Based on this model, a code TPMTDT is encoded. It was verified by three 3D Takeda benchmark problems, of which the first two are in XYZ geometry and the last one is in hexagonal-z geometry, and by an unstructured geometry problem. The results of the present method agree well with those of the Monte Carlo method and the spherical harmonics (PN) method.

  18. The Use of Conditional Probability Integral Transformation Method for Testing Accelerated Failure Time Models

    Directory of Open Access Journals (Sweden)

    Abdalla Ahmed Abdel-Ghaly

    2016-06-01

    Full Text Available This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness of fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in the case of the exponential and lognormal distributions. Finally, a real life example is provided to illustrate the application of the proposed procedure.

  19. Evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity.

    Science.gov (United States)

    Du, Yuanwei; Guo, Yubin

    2015-01-01

    The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used in training Bayesian networks in order to help predict or diagnose diseases, but the CPT in Bayesian networks is usually irrational or ineffective because realistic constraints are ignored, especially in multimorbidity. In order to solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, based on which an evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity is presented step by step. A multimorbidity numerical example is used to demonstrate the method and show its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his/her knowledge or experience. Our method is more effective than existing methods for extracting expert inference data accurately, and the data are fused effectively for constructing CPTs in a Bayesian network of multimorbidity.

  20. PREDICTION OF RESERVOIR FLOW RATE OF DEZ DAM BY THE PROBABILITY MATRIX METHOD

    Directory of Open Access Journals (Sweden)

    Mohammad Hashem Kanani

    2012-12-01

    Full Text Available The data collected from the operation of existing storage reservoirs could offer valuable information for the better allocation and management of fresh water for future use and for mitigating drought effects. In this paper the long-term water rate prediction for the Dez reservoir (IRAN) is presented using the probability matrix method. Data are analyzed to find the probability matrix of water rates in the Dez reservoir based on the history of annual water entrance during the past and present years (40 years). The algorithm developed covers both the overflow and non-overflow conditions in the reservoir. Results of this study show that in non-overflow conditions the most exigent case is equal to 75%. This means that if the reservoir is empty (the stored water is less than 100 MCM) this year, it will also be empty with 75% probability next year. The stored water in the reservoir will be less than 300 MCM with 85% probability next year if the reservoir is empty this year. This percentage decreases to 70% next year if the water in the reservoir is less than 300 MCM this year. The percentage also decreases to 5% next year if the reservoir is full this year. In overflow conditions the most exigent case is again equal to 75%. The reservoir volume will be less than 150 MCM with 90% probability next year if it is empty this year. This percentage decreases to 70% if its water volume is less than 300 MCM and to 55% if the water volume is less than 500 MCM this year. The results also show that if the probability matrix of water rates to a reservoir is multiplied by itself repeatedly, it converges to a constant probability matrix, which can be used to predict the long-term water rate of the reservoir. In other words, the probability matrix of the series of water rates converges to a steady probability matrix in the course of time, which reflects the hydrological behavior of the watershed and can easily be used for the long-term prediction of water storage in the downstream reservoirs.
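    The convergence property described above is easy to reproduce numerically: raising a row-stochastic annual transition matrix to a high power yields a matrix whose rows all approach the same stationary distribution. The sketch below uses an invented three-state matrix (empty/medium/full), not the Dez reservoir data.

```python
import numpy as np

# Hypothetical annual transition matrix between reservoir storage states.
P = np.array([
    [0.75, 0.15, 0.10],   # empty  -> empty / medium / full
    [0.30, 0.50, 0.20],   # medium -> ...
    [0.05, 0.40, 0.55],   # full   -> ...
])

# Raising P to a high power approximates the constant (steady-state) matrix:
# every row converges to the same long-run state distribution.
Pn = np.linalg.matrix_power(P, 50)
print(np.round(Pn, 4))
```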

  1. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    Science.gov (United States)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than the brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.

  2. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    Energy Technology Data Exchange (ETDEWEB)

    Wampler, William R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Myers, Samuel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Modine, Normand A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  3. Application of perturbation theory methods to nuclear data uncertainty propagation using the collision probability method

    International Nuclear Information System (INIS)

    Sabouri, Pouya

    2013-01-01

    This thesis presents a comprehensive study of sensitivity/uncertainty analysis for reactor performance parameters (e.g. the k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data Files and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our developed methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insight into the underlying physical phenomena associated with the formalisms used. (author)

  4. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    International Nuclear Information System (INIS)

    Shafii, Mohammad Ali; Meidianti, Rahma; Wildian,; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-01-01

    Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated using a non-flat flux approach. This means that the neutron flux is considered to vary from point to point within the nuclear fuel cell, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational calculation

  5. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian,, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)

    2014-09-30

    Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated using a non-flat flux approach. This means that the neutron flux is considered to vary from point to point within the nuclear fuel cell, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational calculation.

  6. Transmission probability method based on triangle meshes for solving unstructured geometry neutron transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Wu Hongchun [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)]. E-mail: hongchun@mail.xjtu.edu.cn; Liu Pingping [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Zhou Yongqiang [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Cao Liangzhi [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)

    2007-01-15

    In advanced reactors, fuel assemblies or cores with unstructured geometry are frequently used, and the transmission probability method (TPM) has been widely used to calculate such fuel assemblies. However, rectangular or hexagonal meshes are mainly used in TPM codes for normal core structures, whereas triangular meshes are most useful for representing complicated unstructured geometries. Even though the finite element method and the Monte Carlo method are very good at solving unstructured geometry problems, they are very time consuming. We therefore developed a TPM code based on triangular meshes. The code was applied to a hybrid fuel geometry and compared with the results of the MCNP code and other codes; the results were consistent with each other. The TPM with triangular meshes is thus expected to be applicable to arbitrary two-dimensional fuel assemblies.

  7. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, the sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
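    For readers unfamiliar with the decision rule, the sketch below implements Wald's classical SPRT on a stream of per-sample log-likelihood ratios, with thresholds set from target error rates. The Gaussian example at the end is an invented stand-in for BCI feature likelihoods, not the power projective features used in the study.

```python
import numpy as np

def sprt(log_likelihood_ratios, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on a stream of per-sample
    log-likelihood ratios log[p(x|H1)/p(x|H0)]. Returns the decision and the
    number of samples consumed; thresholds follow Wald's approximations."""
    upper = np.log((1 - beta) / alpha)   # accept H1 once the sum exceeds this
    lower = np.log(beta / (1 - alpha))   # accept H0 once the sum falls below this
    s = 0.0
    n = 0
    for llr in log_likelihood_ratios:
        n += 1
        s += llr
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n

# Toy usage: H0 is N(0,1), H1 is N(1,1); data drawn under H1, per-sample LLR = x - 0.5.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=200)
decision, n_used = sprt(x - 0.5)
print(decision, "after", n_used, "samples")
```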

  8. Chronobiological methods of human body self-regulation reserve evaluation

    Directory of Open Access Journals (Sweden)

    Sergey N. Zaguskin

    2013-05-01

    Full Text Available Aims: Chronodiagnostic methods for evaluating the reserve and unfavourable responses of human cardiac function under prolonged stress load. Materials and methods: 24-h R–R interval recordings from Holter ECG monitoring and 1-h IPI and RespI recordings of healthy young and elderly subjects, post-MI patients, subjects suffering from chronic cerebral ischemia leading to cognitive decline, and healthy subjects following post-stress load, as well as R–R interval recordings from the AHA ECG database of heart failure and AF. Chronodiagnostics used the non-linear symbolic dynamics method and the redundancy quotient of the ECG IPI, RespI and R–R intervals; a differential temperature survey to evaluate cellular immunity; and biocontrolled laser therapy. Results: A reduction in the self-regulation reserve of the body's oxygen transfer systems and an increase in the probability of an unfavourable response under stress load are accompanied by increased amplitude and fluctuation of the redundancy quotient in the ECG IPI, RespI and R–R intervals, as well as increased hierarchical desynchronosis with dominating sympathicotonia and vagotonia, decreased cellular immunity, and a reduced rate spectrum of the ECG IPI and R–R intervals. Conclusion: The symbolic dynamics method distinguishes between age-related and abnormal changes in the hierarchy of cardiac rhythms. Increased amplitude and fluctuation of the redundancy quotient indicate increased control intensity of the body's oxygen transfer systems and predict a reduction in the self-regulation reserve of cardiac rhythms and the probability of an unfavourable response.

  9. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    Science.gov (United States)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using the results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service-induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size, as well as the 90/95 percent crack length, vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimation (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
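    As a minimal illustration of MLE fitting of a POD curve to hit/miss data (a log-logistic model in log crack size, without the confidence-bound machinery discussed above), the following Python sketch maximizes the binomial likelihood directly; function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_pod_loglogistic(crack_sizes, hits):
    """MLE fit of a log-logistic POD curve POD(a) = 1/(1 + exp(-(ln a - mu)/sigma))
    to hit/miss data (1 = detected, 0 = missed). A simplified sketch without
    confidence bounds, not a full MIL-HDBK-style analysis."""
    loga = np.log(np.asarray(crack_sizes, dtype=float))
    y = np.asarray(hits, dtype=float)

    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                 # keep sigma positive
        p = 1.0 / (1.0 + np.exp(-(loga - mu) / sigma))
        p = np.clip(p, 1e-12, 1 - 1e-12)          # numerical safety
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(nll, x0=[loga.mean(), 0.0], method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    # Crack size detected with 90% probability under the fitted model:
    a90 = np.exp(mu + sigma * np.log(9.0))
    return mu, sigma, a90
```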

  10. Collision probability method for discrete presentation of space in cylindrical cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1969-08-01

    A suitable numerical method for integration of the one-group integral transport equation is obtained by series expansion of the flux and neutron source in even powers of the radius when calculating the parameters of a cylindrically symmetric reactor cell. Separation of variables in the (x,y) plane enables analytical integration in one direction and an efficient Gauss quadrature formula in the second direction. A white boundary condition is used for determining the neutron balance. A suitable choice of the spatial point distribution in the fuel and moderator condenses the procedure for determining the transport matrix and accelerates the convergence when calculating the absorption in the reactor cell. In comparison to other collision probability methods, the proposed procedure is a simple mathematical model which demands less computer capacity and shorter computing time

  11. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). As methods for analyzing human error, several techniques, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of the containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze an FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities, and it can be applied to any kind of operator action, including the severe accident management strategy.
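    The core of the time-based HEP estimate described above can be sketched as the probability that the required time exceeds the available time, estimated by sampling the two distributions. The lognormal parameters below are invented placeholders standing in for the MAAP/LHS results, so the number printed is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical time distributions (minutes) standing in for the code results:
# time the crew needs to complete venting vs. time available before the
# containment is challenged.
t_required  = rng.lognormal(mean=np.log(45.0), sigma=0.4, size=n)
t_available = rng.lognormal(mean=np.log(90.0), sigma=0.3, size=n)

# Non-response probability: the action is not completed within the available time.
hep = np.mean(t_required > t_available)
print(f"estimated human error probability ~ {hep:.2e}")
```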

  12. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    Science.gov (United States)

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background: Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods: We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results: 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions: After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
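    A minimal sketch of the IPW idea follows: fit a logistic propensity model for exposure, form stabilized weights, and compute a weighted odds ratio. It assumes numpy arrays and scikit-learn, uses invented variable names, and omits the censoring weights and the four model-selection procedures used in the actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_ipw_odds_ratio(X, exposed, outcome):
    """Stabilized inverse probability (of exposure) weights from a logistic
    propensity model, followed by a weighted odds ratio. Minimal illustration
    only; a real analysis would also weight for loss-to-follow-up."""
    X = np.asarray(X, dtype=float)
    exposed = np.asarray(exposed, dtype=int)
    outcome = np.asarray(outcome, dtype=float)

    ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
    p_marg = exposed.mean()                         # marginal exposure probability
    w = np.where(exposed == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

    # Weighted risk (mean outcome) in each exposure group, then the odds ratio.
    risk1 = np.average(outcome[exposed == 1], weights=w[exposed == 1])
    risk0 = np.average(outcome[exposed == 0], weights=w[exposed == 0])
    return (risk1 / (1 - risk1)) / (risk0 / (1 - risk0))
```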

  13. Tracer diffusion in an ordered alloy: application of the path probability and Monte Carlo methods

    International Nuclear Information System (INIS)

    Sato, Hiroshi; Akbar, S.A.; Murch, G.E.

    1984-01-01

    The tracer diffusion technique has been extensively utilized to investigate diffusion phenomena and has contributed a great deal to the understanding of these phenomena. However, except for self diffusion and impurity diffusion, the meaning of tracer diffusion is not yet satisfactorily understood. Here we try to extend the understanding to concentrated alloys. Our major interest is directed towards understanding the physical factors which control diffusion, through the comparison of results obtained by the Path Probability Method (PPM) and those by the Monte Carlo simulation method (MCSM). Both the PPM and the MCSM are basically in the same category of statistical mechanical approaches applicable to random processes. The advantage of the Path Probability Method in dealing with phenomena which occur in crystalline systems has been well established. However, the approximations which are inevitably introduced to make the analytical treatment tractable, although their meaning may be well established in equilibrium statistical mechanics, sometimes introduce unwarranted consequences, the origin of which is often hard to trace. On the other hand, the MCSM, which can be carried out in a parallel fashion to the PPM, provides, with care, numerically exact results. Thus a side-by-side comparison can give insight into the effect of the approximations in the PPM. It was found that in the pair approximation of the CVM, the distribution in the completely random state is regarded as homogeneous (without fluctuations), and hence the fluctuation in distribution is not well represented in the PPM. These examples thus show clearly how the comparison of analytical results with carefully carried out calculations by the MCSM guides the progress of theoretical treatments and gives insight into the mechanism of diffusion

  14. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  16. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population with standard survey methodology because the response rate is low and its members are not quite honest in their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods, such as snowball sampling, is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the influence of the chain referral in RDS tends to diminish as the sample gets bigger, and it stabilizes as the waves progress. Therefore, the final sample information can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative which can improve upon both key informant sampling and ethnographic surveys, and it should be utilized for various cases domestically as well.

  17. Probability-Based Determination Methods for Service Waiting in Service-Oriented Computing Environments

    Science.gov (United States)

    Zeng, Sen; Huang, Shuangxi; Liu, Yang

    Cooperative business processes (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances of enterprise integration and service-oriented architecture. The performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one of the key points is to try to reduce an abstract service's waiting number for its physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, M_i, and time span, τ_i, for its physical services. The determination of M_i and τ_i is based on the physical services' arriving rule and the distribution functions of their overall performance. In PBDM, the arriving probability of the physical services with the best overall performance value is a pre-defined reliability. PBDM makes thorough use of the information on the physical services' arriving rule and performance distribution functions, which will improve the computational efficiency of the scheme design and performance optimization of the collaborative business processes in service-oriented computing environments.

  18. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    Science.gov (United States)

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…

  19. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important

  20. To Measure Probable Physical Changes On The Earth During Total Solar Eclipse Using Geophysical Methods

    International Nuclear Information System (INIS)

    Gocmen, C.

    2007-01-01

    When the total solar eclipse came into question, people connected the eclipse with the earthquake of 17.08.1999. We reasoned that if any physical parameters on the Earth change during a total solar eclipse, we could measure this change, so we carried out the project 'To Measure Probable Physical Changes On The Earth During Total Solar Eclipse Using Geophysical Methods'. We made gravity, magnetic and self-potential measurements at Konya and Ankara during the total solar eclipse (29 March 2006) as well as the day before and the day after the eclipse. The measurements continued for three days, twenty-four hours a day at Konya and during daytime in Ankara. Bogazici University Kandilli Observatory provided us with magnetic values for Istanbul, and we compared them with our magnetic values. The Turkish State Meteorological Service sent us temperature and air pressure observations for the three days in Konya and Ankara. We interpreted all of them

  1. Use of probabilistic methods for estimating failure probabilities and directing ISI-efforts

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, F; Brickstad, B [University of Uppsala (Sweden)]

    1988-12-31

    Some general aspects of the role of Non-Destructive Testing (NDT) efforts on the resulting probability of core damage are discussed. A simple model for the estimation of the pipe break probability due to IGSCC is discussed. It is partly based on analytical procedures and partly on service experience from the Swedish BWR program. Estimates of the break probabilities indicate that further studies are urgently needed. It is found that the uncertainties about the initial crack configuration are large contributors to the total uncertainty. Some effects of the in-service inspection are studied and it is found that the detection probabilities influence the failure probabilities. (authors).

  2. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    Science.gov (United States)

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in

  3. Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    Science.gov (United States)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li

    2012-01-01

    To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites by detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture through analysis of remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral WorldView-2 data. The WorldView-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band difference ratios. We then use principal components analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation, with a second leave-one-out step to choose parameters, on a 9,859x23,000 subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in the analysis improved the predictive power of the provided APM.
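    The score-fusion step described above can be illustrated with a simple grid search over convex combinations of the two score sets, scored by area under the ROC curve; the inputs are hypothetical arrays, not the Ft. Irwin data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def best_convex_combination(apm_scores, wv2_scores, labels, n_grid=101):
    """Scan convex combinations w*APM + (1-w)*WorldView-2 and return the weight
    giving the highest ROC AUC against known site/non-site labels. Purely an
    illustration of the fusion step, with invented input names."""
    apm_scores = np.asarray(apm_scores, dtype=float)
    wv2_scores = np.asarray(wv2_scores, dtype=float)
    best_w, best_auc = 0.0, -np.inf
    for w in np.linspace(0.0, 1.0, n_grid):
        combined = w * apm_scores + (1.0 - w) * wv2_scores
        auc = roc_auc_score(labels, combined)
        if auc > best_auc:
            best_w, best_auc = w, auc
    return best_w, best_auc
```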

  4. Method for assessing the probability of accumulated doses from an intermittent source using the convolution technique

    International Nuclear Information System (INIS)

    Coleman, J.H.

    1980-10-01

    A technique is discussed for computing the probability distribution of the accumulated dose received by an arbitrary receptor resulting from several single releases from an intermittent source. The probability density of the accumulated dose is the convolution of the probability densities of the doses from the individual intermittent releases. Emissions are not assumed to be constant over the brief release period. The fast Fourier transform is used in the calculation of the convolution
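    A minimal numerical sketch of the convolution step is shown below: given single-release dose densities sampled on a common grid, their FFT-based linear convolution (with zero padding to avoid circular wrap-around) approximates the density of the accumulated dose. The grid spacing and inputs are left to the user; this illustrates the technique, not the paper's code.

```python
import numpy as np

def convolve_dose_pdfs(pdfs, dx):
    """Convolve a list of single-release dose densities, each sampled on a grid
    with common spacing dx, to obtain the density of the accumulated dose.
    Zero padding to the next power of two avoids circular wrap-around."""
    n_out = sum(len(p) for p in pdfs) - len(pdfs) + 1   # length of the linear convolution
    n_fft = int(2 ** np.ceil(np.log2(n_out)))
    acc = np.ones(n_fft, dtype=complex)
    for p in pdfs:
        acc *= np.fft.fft(p, n_fft)                     # convolution = product of spectra
    # Each extra convolution of sampled densities carries one factor of dx.
    result = np.fft.ifft(acc).real[:n_out] * dx ** (len(pdfs) - 1)
    return np.clip(result, 0.0, None)                   # clip tiny negative round-off
```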

  5. Numerical Simulation of Tubular Pumping Systems with Different Regulation Methods

    Science.gov (United States)

    Zhu, Honggeng; Zhang, Rentian; Deng, Dongsheng; Feng, Xusong; Yao, Linbi

    2010-06-01

    Since the flow in tubular pumping systems is basically along the axial direction and passes symmetrically through the impeller, largely satisfying the basic hypotheses in the design of the impeller and giving higher pumping system efficiency in comparison with vertical pumping systems, they are widely applied in low-head pumping engineering. In a pumping station, fluctuation of the water levels in the sump and discharge pool is very common, and most of the time the pumping system runs under off-design conditions. Hence, the operation of the pump has to be flexibly regulated to meet the required flow rates, and the selection of the regulation method is as important as that of the pump for reducing operation cost and achieving economic operation. In this paper, the three-dimensional time-averaged Navier-Stokes equations are closed by the RNG κ-ε turbulence model, and two tubular pumping systems with different regulation methods, equipped with the same pump model but with different designed system structures, are numerically simulated to predict the pumping system performances, analyze the influence of the regulation device and help designers make the final decision in the selection of design schemes. The computed results indicate that the pumping system with a blade-adjusting device needs a longer suction box, and the increased hydraulic loss will lower the pumping system efficiency by about 1.5%. The pumping system with a permanent magnet motor, by means of variable speed regulation, obtains higher system efficiency, partly because of the shorter suction box and partly because of the different structural design. Nowadays, variable speed regulation is realized by a variable frequency device, the energy consumption of which is about 3-4% of the output power of the motor. Hence, when the efficiency of the variable frequency device is considered, the total pumping system efficiency will probably be lower.

  6. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
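    For the plain gamma distribution, MoLC reduces to two well-known relations, E[ln X] = ψ(k) + ln θ and Var[ln X] = ψ'(k), which the sketch below solves numerically. It is a simplified illustration for the gamma case only, not the generalized gamma or K distributions analyzed in the paper.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

def molc_gamma(samples):
    """Method-of-log-cumulants fit of a gamma distribution (shape k, scale theta)
    from the sample mean and variance of log X. Illustration only."""
    logx = np.log(np.asarray(samples, dtype=float))
    c1 = logx.mean()                      # first log-cumulant
    c2 = logx.var()                       # second log-cumulant
    # Solve trigamma(k) = c2; trigamma is strictly decreasing, so a sign change
    # is bracketed on this wide interval for any reasonable c2.
    k = brentq(lambda kk: polygamma(1, kk) - c2, 1e-6, 1e6)
    theta = np.exp(c1 - digamma(k))
    return k, theta

# Quick check on synthetic data with known parameters.
rng = np.random.default_rng(1)
print(molc_gamma(rng.gamma(shape=3.0, scale=2.0, size=50_000)))
```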

  7. Exact asymptotics of probabilities of large deviations for Markov chains: the Laplace method

    Energy Technology Data Exchange (ETDEWEB)

    Fatalov, Vadim R [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2011-08-31

    We prove results on exact asymptotics as n → ∞ for the expectations E_a exp{-θ Σ_{k=0}^{n-1} g(X_k)} and probabilities P_a{(1/n) Σ_{k=0}^{n-1} g(X_k) < d}, where {X_n}, n ≥ 1, is the corresponding random walk on R, g(x) is a positive continuous function satisfying certain conditions, and d > 0, θ > 0, a ∈ R are fixed numbers. Our results are obtained using a new method which is developed in this paper: the Laplace method for the occupation time of discrete-time Markov chains. For g(x) one can take |x|^p, log(|x|^p + 1) with p > 0, |x| log(|x| + 1), or e^{α|x|} - 1 with 0 < α < 1/2, x ∈ R, for example. We give a detailed treatment of the case g(x) = |x|, using Bessel functions to make explicit calculations.

  8. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    Science.gov (United States)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified in order to recognize the various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces a smart approach by which decision-makers will be able to find the overall reliability of the tunnel support system before selecting the final scheme of the lining system. In line with this research focus, engineering reliability, which is a branch of statistics and probability, is appropriately applied to the field, and much effort has been made to use it in tunneling while investigating the reliability of the lining support system for the tunnel structure. Therefore, reliability analysis for evaluating the tunnel support performance is the main idea used in this research. Decomposition approaches are used for producing the system block diagram and determining the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Considering the idea of a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for different structural subsystems and the results of numerical analyses are obtained in

  9. New results to BDD truncation method for efficient top event probability calculation

    International Nuclear Information System (INIS)

    Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang

    2012-01-01

    A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem since its memory consumption is very high. Recently, in order to solve a large reliability problem within limited computational resources, Jung presented an efficient method to maintain a small BDD size by BDD truncation during a BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. Then, a more efficient truncation algorithm is proposed, which can generate a truncated BDD with smaller size and an approximate TEP with smaller truncation error. Empirical results showed this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The so-called ideal features of this paper would be that, with the decrease of truncation limits, the size of the truncated BDD converges to the size of the exact BDD, but should never be larger than the exact BDD.
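    To make the notion of an exact TEP concrete, the toy sketch below computes it by brute-force Shannon expansion over independent basic events for a two-cut-set fault tree; the probabilities and the tree are hypothetical, and the code only illustrates the quantity a BDD evaluates efficiently, not the truncation algorithm discussed above.

```python
# Hypothetical basic event probabilities and an illustrative fault tree.
P = {"A": 0.01, "B": 0.02, "C": 0.05, "D": 0.03}
EVENTS = sorted(P)

def top(assign):
    """TOP = (A AND B) OR (C AND D) -- a small example structure function."""
    return (assign["A"] and assign["B"]) or (assign["C"] and assign["D"])

def tep(i=0, assign=()):
    """Exact top event probability by Shannon expansion: condition on each
    basic event occurring or not, weighting the branches by its probability."""
    if i == len(EVENTS):
        return 1.0 if top(dict(assign)) else 0.0
    e = EVENTS[i]
    return P[e] * tep(i + 1, assign + ((e, True),)) + \
           (1 - P[e]) * tep(i + 1, assign + ((e, False),))

# Expected: 0.01*0.02 + 0.05*0.03 - (0.01*0.02)*(0.05*0.03) = 0.0016997
print(tep())
```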

  10. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

    A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding and water, or homogenized structural material. The cells are divided into zones which are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is achieved by making additional assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark pr

  11. Estimating factors influencing the detection probability of semiaquatic freshwater snails using quadrat survey methods

    Science.gov (United States)

    Roesler, Elizabeth L.; Grabowski, Timothy B.

    2018-01-01

    Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with increasing litter depth and with fewer shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated with the number of shells in the quadrat and negatively correlated with the number of times a quadrat was searched. The results indicate that quadrat surveys likely underrepresent true abundance but accurately determine presence or absence. Understanding the detection and accuracy of elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.

  12. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    International Nuclear Information System (INIS)

    Bakosi, Jozsef; Ristorcelli, Raymond J.

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  13. Neutron Flux Interpolation with Finite Element Method in the Nuclear Fuel Cell Calculation using Collision Probability Method

    International Nuclear Information System (INIS)

    Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti

    2010-01-01

    The design and analysis of next-generation nuclear reactors require comprehensive computations that are best executed on high-performance computing systems. The flat flux (FF) approach is a common approach for solving the integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even when the neutron cross section is assumed equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region differs depending on the selected interpolation model. In this study, linear interpolation using the Finite Element Method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculation results are in good agreement with those from the SRAC code. The effects of the mesh on k_eff and other parameters are also investigated.

  14. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  15. Resveratrol enhances airway surface liquid depth in sinonasal epithelium by increasing cystic fibrosis transmembrane conductance regulator open probability.

    Directory of Open Access Journals (Sweden)

    Shaoyan Zhang

    Full Text Available Chronic rhinosinusitis engenders enormous morbidity in the general population, and is often refractory to medical intervention. Compounds that augment mucociliary clearance in airway epithelia represent a novel treatment strategy for diseases of mucus stasis. A dominant fluid and electrolyte secretory pathway in the nasal airways is governed by the cystic fibrosis transmembrane conductance regulator (CFTR). The objectives of the present study were to test resveratrol, a strong potentiator of CFTR channel open probability, in preparation for a clinical trial of mucociliary activators in human sinus disease. Primary sinonasal epithelial cells, immortalized bronchoepithelial cells (wild type and F508del CFTR), and HEK293 cells expressing exogenous human CFTR were investigated by Ussing chamber as well as patch clamp technique under non-phosphorylating conditions. Effects on airway surface liquid depth were measured using confocal laser scanning microscopy. Impact on CFTR gene expression was measured by quantitative reverse transcriptase polymerase chain reaction. Resveratrol is a robust CFTR channel potentiator in numerous mammalian species. The compound also activated temperature corrected F508del CFTR and enhanced CFTR-dependent chloride secretion in human sinus epithelium ex vivo to an extent comparable to the recently approved CFTR potentiator, ivacaftor. Using inside out patches from apical membranes of murine cells, resveratrol stimulated an ~8 picosiemens chloride channel consistent with CFTR. This observation was confirmed in HEK293 cells expressing exogenous CFTR. Treatment of sinonasal epithelium resulted in a significant increase in airway surface liquid depth (in µm: 8.08 ± 1.68 vs. 6.11 ± 0.47, control; p<0.05). There was no increase in CFTR mRNA. Resveratrol is a potent chloride secretagogue from the mucosal surface of sinonasal epithelium, and hydrates airway surface liquid by increasing CFTR channel open probability. The foundation for a

  16. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
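
    As a rough illustration of the selection loop described above, the sketch below runs a PCA-plus-bootstrap model search on synthetic data. It substitutes binary logistic regression and an exhaustive search over small predictor subsets for the paper's ordinal regression and genetic algorithm, so the data, variable names, and subset sizes are all placeholders, not the authors' implementation.

```python
from itertools import combinations
from collections import Counter
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 345
dvh = rng.normal(size=(n, 40))                  # stand-in bladder DVH bins
clinical = rng.normal(size=(n, 3))              # stand-in clinical factors
pcs = PCA(n_components=8).fit_transform(dvh)    # 8 principal components, as in the paper
X = np.hstack([clinical, pcs])
y = (rng.random(n) < 0.3).astype(int)           # stand-in cystitis outcome

def best_subset_by_bic(Xb, yb, max_size=2):
    """Return the predictor subset whose logistic fit minimizes the BIC."""
    best, best_bic = None, np.inf
    for k in range(1, max_size + 1):
        for cols in combinations(range(Xb.shape[1]), k):
            res = sm.Logit(yb, sm.add_constant(Xb[:, cols])).fit(disp=0)
            if res.bic < best_bic:
                best, best_bic = cols, res.bic
    return best

votes = Counter()
for _ in range(100):                            # 100 bootstrap repetitions
    idx = rng.integers(0, n, n)
    votes[best_subset_by_bic(X[idx], y[idx])] += 1
print("most frequently selected predictor set:", votes.most_common(1))
```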

  17. Assessment of climate change using methods of mathematic statistics and theory of probability

    International Nuclear Information System (INIS)

    Trajanoska, Lidija; Kaevski, Ivancho

    2004-01-01

    In simple terms, 'climate' is the average of 'weather'. The Earth's weather system is a complex machine composed of coupled sub-systems (ocean, air, land, ice and the biosphere) between which energy is exchanged. The understanding and study of climate change does not rely only on the physics of climate change but is linked to the following question: 'How can we detect change in a system that is changing all the time under its own volition?' What is even the meaning of 'change' in such a situation? If the concept of 'change' is transformed into the concept of 'significant and long-term change', this re-phrasing allows for a definition in mathematical terms. Significant change in a system becomes a measure of how large an observed change is in terms of the variability one would see under 'normal' conditions. An example is the analysis of yearly air temperatures and precipitation, as in this paper. A large amount of data is selected as representing the 'before' case and another set of data is selected as the 'after' case, and the averages of the two cases are compared. These comparisons take the form of 'hypothesis tests', in which one tests whether the hypothesis that there has been no change can be rejected. Both parametric and nonparametric methods of mathematical statistics are used. The most indicative statistics showing global change are the average, the standard deviation and the probability distribution function of the examined time series. The examined meteorological series are treated as random processes so that mathematical statistics can be applied. (Author)
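
    A minimal sketch of such a 'before vs. after' test, using synthetic yearly temperature series and both a parametric and a nonparametric test; every number below is illustrative, not climate data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(12.0, 0.6, 30)   # hypothetical yearly mean temperatures, deg C
after = rng.normal(12.5, 0.6, 30)    # hypothetical later period with a +0.5 deg C shift

t, p_t = stats.ttest_ind(before, after, equal_var=False)      # parametric (Welch) test
u, p_u = stats.mannwhitneyu(before, after, alternative="two-sided")  # nonparametric test
print(f"Welch t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")
# Small p-values reject the hypothesis of "no change" relative to natural variability.
```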

  18. A bioinformatic survey of distribution, conservation, and probable functions of LuxR solo regulators in bacteria

    Science.gov (United States)

    Subramoni, Sujatha; Florez Salcedo, Diana Vanessa; Suarez-Moreno, Zulma R.

    2015-01-01

    LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information. PMID:25759807

  19. A bioinformatic survey of distribution, conservation, and probable functions of LuxR solo regulators in bacteria.

    Science.gov (United States)

    Subramoni, Sujatha; Florez Salcedo, Diana Vanessa; Suarez-Moreno, Zulma R

    2015-01-01

    LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information.

  20. A bioinformatic survey of distribution, conservation and probable functions of LuxR solo regulators in bacteria

    Directory of Open Access Journals (Sweden)

    Sujatha Subramoni

    2015-02-01

    Full Text Available LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information.

  1. A probability evaluation method of early deterioration condition for the critical components of wind turbine generator systems

    DEFF Research Database (Denmark)

    Hu, Y.; Li, H.; Liao, X

    2016-01-01

    This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain nature of the fluctuation and intermittence of wind, early deterioration condition evaluation poses a challenge to traditional vibration-based approaches; a probability evaluation method of early deterioration condition for critical components based only on temperature characteristic parameters is therefore proposed. First, the dynamic threshold of the deterioration degree function was proposed by analyzing the operational data relating temperature and rotor speed. Second, a probability evaluation method of early deterioration condition was presented. Finally, two cases showed the validity of the proposed probability evaluation method in detecting early deterioration condition and in tracking further deterioration of the critical components.

  2. Probability of Detection Study to Assess the Performance of Nondestructive Inspection Methods for Wind Turbine Blades.

    Energy Technology Data Exchange (ETDEWEB)

    Roach, Dennis P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rice, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Paquette, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Wind turbine blades pose a unique set of inspection challenges that span from very thick and attenuative spar cap structures to porous bond lines, varying core material and a multitude of manufacturing defects of interest. The need for viable, accurate nondestructive inspection (NDI) technology becomes more important as the cost per blade, and lost revenue from downtime, grows. NDI methods must not only be able to contend with the challenges associated with inspecting extremely thick composite laminates and subsurface bond lines, but must also address new inspection requirements stemming from the growing understanding of blade structural aging phenomena. Under its Blade Reliability Collaborative program, Sandia Labs quantitatively assessed the performance of a wide range of NDI methods that are candidates for wind blade inspections. Custom wind turbine blade test specimens, containing engineered defects, were used to determine critical aspects of NDI performance including sensitivity, accuracy, repeatability, speed of inspection coverage, and ease of equipment deployment. The detection of fabrication defects helps enhance plant reliability and increase blade life while improved inspection of operating blades can result in efficient blade maintenance, facilitate repairs before critical damage levels are reached and minimize turbine downtime. The Sandia Wind Blade Flaw Detection Experiment was completed to evaluate different NDI methods that have demonstrated promise for interrogating wind blades for manufacturing flaws or in-service damage. These tests provided the Probability of Detection information needed to generate industry-wide performance curves that quantify: 1) how well current inspection techniques are able to reliably find flaws in wind turbine blades (industry baseline) and 2) the degree of improvements possible through integrating more advanced NDI techniques and procedures.

  3. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    Science.gov (United States)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
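
    A small sketch of the underlying binomial check, not of DOEPOD itself: given hit/miss results at one flaw size, it computes the one-sided 95% lower confidence bound on POD (Clopper-Pearson) and tests whether the 90/95 requirement is met. The 29-of-29 input is the classic zero-miss demonstration case; it is used here only as an example.

```python
from scipy import stats

def pod_lower_bound(hits, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower confidence bound on the detection probability."""
    if hits == trials:                       # degenerate all-hit case
        return (1 - confidence) ** (1 / trials)
    return stats.beta.ppf(1 - confidence, hits, trials - hits + 1)

hits, trials = 29, 29                        # hypothetical hit/miss tally at one flaw size
lb = pod_lower_bound(hits, trials)
print(f"95% lower bound on POD = {lb:.3f}  ->  90/95 requirement met: {lb >= 0.90}")
```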

  4. The probability estimate of the defects of the asynchronous motors based on the complex method of diagnostics

    Science.gov (United States)

    Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.

    2017-10-01

    This article is devoted to the development of a method for estimating the probability of failure of an asynchronous motor as part of an electric drive with a frequency converter. The proposed method is based on a comprehensive diagnostic approach using vibration and electrical characteristics that takes into account the quality of the supply network and the operating conditions. The developed diagnostic system increases the accuracy and quality of diagnoses by determining the probability of failure-free operation of the electromechanical equipment when its parameters deviate from the norm. The system uses artificial neural networks (ANNs). The outputs of the system for estimating the technical condition are probability diagrams of the technical state and a quantitative evaluation of the defects of the asynchronous motor and its components.

  5. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure mechanical behaviour is modelled numerically in an attempt to reproduce the real response and numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is, first, proved on two academic applications. It is then conducted for assessing the reliability of a challenging aerospace case study submitted to fatigue.
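
    The sketch below illustrates only the importance-sampling half of the approach on a toy linear limit state, with the design point assumed known from a FORM analysis; the adaptive Kriging surrogate that gives AK-IS its efficiency is omitted and the limit-state function is evaluated directly, so this is a schematic, not the paper's algorithm.

```python
import numpy as np
from scipy import stats

def g(u):
    """Toy limit state in standard normal space; failure when g <= 0 (beta = 3.5)."""
    return 3.5 - (u[:, 0] + u[:, 1]) / np.sqrt(2)

u_star = np.array([3.5 / np.sqrt(2), 3.5 / np.sqrt(2)])   # assumed FORM design point
rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=(n, 2)) + u_star                       # sample centred at the design point
# likelihood ratio between the original standard normal density and the shifted one
w = stats.norm.pdf(u).prod(axis=1) / stats.norm.pdf(u - u_star).prod(axis=1)
pf = np.mean((g(u) <= 0) * w)
print(f"importance-sampling estimate: {pf:.2e}, exact: {stats.norm.cdf(-3.5):.2e}")
```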

  6. Methods for estimating the probability of cancer from occupational radiation exposure

    International Nuclear Information System (INIS)

    1996-04-01

    The aims of this TECDOC are to present the factors which are generally accepted as being responsible for cancer induction, to examine the role of radiation as a carcinogen, to demonstrate how the probability of cancer causation by radiation may be calculated and to inform the reader of the uncertainties that are associated with the use of various risk factors and models in such calculations. 139 refs, 2 tabs
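
    As a worked illustration of the kind of calculation the TECDOC covers, the snippet below evaluates the standard probability-of-causation ratio PC = ERR/(1 + ERR) under a linear dose-response assumption; the dose and risk coefficient are illustrative placeholders, not values recommended by the report.

```python
dose_sv = 0.1                 # hypothetical occupational dose, Sv
err_per_sv = 0.5              # assumed excess relative risk per Sv for the cancer site
err = err_per_sv * dose_sv    # linear dose-response assumption
pc = err / (1.0 + err)        # probability of causation
print(f"probability of causation ~ {pc:.1%}")
```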

  7. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning, they can be used in statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction size of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and α/β value has also been simulated: considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.)
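
    A compact sketch of the combination described above, assuming the generalized-EUD form of the Kutcher et al. histogram reduction together with a per-bin NTD (EQD2) conversion and the Lyman probit; the toy DVH and all parameter values are illustrative assumptions, not the report's fitted data.

```python
import numpy as np
from scipy.stats import norm

def ntcp_lkb_ntd(doses, volumes, n_fx, alpha_beta, td50, m, n_vol):
    """Lyman NTCP after NTD conversion and gEUD-style histogram reduction."""
    d_fx = doses / n_fx                                      # dose per fraction in each DVH bin
    ntd = doses * (d_fx + alpha_beta) / (2.0 + alpha_beta)   # normalized total dose (EQD2) per bin
    geud = np.sum(volumes * ntd ** (1.0 / n_vol)) ** n_vol   # histogram reduction (gEUD form)
    return norm.cdf((geud - td50) / (m * td50))              # Lyman probit

doses = np.array([20.0, 35.0, 50.0, 60.0])    # illustrative differential DVH bin doses, Gy
vols = np.array([0.40, 0.30, 0.20, 0.10])     # fractional volumes (sum to 1)
ntcp = ntcp_lkb_ntd(doses, vols, n_fx=30, alpha_beta=3.0, td50=55.0, m=0.15, n_vol=0.7)
print(f"NTCP with 2 Gy-equivalent doses: {ntcp:.3f}")
```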

  8. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    Science.gov (United States)

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured
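
    A rough sketch of steps (1) and (2) above on a synthetic breathing trace: split the signal into cycles at upward zero crossings, bin cycles by amplitude and period, and keep any group holding more than 10% of all cycles as a "main breathing cycle". The signal and the binning resolutions are assumptions for illustration; the image reconstruction of step (3) is not shown.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
t = np.arange(0, 300, 0.1)                               # 300 s trace sampled at 10 Hz
signal = np.sin(2 * np.pi * t / 4.0) * (1 + 0.2 * rng.standard_normal(t.size))

# step 1: cycle boundaries at upward zero crossings
crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
cycles = [(t[a], t[b], signal[a:b]) for a, b in zip(crossings[:-1], crossings[1:])]

# step 2: group cycles by (amplitude, period); keep groups holding > 10% of cycles
groups = Counter()
for start, end, seg in cycles:
    amp_bin = round(float(seg.max() - seg.min()), 1)     # 0.1 amplitude resolution (assumed)
    per_bin = round(end - start)                         # 1 s period resolution (assumed)
    groups[(amp_bin, per_bin)] += 1

main = {k: v / len(cycles) for k, v in groups.items() if v / len(cycles) > 0.10}
print("main breathing patterns (amplitude, period) -> share:",
      {k: f"{v:.0%}" for k, v in main.items()})
```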

  9. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  10. α-Cut method based importance measure for criticality analysis in fuzzy probability – Based fault tree analysis

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Widodo, Surip; Tjahjono, Hendro

    2017-01-01

    Highlights: •FPFTA deals with epistemic uncertainty using fuzzy probability. •Criticality analysis is important for reliability improvement. •An α-cut method based importance measure is proposed for criticality analysis in FPFTA. •The α-cut method based importance measure utilises α-cut multiplication, α-cut subtraction, and area defuzzification technique. •Benchmarking confirm that the proposed method is feasible for criticality analysis in FPFTA. -- Abstract: Fuzzy probability – based fault tree analysis (FPFTA) has been recently developed and proposed to deal with the limitations of conventional fault tree analysis. In FPFTA, reliabilities of basic events, intermediate events and top event are characterized by fuzzy probabilities. Furthermore, the quantification of the FPFTA is based on fuzzy multiplication rule and fuzzy complementation rule to propagate uncertainties from basic event to the top event. Since the objective of the fault tree analysis is to improve the reliability of the system being evaluated, it is necessary to find the weakest path in the system. For this purpose, criticality analysis can be implemented. Various importance measures, which are based on conventional probabilities, have been developed and proposed for criticality analysis in fault tree analysis. However, not one of those importance measures can be applied for criticality analysis in FPFTA, which is based on fuzzy probability. To be fully applied in nuclear power plant probabilistic safety assessment, FPFTA needs to have its corresponding importance measure. The objective of this study is to develop an α-cut method based importance measure to evaluate and rank the importance of basic events for criticality analysis in FPFTA. To demonstrate the applicability of the proposed measure, a case study is performed and its results are then benchmarked to the results generated by the four well known importance measures in conventional fault tree analysis. The results

  11. a Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods

    Science.gov (United States)

    Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.

    2017-09-01

    Drought is one of the most powerful natural disasters, affecting many different aspects of the environment, and it is most severe in arid and semi-arid areas. Monitoring and predicting the severity of drought can be useful in managing the natural disasters caused by it. Many indices are used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX from winter 2000 to summer 2015 were created for the eastern region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into a set of qualitative data based on the state of drought (5 groups) using the SAX algorithm; then the probability matrix for the future state was created using the hidden Markov chain. The fall drought severity was predicted by fusing the probability matrix with the state of drought severity in summer 2015. The prediction is based on the likelihood of each drought state: severe drought, middle drought, normal, middle wet, and severe wet. The analysis and experimental results show that the proposed algorithm is appropriate and efficient for predicting drought using remote sensing data.
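
    A toy sketch of the fusion idea: a SAX-style five-state symbolization of a drought-index series (using equiprobable Gaussian breakpoints) followed by a first-order Markov transition matrix that yields next-season state probabilities. The SPI series and the state labels here are synthetic placeholders, not the Isfahan data or the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
spi = rng.normal(size=64)                            # stand-in seasonal SPI series
z = (spi - spi.mean()) / spi.std()
breakpoints = [-0.8416, -0.2533, 0.2533, 0.8416]     # 5 equiprobable N(0,1) bins (SAX style)
states = np.digitize(z, breakpoints)                 # 0..4: severe drought .. severe wet

transitions = np.zeros((5, 5))
for a, b in zip(states[:-1], states[1:]):
    transitions[a, b] += 1
row_sums = transitions.sum(axis=1, keepdims=True)
T = np.divide(transitions, row_sums, out=np.zeros_like(transitions), where=row_sums > 0)

labels = ["severe drought", "mid drought", "normal", "mid wet", "severe wet"]
current = states[-1]
print(f"current state: {labels[current]}")
for s, p in enumerate(T[current]):
    print(f"  P(next season = {labels[s]}) = {p:.2f}")
```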

  12. Physical method to assess a probable maximum precipitation, using CRCM data

    International Nuclear Information System (INIS)

    Beauchamp, J.

    2009-01-01

    'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the
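
    A schematic sketch of the moisture-maximization step referred to above: each extreme-event hyetograph is scaled by the ratio of maximum to observed precipitable water, here approximated from 12-hour persisting dew points with a crude exponential stand-in relation. Every number and the conversion function are illustrative assumptions, not CRCM output or the WMO procedure itself.

```python
import numpy as np

def precipitable_water_mm(dew_point_c):
    """Crude stand-in: precipitable water growing ~7% per deg C of dew point."""
    return 25.0 * 1.07 ** (dew_point_c - 20.0)

storm_hyetograph = np.array([2.0, 8.0, 15.0, 9.0, 3.0])   # mm per 6 h step (illustrative)
observed_dew_c = 18.0                                      # storm 12 h persisting dew point
max_persisting_dew_c = 24.0                                # climatological maximum value
ratio = precipitable_water_mm(max_persisting_dew_c) / precipitable_water_mm(observed_dew_c)
maximized = storm_hyetograph * ratio
print(f"maximization ratio = {ratio:.2f}")
print(f"maximized hyetograph (mm / 6 h): {np.round(maximized, 1)}")
```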

  13. Physical method to assess a probable maximum precipitation, using CRCM data

    Energy Technology Data Exchange (ETDEWEB)

    Beauchamp, J. [Univ. de Quebec, Ecole de technologie superior, Quebec (Canada)

    2009-07-01

    'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the

  14. Collision probability in two-dimensional lattice by ray-trace method and its applications to cell calculations

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro

    1985-03-01

    A series of formulations to evaluate collision probability for multi-region cells expressed by either of three one-dimensional coordinate systems (plane, sphere and cylinder) or by the general two-dimensional cylindrical coordinate system is presented. They are expressed in a suitable form to have a common numerical process named the 'Ray-Trace' method. Applications of the collision probability method to two optional treatments for the resonance absorption are presented. One is a modified table-look-up method based on the intermediate resonance approximation, and the other is a rigorous method to calculate the resonance absorption in a multi-region cell in which nearly continuous energy spectra of the resonance neutron range can be solved and interaction effect between different resonance nuclides can be evaluated. Two works on resonance absorption in a doubly heterogeneous system with grain structure are presented. First, the effect of a random distribution of particles embedded in graphite diluent on the resonance integral is studied. Next, the 'Accretion' method proposed by Leslie and Jonsson to define the collision probability in a doubly heterogeneous system is applied to evaluate the resonance absorption in coated particles dispersed in fuel pellet of the HTGR. Several optional models are proposed to define the collision rates in the medium with the microscopic heterogeneity. By making use of the collision probability method developed by the present study, the JAERI thermal reactor standard nuclear design code system SRAC has been developed. Results of several benchmark tests for the SRAC are presented. The analyses of critical experiments of the SHE, DCA, and FNR show good agreement of critical masses with their experimental values. (J.P.N.)

  15. Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems

    Science.gov (United States)

    Kollár, Martin

    2012-05-01

    In order to access the cell, all mobile communication technologies use a so-called random-access procedure. For example, in GSM this is represented by sending the CHANNEL REQUEST message from the Mobile Station (MS) to the Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is a question of pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request, or phantom RACH, is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by an ESTABLISH INDICATION sent from the MS to the BTS. In this paper, a mathematical model for evaluation of the Power RACH Busy Threshold (RACHBT) that guarantees a predetermined outage probability on the RACH is described and discussed. It focuses on the Global System for Mobile Communications (GSM); however, the obtained results can be generalized to other mobile technologies (i.e., WCDMA and LTE).
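
    The paper's RACHBT model is not reproduced here; the snippet below only illustrates the generic shape of such a threshold calculation, choosing a power threshold so that the probability of a noise-only ("phantom") access stays below a target level under an assumed exponential noise-power distribution. The noise floor, target, and distributional assumption are all placeholders.

```python
import numpy as np

noise_power_dbm = -110.0          # assumed mean noise floor on the RACH
target_false_alarm = 1e-3         # allowed probability of a phantom access per slot

# For exponentially distributed noise power with mean N, P(P_noise > x) = exp(-x / N).
mean_lin = 10 ** (noise_power_dbm / 10)
threshold_lin = -mean_lin * np.log(target_false_alarm)
threshold_dbm = 10 * np.log10(threshold_lin)
print(f"RACH busy threshold ~ {threshold_dbm:.1f} dBm "
      f"({threshold_dbm - noise_power_dbm:.1f} dB above the mean noise floor)")
```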

  16. Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression

    Science.gov (United States)

    Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.

    2013-10-01

    Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and a patient during induction of controlled hypothermia. Main result. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
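
    A simplified stand-in for the BSP model described above: a latent Gaussian random walk drives the instantaneous suppression probability through a logistic link and is observed as binary suppressed/not-suppressed indicators. A bootstrap particle filter is used here instead of the paper's approximate expectation-maximization smoother, and the data are simulated, so this is only a sketch of the state-space idea.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_particles, sigma = 400, 2000, 0.08
x_true = np.cumsum(rng.normal(0, sigma, T)) - 1.0       # latent random-walk state
p_true = 1 / (1 + np.exp(-x_true))                      # true burst suppression probability
obs = rng.binomial(1, p_true)                           # 1 = suppressed time bin

particles = np.zeros(n_particles)
bsp = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0, sigma, n_particles)   # random-walk propagation
    p = 1 / (1 + np.exp(-particles))
    w = p if obs[t] == 1 else 1 - p                     # Bernoulli observation likelihood
    w = w / w.sum()
    bsp[t] = np.sum(w * p)                              # filtered BSP estimate at time t
    particles = rng.choice(particles, size=n_particles, p=w)    # resample
print(f"mean |BSP - truth| over the trace: {np.abs(bsp - p_true).mean():.3f}")
```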

  17. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  18. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    OpenAIRE

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective: To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods: We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European...

  19. Utilization of transmission probabilities in the calculation of unit-cell by the interface-current method

    International Nuclear Information System (INIS)

    Queiroz Bogado Leite, S. de.

    1989-10-01

    A widely used but physically incorrect assumption in unit-cell calculations by the method of interface currents in cylindrical or spherical geometries is that of isotropic fluxes at the surfaces of the cell annular regions when computing transmission probabilities. In this work, new interface-current relations are developed without making use of this assumption, and the effects on calculated integral parameters are shown for an idealized unit-cell example. (author)

  20. Incorporation of Collision Probability Method in STREAM to Consider Non-uniform Material Composition in Fuel Subregions

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung

    2016-01-01

    STREAM uses a pin-based slowing-down method (PSM) which solves pointwise energy slowing-down problems with sub-divided fuel pellet, and shows a great performance in calculating effective cross-section (XS). Various issues in the conventional resonance treatment methods (i.e., approximations on resonance scattering source, resonance interference effect, and intrapellet self-shielding effect) were successfully resolved by PSM. PSM assumes that a fuel rod has a uniform material composition and temperature even though PSM calculates spatially dependent effective XSs of fuel subregions. When the depletion calculation or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on position. It was reported that the treatment of distributed temperature is important to calculate an accurate fuel temperature coefficient (FTC). In order to avoid the approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method, PSM, used in the transport code STREAM has been enhanced to accurately consider a non-uniform material condition. The method incorporates CPM in computing collision probability of isolated fuel pin. From numerical tests with pin-cell problems, STREAM with the method showed very accurate multiplication factor and FTC results less than 83 pcm and 1.43 % differences from the references, respectively. The original PSM showed larger differences than the proposed method but still has a high accuracy

  1. Incorporation of Collision Probability Method in STREAM to Consider Non-uniform Material Composition in Fuel Subregions

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of)

    2016-10-15

    STREAM uses a pin-based slowing-down method (PSM) which solves pointwise energy slowing-down problems with sub-divided fuel pellet, and shows a great performance in calculating effective cross-section (XS). Various issues in the conventional resonance treatment methods (i.e., approximations on resonance scattering source, resonance interference effect, and intrapellet self-shielding effect) were successfully resolved by PSM. PSM assumes that a fuel rod has a uniform material composition and temperature even though PSM calculates spatially dependent effective XSs of fuel subregions. When the depletion calculation or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on position. It was reported that the treatment of distributed temperature is important to calculate an accurate fuel temperature coefficient (FTC). In order to avoid the approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method, PSM, used in the transport code STREAM has been enhanced to accurately consider a non-uniform material condition. The method incorporates CPM in computing collision probability of isolated fuel pin. From numerical tests with pin-cell problems, STREAM with the method showed very accurate multiplication factor and FTC results less than 83 pcm and 1.43 % differences from the references, respectively. The original PSM showed larger differences than the proposed method but still has a high accuracy.

  2. Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation

    Energy Technology Data Exchange (ETDEWEB)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    2018-01-01

    We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.

  3. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    Science.gov (United States)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  4. How Do High School Students Solve Probability Problems? A Mixed Methods Study on Probabilistic Reasoning

    Science.gov (United States)

    Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick

    2018-01-01

    When studying a complex research phenomenon, a mixed methods design allows one to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…

  5. A research on the importance function used in the calculation of the fracture probability through the optimum method

    International Nuclear Information System (INIS)

    Zegong, Zhou; Changhong, Liu

    1995-01-01

    Building on the use of the original distribution function, shifted by an appropriate distance, as the importance function, this paper takes the variation of the similarity ratio between the original function and the importance function as the objective function; the optimum shifting distance is then obtained by an optimization method. The optimum importance function resulting from the optimization ensures that the number of Monte Carlo simulations is decreased while good estimates of the yearly failure probabilities are still obtained.
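
    A toy illustration of the shifted-importance-function idea on a scalar problem: estimate P(X > 4) for X ~ N(0, 1) using the original density shifted by a distance c as the importance function, and pick the shift that minimizes the estimator's standard error. The grid of shifts, the variance-based objective, and the toy target are assumptions, not the paper's optimization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
threshold, n = 4.0, 50_000

def shifted_is_estimate(shift):
    """Importance-sampling estimate of P(X > threshold) using N(shift, 1) samples."""
    x = rng.normal(shift, 1.0, n)
    w = stats.norm.pdf(x) / stats.norm.pdf(x, loc=shift)     # likelihood ratio
    contrib = (x > threshold) * w
    if not contrib.any():                                    # no hits: useless estimate
        return 0.0, np.inf
    return contrib.mean(), contrib.std(ddof=1) / np.sqrt(n)

results = [(*shifted_is_estimate(c), c) for c in np.arange(0.0, 6.5, 0.5)]
p_hat, se, best_shift = min(results, key=lambda r: r[1])
print(f"best shift = {best_shift:.1f}, estimate = {p_hat:.2e} +/- {se:.1e} "
      f"(exact {stats.norm.sf(threshold):.2e})")
```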

  6. A Modified Generalized Fisher Method for Combining Probabilities from Dependent Tests

    Directory of Open Access Journals (Sweden)

    Hongying (Daisy) Dai

    2014-02-01

    Full Text Available Rapid developments in molecular technology have yielded a large amount of high throughput genetic data to understand the mechanism for complex traits. The increase of genetic variants requires hundreds and thousands of statistical tests to be performed simultaneously in analysis, which poses a challenge to control the overall Type I error rate. Combining p-values from multiple hypothesis testing has shown promise for aggregating effects in high-dimensional genetic data analysis. Several p-value combining methods have been developed and applied to genetic data; see [Dai, et al. 2012b] for a comprehensive review. However, there is a lack of investigations conducted for dependent genetic data, especially for weighted p-value combining methods. Single nucleotide polymorphisms (SNPs) are often correlated due to linkage disequilibrium. Other genetic data, including variants from next generation sequencing, gene expression levels measured by microarray, protein and DNA methylation data, etc. also contain complex correlation structures. Ignoring correlation structures among genetic variants may lead to severe inflation of Type I error rates for omnibus testing of p-values. In this work, we propose modifications to the Lancaster procedure by taking the correlation structure among p-values into account. The weight function in the Lancaster procedure allows meaningful biological information to be incorporated into the statistical analysis, which can increase the power of the statistical testing and/or remove the bias in the process. Extensive empirical assessments demonstrate that the modified Lancaster procedure largely reduces the Type I error rates due to correlation among p-values, and retains considerable power to detect signals among p-values. We applied our method to reassess published renal transplant data, and identified a novel association between B cell pathways and allograft tolerance.
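
    The sketch below shows a Brown-style scaled chi-square adjustment of Fisher's combined statistic for correlated p-values, which is the flavor of correction that the modified Lancaster procedure generalizes with weights. The p-values, the common correlation of 0.5, and the covariance polynomial are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy import stats

pvals = np.array([0.02, 0.04, 0.30, 0.15])                   # illustrative test p-values
rho = np.full((4, 4), 0.5); np.fill_diagonal(rho, 1.0)       # assumed correlation of the tests

k = len(pvals)
fisher_stat = -2 * np.sum(np.log(pvals))                     # Fisher's combined statistic
# polynomial approximation of cov(-2 ln p_i, -2 ln p_j) from the pairwise correlations
off = rho[np.triu_indices(k, 1)]
cov_sum = np.sum(3.263 * off + 0.710 * off**2 + 0.027 * off**3)
mean, var = 2.0 * k, 4.0 * k + 2.0 * cov_sum
c, f = var / (2.0 * mean), 2.0 * mean**2 / var               # scaled chi-square moments
p_combined = stats.chi2.sf(fisher_stat / c, f)
print(f"naive Fisher p = {stats.chi2.sf(fisher_stat, 2 * k):.4f}, "
      f"correlation-adjusted p = {p_combined:.4f}")
```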

  7. ABOUT PROBABILITY OF RESEARCH OF THE NN Ser SPECTRUM BY MODEL ATMOSPHERES METHOD

    OpenAIRE

    Sakhibullin, N. A.; Shimansky, V. V.

    2017-01-01

    The spectrum of the close binary system NN Ser is investigated by the model atmospheres method. It is shown that the atmosphere near the center of the hot spot on the surface of the red dwarf has a powerful chromosphere, arising from heating in the Lyman continuum. Four models of the binary system with various parameters are constructed and their theoretical spectra are obtained. The temperature of the white dwarf Tef = 62000 K, the radius of the red dwarf RT = 0.20139, and the inclination angle of the system i = 82° are determined. ...

  8. On the application of probability representations for estimation of the argon method resolution

    International Nuclear Information System (INIS)

    Kol'tsova, T.V.

    1976-01-01

    By considering the dating of amphiboles and biotites by the argon method, it is shown that the common F and t criteria can be used to reveal any meaningful difference in their ages. The dependence of the alternative inference on possible variations of the active parameters is considered, and a graphical procedure for selecting the optimum number of determinations for a given accuracy of analysis is suggested. The meaningful difference in the ages of amphiboles and biotites from the Northern Ladoga Lake region permits interesting conclusions to be made about the paleothermal history of the investigated rocks.

  9. Forest regulation methods and silvicultural systems: what are they?

    Science.gov (United States)

    Ivan L. Sander; Burnell C. Fischer

    1989-01-01

    "Forest regulation methods" and "silvicultural systems" are important forest resource management concepts but there is much confusion about them. They often mean different things to different individuals. Confusion exists in part because "forest regulation methods" and "silvicultural systems" often use the same terminology. Also...

  10. Analysis of Cleaning Process for Several Kinds of Soil by Probability Density Functional Method.

    Science.gov (United States)

    Fujimoto, Akihiro; Tanaka, Terumasa; Oya, Masaru

    2017-10-01

    A method of analyzing the detergency of various soils by assuming normal distributions for the soil adhesion and soil removal forces was developed by considering the relationship between the soil type and the distribution profile of the soil removal force. The effect of the agitation speed on the soil removal was also analyzed by this method. Washing test samples were prepared by soiling fabrics with individual soils such as particulate soils, oily dyes, and water-soluble dyes. Washing tests were conducted using a Terg-O-Tometer and four repetitive washing cycles of 5 min each. The transition of the removal efficiencies was recorded in order to calculate the mean value (μ_rl) and the standard deviation (σ_rl) of the removal strength distribution. The level of detergency and the temporal alteration in the detergency can be represented by μ_rl and σ_rl, respectively. A smaller σ_rl indicates a smaller increase in the detergency with time, which also indicates the existence of a certain amount of soil with a strong adhesion force. As a general trend, the values of σ_rl were the greatest for the oily soils, followed by those of the water-soluble soils and particulate soils in succession. The relationship between the soil removal processes and the soil adhesion force was expressed on the basis of the transition of the distribution of residual soil. Evaluation of the effects of the agitation speed on μ_rl and σ_rl showed that σ_rl was not affected by the agitation speed; the value of μ_rl for solid soil and oily soil increased with increasing agitation, and the μ_rl of water-soluble soil was not specifically affected by the agitation speed. It can be assumed that the parameter σ_rl is related to the characteristics of the soil and the adhesion condition, and can be applied to estimating the soil removal mechanism.
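
    Read literally, the normal-distribution picture above implies a per-wash removal probability P(removal force > adhesion force) and a residual-soil distribution that shifts toward strongly adhering soil with each cycle. The sketch below simulates that picture with purely illustrative parameter values; it is not the authors' fitted model.

```python
import numpy as np
from scipy import stats

mu_rl, sigma_rl = 0.3, 0.8      # assumed removal-strength distribution (relative units)
mu_ad, sigma_ad = 0.0, 0.5      # assumed adhesion-strength distribution

# single-wash removal probability: P(removal force - adhesion force > 0)
p_single = stats.norm.cdf((mu_rl - mu_ad) / np.hypot(sigma_rl, sigma_ad))
print(f"single-wash removal probability: {p_single:.2f}")

rng = np.random.default_rng(0)
adhesion = rng.normal(mu_ad, sigma_ad, 100_000)        # per-particle adhesion forces
remaining = np.ones(adhesion.size, dtype=bool)
for cycle in range(1, 5):                              # four repetitive washes, as in the tests
    removal = rng.normal(mu_rl, sigma_rl, adhesion.size)
    remaining &= ~(removal > adhesion)                 # survivors are the harder-to-remove soil
    print(f"cycle {cycle}: cumulative removal = {1 - remaining.mean():.1%}")
```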

  11. Advanced RESTART method for the estimation of the probability of failure of highly reliable hybrid dynamic systems

    International Nuclear Information System (INIS)

    Turati, Pietro; Pedroni, Nicola; Zio, Enrico

    2016-01-01

    The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, although their feasibility can be limited by the potentially high computational costs. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors’ knowledge). The estimation accuracy and precision of RESTART highly depend on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, proper IFs are here originally proposed to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in the performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.
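
    To give a flavour of how threshold-based splitting estimates rare failure probabilities, the sketch below implements a generic fixed-effort multilevel-splitting scheme (a close relative of RESTART, not the paper's implementation) on a toy drifted random walk, using the current state value as the importance function. All parameters are illustrative.

```python
# Fixed-effort multilevel splitting on a toy process with negative drift.
# The failure event is "reach the top level before returning below 0".
import numpy as np

rng = np.random.default_rng(1)

def evolve(x, hi, max_steps=100_000):
    """Run the toy process from state x until it reaches `hi` (success)
    or falls back to 0 or below (failure). Returns (reached, final_state)."""
    for _ in range(max_steps):
        x += rng.normal(-0.1, 1.0)   # negative drift makes high levels rare
        if x >= hi:
            return True, x
        if x <= 0.0:
            return False, x
    return False, x

def splitting(levels, n_per_stage=2000):
    """Estimate P(reach levels[-1] before absorption at 0, starting near 0)
    as a product of stage-wise conditional probabilities."""
    states = [1e-9] * n_per_stage
    p_hat = 1.0
    for hi in levels:
        survivors = [x for s in states for ok, x in [evolve(s, hi)] if ok]
        if not survivors:
            return 0.0
        p_hat *= len(survivors) / len(states)
        # split: restart the next stage from states that crossed the threshold
        states = [survivors[i] for i in rng.integers(0, len(survivors), n_per_stage)]
    return p_hat

print(splitting(levels=[2.0, 4.0, 6.0, 8.0]))
```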

  12. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    Science.gov (United States)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  13. BER Analysis Using Beat Probability Method of 3D Optical CDMA Networks with Double Balanced Detection

    Directory of Open Access Journals (Sweden)

    Chih-Ta Yen

    2015-01-01

    Full Text Available This study proposes novel three-dimensional (3D matrices of wavelength/time/spatial code for code-division multiple-access (OCDMA networks, with a double balanced detection mechanism. We construct 3D carrier-hopping prime/modified prime (CHP/MP codes by extending a two-dimensional (2D CHP code integrated with a one-dimensional (1D MP code. The corresponding coder/decoder pairs were based on fiber Bragg gratings (FBGs and tunable optical delay lines integrated with splitters/combiners. System performance was enhanced by the low cross correlation properties of the 3D code designed to avoid the beat noise phenomenon. The CHP/MP code cardinality increased significantly compared to the CHP code under the same bit error rate (BER. The results indicate that the 3D code method can enhance system performance because both the beating terms and multiple-access interference (MAI were reduced by the double balanced detection mechanism. Additionally, the optical component can also be relaxed for high transmission scenery.

  14. Qualification of the calculational methods of the fluence in the pressurised water reactors. Improvement of the cross sections treatment by the probability table method

    International Nuclear Information System (INIS)

    Zheng, S.H.

    1994-01-01

    It is indispensable to know the fluence on the nuclear reactor pressure vessel. The cross sections and their treatment play an important role in this problem. In this study, two ''benchmarks'' have been interpreted with the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it has some problems, such as the difficulty of choosing the weighting function and the need for a large number of energy groups to represent the fluctuations of the cross sections well. In this thesis, we propose a new method called the ''Probability Table Method'' to treat the neutron cross sections. For the qualification, a one-dimensional Monte Carlo neutron transport simulation program has been written; the comparison of the multigroup results and the probability table results shows the advantages of the new method. The probability table has also been introduced into the TRIPOLI program; the calculational results for the iron deep-penetration benchmark are improved in comparison with the experimental results. It is therefore of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs
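
    A toy illustration of the probability-table idea (not the thesis code; real tables are generated with moment-preserving algorithms in cross-section processing systems): within one energy group, the pointwise cross section is replaced by a few (probability, cross-section) bands that a Monte Carlo code can sample from. The resonance shape and all numbers below are invented, and a flat weighting over the group is assumed.

```python
# Build and sample a crude probability table for one energy group.
import numpy as np

def build_probability_table(sigma_pointwise, n_bands=8):
    """Equal-probability bands: sort the pointwise values, split them into
    n_bands chunks, and keep each chunk's probability and mean cross section."""
    s = np.sort(np.asarray(sigma_pointwise, dtype=float))
    bands = np.array_split(s, n_bands)
    probs = np.array([b.size for b in bands], dtype=float) / s.size
    probs /= probs.sum()
    sigmas = np.array([b.mean() for b in bands])
    return probs, sigmas

def sample_sigma(probs, sigmas, rng):
    """Sample a cross-section band according to its probability."""
    return rng.choice(sigmas, p=probs)

rng = np.random.default_rng(0)
# fake resonance-like cross section on a fine energy grid (barns, hypothetical)
e = np.linspace(0.0, 1.0, 5000)
sigma = 10.0 + 300.0 / (1.0 + ((e - 0.5) / 0.01) ** 2)
probs, sigmas = build_probability_table(sigma)
samples = [sample_sigma(probs, sigmas, rng) for _ in range(5)]
print(np.round(probs, 3), np.round(sigmas, 1), np.round(samples, 1))
```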

  15. PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    CERN Document Server

    Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter

    2009-01-01

    Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space in a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...

  16. A Bayesian-probability-based method for assigning protein backbone dihedral angles based on chemical shifts and local sequences

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jun; Liu Haiyan [University of Science and Technology of China, Hefei National Laboratory for Physical Sciences at the Microscale, and Key Laboratory of Structural Biology, School of Life Sciences (China)], E-mail: hyliu@ustc.edu.cn

    2007-01-15

    Chemical shifts contain substantial information about protein local conformations. We present a method to assign individual protein backbone dihedral angles into specific regions on the Ramachandran map based on the amino acid sequences and the chemical shifts of backbone atoms of tripeptide segments. The method uses a scoring function derived from the Bayesian probability for the central residue of a query tripeptide segment to have a particular conformation. The Ramachandran map is partitioned into representative regions at two levels of resolution. The lower resolution partitioning is equivalent to the conventional definitions of different secondary structure regions on the map. At the higher resolution level, the α and β regions are further divided into subregions. Predictions are attempted at both levels of resolution. We compared our method with TALOS using the original TALOS database, and obtained comparable results. Although TALOS may produce the best results with currently available databases which are much enlarged, the Bayesian-probability-based approach can provide a quantitative measure for the reliability of predictions.
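
    The sketch below is a generic naive-Bayes scoring of Ramachandran regions from backbone chemical shifts, included only to illustrate the Bayesian scoring idea; it is not the authors' database-derived scoring function or TALOS, and all priors, means and widths are hypothetical.

```python
# Score each Ramachandran region by prior * product of Gaussian likelihoods of
# the observed backbone chemical shifts (naive-Bayes sketch, hypothetical values).
import numpy as np

REGIONS = {
    # region: (prior, {shift_type: (mean_ppm, sigma_ppm)})
    "alpha": (0.45, {"CA": (58.5, 2.0), "CB": (39.0, 2.5), "HA": (4.0, 0.4)}),
    "beta":  (0.35, {"CA": (55.0, 2.0), "CB": (41.5, 2.5), "HA": (4.8, 0.4)}),
    "other": (0.20, {"CA": (56.5, 2.5), "CB": (40.0, 3.0), "HA": (4.4, 0.5)}),
}

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def region_posteriors(shifts):
    """shifts: dict like {'CA': 59.1, 'CB': 38.6, 'HA': 3.9} for the central residue."""
    logp = {}
    for region, (prior, params) in REGIONS.items():
        lp = np.log(prior)
        for atom, value in shifts.items():
            mu, sigma = params[atom]
            lp += log_gauss(value, mu, sigma)
        logp[region] = lp
    m = max(logp.values())
    z = sum(np.exp(v - m) for v in logp.values())
    return {r: float(np.exp(v - m) / z) for r, v in logp.items()}

print(region_posteriors({"CA": 59.1, "CB": 38.6, "HA": 3.9}))
```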

  17. Rapid, single-step most-probable-number method for enumerating fecal coliforms in effluents from sewage treatment plants

    Science.gov (United States)

    Munoz, E. F.; Silverman, M. P.

    1979-01-01

    A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most probable number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
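
    For readers unfamiliar with how a most-probable-number value is obtained from tube data, the sketch below computes the maximum-likelihood MPN for a generic dilution series; it is not the instrument-specific procedure in the record, and the tube counts and volumes are hypothetical.

```python
# Maximum-likelihood MPN from a dilution series (hypothetical tube data).
# A tube inoculated with v mL is positive with probability 1 - exp(-lambda * v),
# where lambda is the concentration per mL; we solve dlogL/dlambda = 0.
import numpy as np
from scipy.optimize import brentq

def mpn_per_ml(positives, tubes, volumes_ml):
    p = np.asarray(positives, float)
    n = np.asarray(tubes, float)
    v = np.asarray(volumes_ml, float)

    def score(lam):                      # derivative of the log-likelihood
        e = np.exp(-lam * v)
        return np.sum(p * v * e / (1.0 - e) - (n - p) * v)

    # requires at least one positive and one negative tube for a finite root
    return brentq(score, 1e-9, 1e6)

# three dilutions, five tubes each: 10 mL, 1 mL, 0.1 mL of sample per tube
print(round(mpn_per_ml([5, 3, 1], [5, 5, 5], [10.0, 1.0, 0.1]), 3), "organisms/mL")
```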

  18. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    Science.gov (United States)

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  19. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys.

    Science.gov (United States)

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-09-01

    To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. We compared 148 MSM aged 18-64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010-2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%-95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  20. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    Science.gov (United States)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

    The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group ( n = 7) and a similar group ( n = 12), using treatment plan verification software that we developed. The quality factor ( QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP) based on the plan scoring index for verification of different plans for personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based new QF scoring method was adequate for obtaining biological verification quality and organ risk saving using the treatment-planning decision-support software we developed for prostate cancer.

  1. Mixed analytical-stochastic simulation method for the recovery of a Brownian gradient source from probability fluxes to small windows.

    Science.gov (United States)

    Dobramysl, U; Holcman, D

    2018-02-15

    Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.

  2. Method to determine transcriptional regulation pathways in organisms

    Science.gov (United States)

    Gardner, Timothy S.; Collins, James J.; Hayete, Boris; Faith, Jeremiah

    2012-11-06

    The invention relates to computer-implemented methods and systems for identifying regulatory relationships between expressed regulating polypeptides and targets of the regulatory activities of such regulating polypeptides. More specifically, the invention provides a new method for identifying regulatory dependencies between biochemical species in a cell. In particular embodiments, provided are computer-implemented methods for identifying a regulatory interaction between a transcription factor and a gene target of the transcription factor, or between a transcription factor and a set of gene targets of the transcription factor. Further provided are genome-scale methods for predicting regulatory interactions between a set of transcription factors and a corresponding set of transcriptional target substrates thereof.

  3. Methods for estimating annual exceedance-probability streamflows for streams in Kansas based on data through water year 2015

    Science.gov (United States)

    Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.

    2017-08-14

    A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use for streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation effects region and 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the range of drainage areas of the streamgages used in the development of the regional equations.
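
    The regional equations themselves are published in the report; purely to illustrate the log-linear form such regional regression equations typically take, the sketch below evaluates a hypothetical equation of that form. The coefficients are invented for illustration and are not the report's values.

```python
# Hypothetical example of the typical form of a regional peak-flow regression:
# Q_p = 10**a * A**b * P**c, where A is contributing drainage area (mi^2) and
# P is average precipitation (in). Coefficients a, b, c below are invented.
def peak_flow_cfs(area_mi2, precip_in, a=1.5, b=0.65, c=0.80):
    return (10.0 ** a) * (area_mi2 ** b) * (precip_in ** c)

print(round(peak_flow_cfs(area_mi2=120.0, precip_in=32.0), 0), "ft^3/s (hypothetical)")
```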

  4. A general ray-tracing algorithm for the solution of the neutron transport equation by the collision probability method

    International Nuclear Information System (INIS)

    Ball, G.

    1990-01-01

    The development and analysis of methods for generating first-flight collision probabilities in two-dimensional geometries consistent with Light Water Moderated (LWR) fuel assemblies are examined. A new ray-tracing algorithm is discussed. A number of numerical results are given demonstrating the feasibility of this algorithm and the effects of the moderator (and fuel) sectorizations on the resulting flux distributions. The collision probabilities have been introduced and their subsequent utilization in the flux calculation procedures illustrated. A brief description of the Coxy-1 and Coxy-2 programs (which were developed in the Reactor Theory Division of the Atomic Energy Agency of South Africa Ltd) has also been added. 41 figs., 9 tabs., 18 refs

  5. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities......, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...... updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence....

  6. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real valued data. These plots, that are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P
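
    For orientation, the sketch below draws a classical one-sample P-P plot (the generalized construction of the record is not reproduced here); the data are synthetic and the fitted model is a normal distribution.

```python
# Classical one-sample P-P plot: fitted model CDF at the ordered data versus the
# empirical CDF; points near the diagonal indicate a good fit. Synthetic data.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.sort(rng.normal(loc=2.0, scale=1.5, size=200))

model_cdf = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
empirical_cdf = (np.arange(1, x.size + 1) - 0.5) / x.size

plt.plot(model_cdf, empirical_cdf, ".", label="P-P points")
plt.plot([0, 1], [0, 1], "k--", label="perfect fit")
plt.xlabel("fitted model CDF")
plt.ylabel("empirical CDF")
plt.legend()
plt.show()
```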

  7. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
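
    A minimal sketch of the "most flat interval" idea (an editorial illustration, not the authors' implementation): estimate local slopes of log C(r) versus log r by finite differences and pick the fixed-width window whose slopes have the smallest spread; its mean slope is the dimension estimate. The window width and test data are arbitrary.

```python
# Most-flat-interval style dimension estimate from a toy log-log correlation plot.
import numpy as np

def most_flat_interval_dimension(log_r, log_c, window=10):
    slopes = np.gradient(log_c, log_r)          # derivative of the log-log plot
    best, best_spread = None, np.inf
    for i in range(len(slopes) - window):
        chunk = slopes[i:i + window]
        if chunk.std() < best_spread:
            best, best_spread = chunk.mean(), chunk.std()
    return best

# toy correlation integral with dimension 2.0 plus noise (hypothetical)
log_r = np.linspace(-3.0, 0.0, 80)
log_c = 2.0 * log_r + 0.02 * np.random.default_rng(0).normal(size=log_r.size)
print(round(most_flat_interval_dimension(log_r, log_c), 2))
```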

  8. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  9. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  10. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    International Nuclear Information System (INIS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Shen, Aiguo; Hu, Jiming; Jia, Jun

    2013-01-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory. (paper)

  11. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    Science.gov (United States)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
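
    The paper's hybrid Gaussian process (HGP) algorithm is custom and is not reproduced here; the sketch below only shows generic Gaussian-process classification with posterior class probabilities using scikit-learn on synthetic four-feature data, to illustrate the kind of output such a classifier produces.

```python
# Generic GP classification with posterior probabilities on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
clf.fit(X_tr, y_tr)

posterior = clf.predict_proba(X_te[:3])      # class posterior probabilities
print(np.round(posterior, 3))
print("accuracy:", round(clf.score(X_te, y_te), 3))
```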

  12. Preliminary studies of the kinetics of a reactor by the probability method; Etude preliminaire de la cinetique d'un reacteur par la methode des probabilites

    Energy Technology Data Exchange (ETDEWEB)

    Bruna, J G; Brunet, J P; Clouet D' Orval, Ch; Caizergues, R; Verriere, Ph [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1964-07-01

    The α decay constant of prompt neutrons has been studied in the homogeneous plutonium-fueled, light-water-moderated reactor Alecto by the probability method. In this method, the probability of counting one, two, ... neutrons during a given time interval is measured. The value of α can be deduced from this measurement for various subcritical states of the reactor. The experimental results were then compared with values obtained, at the same reactivities, by the pulsed neutron technique. (authors)

  13. Ex Post Regulation as the Method of Sectoral Regulation in Electricity Sector

    Directory of Open Access Journals (Sweden)

    Rafał Nagaj

    2017-10-01

    Full Text Available Aim/purpose - The aim of the article is to present the essence of the ex post approach to sectoral regulation, to show the advantages and disadvantages of ex post regulation and to answer the question of whether it is worth using in the electricity sector. Design/methodology/approach - For this purpose, a critical analysis of the expert literature was carried out, together with an empirical analysis of the European Union countries that have applied ex post regulation in the electricity sector. Two research methods were used: a case study and a comparison of changes in the price and quality of services. The research covered the period 2000-2016. Findings - It was found that ex post regulation reduces regulatory costs, does not adversely affect the quality of service or long-term rates, and gives businesses freedom of decision-making and the ability to react quickly to changes in the economy. The main disadvantages of ex post regulation are the tendency for companies to over-estimate bills for consumers, the difficulty of pursuing claims by consumers and the need to shift regulatory risk to consumers. Research implications/limitations - A research gap was identified, namely the effects of ex post regulation in the electricity sector in the European Union countries where such regulation has been applied. Addressing this gap helps clarify the advantages and disadvantages of ex post regulation and indicates when it is a good moment to implement it in the economy. Further studies of ex post regulation will still be required. Originality/value/contribution - The additional value of the paper is the study of ex post regulation, its essence and types. The article analyzed the effects of ex post regulation in the electricity sector and provided valuable insights into the potential risks associated with this approach to economic regulation.

  14. Application and problems of probability methods in technical safety assessment in the field of nuclear engineering and other technologies

    International Nuclear Information System (INIS)

    Mathiak, E.; Schuetz, B.

    1980-01-01

    The authors explain purpose, latest developments and application of probabilistic methods in safety assessments of nuclear facilities, and of non-nuclear installations. Their findings show that the methods of probabilistic systems analysis and of structural reliability analysis proved to be successful, above all with regard to systematics and reproducibility. Above all probabilistic systems analyses have been applied to a large extent in the Rasmussen study. Although this study has been intended to present objective information on the risks to be expected from nuclear power plant operation, the results of the study have not been accepted by the public as an unbiased presentation. It is worth mentioning that in the opinion of a number of social scientists, solutions accepted by the whole of society cannot be reached by defining and adhering to risk standards, but rather by entering into discussions with those groups directly affected, working out compromises meeting all interests. Risk analyses supply information that facilitates practical planning of emergency measures. A description of probable accidents allows conclusions to be drawn in terms of quality and quantity as to how and to what extent appropriate precautionary measures can be taken and planned. Risk analyses offer the possibility of preventing damage hitherto known only by experience (e.g. through accident analyses) by precalculating possible events, and then initiating the required improvements. It is these positive effects that make up the importance of such analyses. (orig./HSCH) [de

  15. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10(2) CFU/10 cm(2)). Two kits (VIP™ and Petrifilm™) failed to detect 10(4) CFU/10 cm(2). The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  17. Methodical approaches to development of classification state methods of regulation business activity in fishery

    OpenAIRE

    She Son Gun

    2014-01-01

    Approaches to developing a classification of state methods of regulating the economy are considered. On the basis of the review provided, a comprehensive method of state regulation of business activity is substantiated. The proposed principles allow public administration to be improved and can be used in industry concepts and state programmes supporting small business in fishery.

  18. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...

  19. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  20. Quantitative assessment of probability of failing safely for the safety instrumented system using reliability block diagram method

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Zhao, Shoutang; Hu, Bin

    2015-01-01

    Highlights: • Models of PFS for SIS were established using the reliability block diagram. • A more accurate calculation of PFS for SIS can be obtained by using the SL. • Degraded operation of a complex SIS does not affect the availability of the SIS. • The safe undetected failure is the largest contributor to the PFS of the SIS. - Abstract: The spurious trip of a safety instrumented system (SIS) brings great economic losses to production, so ensuring that the SIS is both reliable and available has become a priority. However, the existing models of spurious trip rate (STR) or probability of failing safely (PFS) are oversimplified and inaccurate, and in-depth studies of availability are required to obtain a more accurate PFS for the SIS. Based on an analysis of the factors that influence the PFS of the SIS, a quantitative study of the PFS is carried out using the reliability block diagram (RBD) method, and some application examples are given. The results show that common cause failure increases the PFS; degraded operation does not affect the availability of the SIS; if the equipment is tested and repaired one by one, the unavailability of the SIS can be ignored; the occurrence time of an independent safe undetected failure should correspond to the system lifecycle (SL) rather than the proof test interval; and the independent safe undetected failure is the largest contributor to the PFS of the SIS
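
    As a generic illustration of how block probabilities combine in a reliability block diagram (not the paper's SIS model), the sketch below gives the standard series/parallel combinations for the probability that a voted safety function trips spuriously, assuming independent channels with a common per-channel probability of failing safely; the numerical value is hypothetical.

```python
# Spurious-trip probability for common voting architectures, independent channels.
def pfs_1oo1(p):            # single channel: any safe failure trips the function
    return p

def pfs_1oo2(p):            # 1-out-of-2: either channel's safe failure trips it
    return 1.0 - (1.0 - p) ** 2

def pfs_2oo2(p):            # 2-out-of-2: both channels must fail safely to trip
    return p ** 2

p = 0.01                    # hypothetical per-channel probability of failing safely
for name, f in [("1oo1", pfs_1oo1), ("1oo2", pfs_1oo2), ("2oo2", pfs_2oo2)]:
    print(name, f(p))
```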

  1. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering

    International Nuclear Information System (INIS)

    Varella, Marcio Teixeira do Nascimento

    2001-12-01

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10⁻² eV) the APD spread over a considerable extension, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e⁺–H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Zeff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e⁻–H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds (author)

  2. Method of regulating magnetic field of magnetic pole center

    International Nuclear Information System (INIS)

    Watanabe, Masao; Yamada, Teruo; Kato, Norihiko; Toda, Yojiro; Kaneda, Yasumasa.

    1978-01-01

    Purpose: To provide a method that uses a plurality of magnetic metal pieces of different thicknesses so that the symmetry of the field at the magnetic pole center can be regulated very easily through the combination of these pieces, thereby obtaining a magnetic field of high precision. Method: The regulation of the field at the central part of the magnet does not depend only on machining of the center plug and on axial movement of the trim coil and ion source; by providing a magnetic metal piece such as an iron ring, the primary higher harmonics of the field at the center can be regulated simply, while the position of the ion source slit remains on an equipotential surface of the field. (Yoshihara, H.)

  3. Estimation of flashover voltage probability of overhead line insulators under industrial pollution, based on maximum likelihood method

    International Nuclear Information System (INIS)

    Arab, M.N.; Ayaz, M.

    2004-01-01

    The performance of transmission line insulators is greatly affected by dust, fumes from industrial areas and saline deposits near the coast. Such pollutants, in the presence of moisture, form a coating on the surface of the insulator, which in turn allows the passage of leakage current. This leakage builds up to a point where flashover develops. The flashover is often followed by permanent failure of insulation resulting in prolonged outages. With the increase in system voltage owing to the greater demand for electrical energy over the past few decades, flashover due to pollution has received special attention. The objective of the present work was to study the performance of overhead line insulators in the presence of contaminants such as induced salts. A detailed review of the literature and the mechanisms of insulator flashover due to pollution are presented. Experimental investigations on the behavior of overhead line insulators under industrial salt contamination were carried out. A special fog chamber was designed in which the contamination testing of insulators was carried out. Flashover behavior under various degrees of contamination of insulators with the most common industrial fume components, such as nitrate and sulphate compounds, was studied. A statistical method is developed by substituting the normal distribution parameters, estimated by maximum likelihood, into the probability distribution function. The method gives a high accuracy in the estimation of the 50% flashover voltage, which is then used to evaluate the critical flashover index at various contamination levels. The critical flashover index is a valuable parameter in insulation design for numerous applications. (author)
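
    To make the maximum-likelihood step concrete, the sketch below fits a normal (probit) flashover model to pass/fail test data and reads off the 50% flashover voltage as the fitted mean; the voltages and counts are hypothetical, and this is only a generic illustration of the approach described in the record.

```python
# Maximum-likelihood estimate of the 50% flashover voltage (V50) from pass/fail
# data, assuming P(flashover at V) = Phi((V - mu) / sigma), so V50 = mu.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

volts   = np.array([30, 32, 34, 36, 38, 40, 42, 44], dtype=float)   # kV applied
n_tests = np.array([10, 10, 10, 10, 10, 10, 10, 10])
n_flash = np.array([0, 1, 2, 4, 6, 8, 9, 10])                       # flashovers observed

def neg_log_lik(theta):
    mu, log_sigma = theta
    p = norm.cdf((volts - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(n_flash * np.log(p) + (n_tests - n_flash) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[37.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"V50 ≈ {mu_hat:.1f} kV, sigma ≈ {sigma_hat:.1f} kV")
```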

  4. Methods for ensuring compliance with regulatory requirements: regulators and operators

    International Nuclear Information System (INIS)

    Fleischmann, A.W.

    1989-01-01

    Some of the methods of ensuring compliance with regulatory requirements contained in various radiation protection documents such as Regulations, ICRP Recommendations etc. are considered. These include radiation safety officers and radiation safety committees, personnel monitoring services, dissemination of information, inspection services and legislative power of enforcement. Difficulties in ensuring compliance include outmoded legislation, financial and personnel constraints

  5. Probability tales

    CERN Document Server

    Grinstead, Charles M; Snell, J Laurie

    2011-01-01

    This book explores four real-world topics through the lens of probability theory. It can be used to supplement a standard text in probability or statistics. Most elementary textbooks present the basic theory and then illustrate the ideas with some neatly packaged examples. Here the authors assume that the reader has seen, or is learning, the basic theory from another book and concentrate in some depth on the following topics: streaks, the stock market, lotteries, and fingerprints. This extended format allows the authors to present multiple approaches to problems and to pursue promising side discussions in ways that would not be possible in a book constrained to cover a fixed set of topics. To keep the main narrative accessible, the authors have placed the more technical mathematical details in appendices. The appendices can be understood by someone who has taken one or two semesters of calculus.

  6. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    Science.gov (United States)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
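
    A minimal sketch of the facility-centric calculation described above (an editorial illustration, not the operational tool): given the reported stroke location and its error covariance (the bivariate Gaussian behind the error ellipse), estimate by Monte Carlo the probability that the true strike point lies within a given radius of a point of interest. All coordinates and covariance values are hypothetical.

```python
# Monte Carlo integration of a bivariate Gaussian over a disk around a facility.
import numpy as np

rng = np.random.default_rng(0)

def prob_within_radius(stroke_xy, cov, poi_xy, radius_m, n=1_000_000):
    samples = rng.multivariate_normal(mean=stroke_xy, cov=cov, size=n)
    dist = np.linalg.norm(samples - np.asarray(poi_xy), axis=1)
    return np.mean(dist <= radius_m)

# reported stroke location, its error covariance (m^2), and a facility ~400 m away
cov = np.array([[250.0**2, 0.3 * 250.0 * 180.0],
                [0.3 * 250.0 * 180.0, 180.0**2]])
print(prob_within_radius(stroke_xy=(0.0, 0.0), cov=cov,
                         poi_xy=(400.0, 100.0), radius_m=500.0))
```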

  7. Probability theory

    CERN Document Server

    Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V

    1997-01-01

    This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.

  8. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications....

  9. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia

    Directory of Open Access Journals (Sweden)

    Hubert S. Gabryś

    2018-03-01

    Full Text Available Purpose: The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. Material and methods: A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0–6 months (early), 6–15 months (late), 15–24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. Results: NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior–posterior (AUC = 0.72) and the right–left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right–left (AUCs > 0.78), and the anterior–posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose–volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing

  10. Use of probability methods in prospecting-exploration in looking for oil. Primeneniye veroyatnostnykh metodov v poiskovo-razvedochnykh rabotakh na neft'

    Energy Technology Data Exchange (ETDEWEB)

    Kharbukh, Dzh U; Davton, Dzh Kh; Devis, Dzh K

    1981-01-01

    The experience of using probability methods under different geological conditions in US territory is generalized. The efficiency is shown of using systems analysis, simulation modeling of the prospecting-exploration process and of the conditions governing the arrangement of fields, machine processing of data for plotting different types of structural maps, and probability forecasting of the presence of fields. Special attention is focused on nonstructural traps. A brief dictionary is presented of the terms used in the mathematical apparatus and computer processing in oil geology.

  11. Probability of satellite collision

    Science.gov (United States)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

  12. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

  13. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia.

    Science.gov (United States)

    Gabryś, Hubert S; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark

    2018-01-01

    The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0-6 months (early), 6-15 months (late), 15-24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior-posterior (AUC = 0.72) and the right-left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right-left (AUCs > 0.78), and the anterior-posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose-volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods. We demonstrated that incorporation of organ- and dose-shape descriptors is beneficial for xerostomia prediction in highly conformal radiotherapy treatments. Due to strong reliance on patient-specific, dose-independent factors, our results underscore the need for development of personalized data-driven risk profiles for NTCP models of xerostomia. The facilitated
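
    As a generic illustration of the univariate AUC screening step described above (not the study's pipeline), the sketch below ranks candidate features by the cross-validated AUC of one-feature logistic regression models on synthetic data.

```python
# Univariate AUC screening with one-feature logistic regression (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=150, n_features=12, n_informative=3,
                           random_state=0)

aucs = []
for j in range(X.shape[1]):
    scores = cross_val_predict(LogisticRegression(), X[:, [j]], y,
                               cv=5, method="predict_proba")[:, 1]
    aucs.append((roc_auc_score(y, scores), f"feature_{j}"))

for auc, name in sorted(aucs, reverse=True)[:5]:
    print(f"{name}: AUC = {auc:.2f}")
```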

  14. Calendar methods of fertility regulation: a rule of thumb.

    Science.gov (United States)

    Colombo, B; Scarpa, B

    1996-01-01

    "[Many] illiterate women, particularly in the third world, find [it] difficult to apply usual calendar methods for the regulation of fertility. Some of them are even unable to make simple subtractions. In this paper we are therefore trying to evaluate the applicability and the efficiency of an extremely simple rule which entails only [the ability to count] a number of days, and always the same way." (SUMMARY IN ITA) excerpt

  15. Methods for Reducing Normal Tissue Complication Probabilities in Oropharyngeal Cancer: Dose Reduction or Planning Target Volume Elimination

    Energy Technology Data Exchange (ETDEWEB)

    Samuels, Stuart E.; Eisbruch, Avraham; Vineberg, Karen; Lee, Jae; Lee, Choonik; Matuszak, Martha M.; Ten Haken, Randall K.; Brock, Kristy K., E-mail: kbrock@med.umich.edu

    2016-11-01

    Purpose: Strategies to reduce the toxicities of head and neck radiation (ie, dysphagia [difficulty swallowing] and xerostomia [dry mouth]) are currently underway. However, the predicted benefit of dose and planning target volume (PTV) reduction strategies is unknown. The purpose of the present study was to compare the normal tissue complication probabilities (NTCP) for swallowing and salivary structures in standard plans (70 Gy [P70]), dose-reduced plans (60 Gy [P60]), and plans eliminating the PTV margin. Methods and Materials: A total of 38 oropharyngeal cancer (OPC) plans were analyzed. Standard organ-sparing volumetric modulated arc therapy plans (P70) were created and then modified by eliminating the PTVs and treating the clinical tumor volumes (CTVs) only (C70) or maintaining the PTV but reducing the dose to 60 Gy (P60). NTCP dose models for the pharyngeal constrictors, glottis/supraglottic larynx, parotid glands (PGs), and submandibular glands (SMGs) were analyzed. The minimal clinically important benefit was defined as a mean change in NTCP of >5%. The P70 NTCP thresholds and overlap percentages of the organs at risk with the PTVs (56-59 Gy, vPTV56) were evaluated to identify the predictors for NTCP improvement. Results: With the P60 plans, only the ipsilateral PG (iPG) benefited (23.9% vs 16.2%; P<.01). With the C70 plans, only the iPG (23.9% vs 17.5%; P<.01) and contralateral SMG (cSMG) (NTCP 32.1% vs 22.9%; P<.01) benefited. An iPG NTCP threshold of 20% and 30% predicted NTCP benefits for the P60 and C70 plans, respectively (P<.001). A cSMG NTCP threshold of 30% predicted for an NTCP benefit with the C70 plans (P<.001). Furthermore, for the iPG, a vPTV56 >13% predicted benefit with P60 (P<.001) and C70 (P=.002). For the cSMG, a vPTV56 >22% predicted benefit with C70 (P<.01). Conclusions: PTV elimination and dose-reduction lowered the NTCP of the iPG, and PTV elimination lowered the NTCP of the cSMG. NTCP thresholds and the
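    The NTCP values compared in this study come from organ-specific dose-response models; one widely used functional form (shown here only as a generic illustration, with made-up DVH bins and parameter values rather than the models actually fitted in the paper) is the Lyman-Kutcher-Burman model.

      import numpy as np
      from scipy.stats import norm

      def gEUD(doses, volumes, n):
          # generalized equivalent uniform dose from (dose, fractional volume) DVH bins
          v = np.asarray(volumes, dtype=float)
          v = v / v.sum()
          return np.sum(v * np.asarray(doses, dtype=float) ** (1.0 / n)) ** n

      def lkb_ntcp(doses, volumes, TD50, m, n):
          # Lyman-Kutcher-Burman model: NTCP = Phi((gEUD - TD50) / (m * TD50))
          t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
          return norm.cdf(t)

      doses   = [10, 20, 30, 40, 50]            # Gy, hypothetical parotid DVH bin centres
      volumes = [0.30, 0.25, 0.20, 0.15, 0.10]  # fractional volumes
      print("NTCP = %.1f%%" % (100 * lkb_ntcp(doses, volumes, TD50=39.9, m=0.40, n=1.0)))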

  16. Statistical methods to quantify the effect of mite parasitism on the probability of death in honey bee colonies

    Science.gov (United States)

    Varroa destructor is a mite parasite of European honey bees, Apis mellifera, that weakens the population, can lead to the death of an entire honey bee colony, and is believed to be the parasite with the most economic impact on beekeeping. The purpose of this study was to estimate the probability of ...

  17. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri.

    Science.gov (United States)

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to ann...

  18. Using experts’ consensus (the Delphi method) to evaluate weighting techniques in web surveys not based on probability schemes

    NARCIS (Netherlands)

    Toepoel, V.; Emerson, Hannah

    2017-01-01

    Weighting techniques in web surveys not based on probability schemes are devised to correct biases due to self-selection, undercoverage, and nonresponse. In an interactive panel, 38 survey experts addressed weighting techniques and auxiliary variables in web surveys. Most of them corrected all biases

  19. Safety evaluation and regulation of chemicals. 2. Impact of regulations - improvement of methods

    Energy Technology Data Exchange (ETDEWEB)

    Homburger, F [ed.

    1985-01-01

    This volume assesses the impact of new scientific knowledge on the testing and regulation of chemicals, including food additives, drugs, cosmetics, pesticides, and other commercial substances. Apart from describing the newest tests, regulations, and risk assessment strategies, chapters reflect changes forced by both the growing need for cost containment and the mounting pressure to find alternatives to animal testing. Based on an international congress, the book also brings the advantage of diversity in the background and nationality of the authors, thus allowing a view of central problems according to the different interests of academics, industry scientists, government scientists, and regulators. The book opens with coverage of national and international regulations designed to prevent and control damage to human health and the environment. Topics range from basic problems of policy design and enforcement to the specific requirements for chemical regulation in developing countries. The next chapters cover new tests, systems, and assays used in in vivo safety testing. Readers will find a critical assessment of tests used to determine teratogenicity, mutagenicity, carcinogenicity, neurotoxicity and chemical lethality. Other topics include factors operating in the public perception of chemical hazards, guidelines for decision making in the management and regulation of risks, and future trends in the methodology of safety evaluation. The volume concludes with an overview of in vitro methods for testing hepatotoxicity. Several short-term in vitro test models and limited in vivo bioassays are presented and evaluated in terms of their capacity to substitute for long-term animal studies. Expert and thorough in its coverage, the book offers a wealth of technical and practical information for toxicologists, pharmacologists, industrial policy makers, and government regulators. (orig.). With 67 figs., 34 tabs.

  20. Quantification of the probable effects of alternative in-river harvest regulations on recovery of Snake River fall chinook salmon. Final report

    International Nuclear Information System (INIS)

    Cramer, S.P.; Vigg, S.

    1996-03-01

    The goal of this study was to quantify the probable effects that alternative strategies for managing in-river harvest would have on recovery of Snake River fall chinook salmon. This report presents the analysis of existing data to quantify the way in which various in-river harvest strategies catch Snake River bright (SRB) fall chinook. Because there has been disagreement among experts regarding the magnitude of in-river harvest impacts on Snake River fall chinook, the authors compared the results from using the following three different methods to estimate in-river harvest rates: (1) use of run reconstruction through stock accounting of escapement and landings data to estimate harvest rate of SRB chinook in Zone 6 alone; (2) use of Coded Wire Tag (CWT) recoveries of fall chinook from Lyons Ferry Hatchery in a cohort analysis to estimate age and sex specific harvest rates for Zone 6 and for below Bonneville Dam; (3) comparison of harvest rates estimated for SRB chinook by the above methods to those estimated by the same methods for Upriver Bright (URB) fall chinook

  1. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

    The author compared a recursive digital filter proposed as a detection method for French special nuclear material monitors with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Each of nine test subjects repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal
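    The two conventional schemes compared against the recursive filter can be illustrated on simulated count data: a moving average of the last few one-second counts versus an exponentially weighted recursive filter, each raising an alarm above a fixed threshold. This is a toy sketch with invented count rates, not the monitors' actual detection logic.

      import numpy as np

      rng = np.random.default_rng(0)
      background, source_boost = 50.0, 60.0            # counts per interval (hypothetical)
      counts = rng.poisson(background, 60).astype(float)
      counts[30:33] += rng.poisson(source_boost, 3)    # brief passage of a test source

      def moving_average(x, k=4):
          return np.convolve(x, np.ones(k) / k, mode="same")

      def recursive_filter(x, alpha=0.25):
          # y[i] = alpha*x[i] + (1-alpha)*y[i-1]: past intensity decays exponentially
          y = np.empty_like(x)
          y[0] = x[0]
          for i in range(1, len(x)):
              y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
          return y

      threshold = background + 3 * np.sqrt(background)   # simple 3-sigma alarm level
      print("moving-average alarms :", np.where(moving_average(counts) > threshold)[0])
      print("recursive-filter alarms:", np.where(recursive_filter(counts) > threshold)[0])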

  2. Transition probabilities for atoms

    International Nuclear Information System (INIS)

    Kim, Y.K.

    1980-01-01

    Current status of advanced theoretical methods for transition probabilities for atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test for the theoretical methods

  3. Anomalies in the Fujikawa method using parameter-dependent regulators

    International Nuclear Information System (INIS)

    Urrutia, L.F.; Vergara, J.D.

    1992-01-01

    We propose an extended definition of the regularized Jacobian which allows the calculation of anomalies using parameter-dependent regulators in the Fujikawa approach. This extension incorporates the basic Green's function of the problem in the regularized Jacobian, allowing us to interpret a specific regularization procedure as a way of selecting the finite part of the Green's function, in complete analogy with what is done at the level of the effective action. In this way we are able to consider the effect of counterterms in the regularized Jacobian in order to relate different regularization procedures. We also discuss the ambiguities that arise in our prescription due to some freedom in the place where we can insert the regulator, using charge-conjugation invariance as a guiding principle. The method is applied to the case of vector and axial-vector anomalies in two- and four-dimensional quantum electrodynamics. In the first situation we recover the standard family of anomalies calculated by the point-splitting regularization prescription. We also study in detail an alternative choice in the position of the regulator and we calculate explicitly all the currents that generate the families of anomalies that we are considering. Next we extend the calculation to four dimensions, using the same prescriptions as before, and we compare the results with those obtained from the point-splitting calculation, which we also perform in the case of the vector anomaly. A discussion of the relation among the results obtained by different regularization prescriptions is given in terms of the allowed counterterms in the regularized Jacobian, which are highly constrained by the requirement of charge-conjugation invariance

  4. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...

  5. Method for estimating failure probabilities of structural components and its application to fatigue problem of internally cooled superconductors

    International Nuclear Information System (INIS)

    Shibui, M.

    1989-01-01

    A new method for fatigue-life assessment of a component containing defects is presented, in which a probabilistic approach is incorporated into the CEGB two-criteria method. The present method treats the aspect ratio of the initial defect, the proportional coefficient of the fatigue crack growth law, and the threshold stress intensity range as random variables. Examples are given to illustrate application of the method to the reliability analysis of the conduit for an internally cooled cabled superconductor (ICCS) subjected to cyclic quench pressure. The possible failure modes and the mechanical properties contributing to the fatigue life of the thin conduit are discussed using analytical and experimental results. 9 refs., 9 figs

  6. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    Science.gov (United States)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate by using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without positivity constraints and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local-maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
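    The division of labour described above (a global stochastic search over the non-linear geometry parameters, with the linear slip parameters solved by least squares inside the objective function) is a generic pattern; a toy sketch is given below. The forward model, parameter bounds, and data are placeholders, not the elastic dislocation model or InSAR data used in the study.

      import numpy as np
      from scipy.optimize import dual_annealing

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 200)
      true_width, true_slip = 0.30, np.array([2.0, -1.0])

      def design_matrix(width):
          # toy "geometry -> Green's functions" map standing in for the elastic forward model
          return np.column_stack([np.exp(-x / width), np.exp(-((x - 0.5) / width) ** 2)])

      data = design_matrix(true_width) @ true_slip + 0.05 * rng.standard_normal(x.size)

      def misfit(geom):
          # inner linear step: best-fitting slip for this geometry by least squares
          G = design_matrix(geom[0])
          slip, *_ = np.linalg.lstsq(G, data, rcond=None)
          return np.sum((G @ slip - data) ** 2)

      # outer non-linear step: simulated-annealing search over the geometry parameter
      result = dual_annealing(misfit, bounds=[(0.05, 1.0)], seed=1)
      G_best = design_matrix(result.x[0])
      slip_best, *_ = np.linalg.lstsq(G_best, data, rcond=None)
      print("recovered width %.2f, slip %s" % (result.x[0], np.round(slip_best, 2)))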

  7. Series-parallel method of direct solar array regulation

    Science.gov (United States)

    Gooder, S. T.

    1976-01-01

    A 40 watt experimental solar array was directly regulated by shorting out appropriate combinations of series and parallel segments of a solar array. Regulation switches were employed to control the array at various set-point voltages between 25 and 40 volts. Regulation to within + or - 0.5 volt was obtained over a range of solar array temperatures and illumination levels as an active load was varied from open circuit to maximum available power. A fourfold reduction in regulation switch power dissipation was achieved with series-parallel regulation as compared to the usual series-only switching for direct solar array regulation.

  8. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
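    The underlying task (propagating the distributions of input variables through a function to obtain the output distribution) is nowadays often handled by plain Monte Carlo sampling, as in the sketch below. The load/resistance example and its distributions are invented for illustration and do not reproduce COVAL's numerical transformation algorithm.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # hypothetical inputs: random load effect L and random resistance R of a member
      L = rng.gumbel(loc=100.0, scale=15.0, size=n)
      R = rng.normal(loc=180.0, scale=20.0, size=n)

      margin = R - L                                   # the function of the random variables
      print("mean margin   :", margin.mean())
      print("5th percentile:", np.percentile(margin, 5))
      print("P(failure)    :", np.mean(margin < 0.0))  # probability that load exceeds resistance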

  9. Comparative analysis of methods for modelling the short-term probability distribution of extreme wind turbine loads

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
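    One of the extrapolation routes mentioned (block maxima with an asymptotically Gumbel tail) can be sketched with scipy. The synthetic 10-minute response histories and the target exceedance level below are placeholders, not the wind turbine load data analysed in the thesis.

      import numpy as np
      from scipy.stats import gumbel_r

      rng = np.random.default_rng(0)
      # placeholder: 300 simulated 10-minute response histories, 600 samples each
      histories = rng.gumbel(loc=1.0, scale=0.1, size=(300, 600))

      block_maxima = histories.max(axis=1)        # one extreme per 10-minute block
      loc, scale = gumbel_r.fit(block_maxima)     # fit the asymptotic Gumbel model

      p_exceed = 1.0 / 50_000                     # extrapolate to a much rarer block maximum
      extreme = gumbel_r.ppf(1.0 - p_exceed, loc, scale)
      print("fitted loc=%.3f scale=%.3f, extrapolated extreme=%.3f" % (loc, scale, extreme))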

  10. A new probability density function for spatial distribution of soil water storage capacity leads to SCS curve number method

    OpenAIRE

    Wang, Dingbao

    2018-01-01

    Following the Budyko framework, soil wetting ratio (the ratio between soil wetting and precipitation) as a function of soil storage index (the ratio between soil wetting capacity and precipitation) is derived from the SCS-CN method and the VIC type of model. For the SCS-CN method, soil wetting ratio approaches one when soil storage index approaches infinity, due to the limitation of the SCS-CN method in which the initial soil moisture condition is not explicitly represented. However, for the ...
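    The limiting behaviour quoted above can be seen directly from the SCS-CN runoff equation. A minimal derivation, under the simplifying assumption that the initial abstraction is set to zero so that soil wetting is W = P - Q:

      Q = \frac{P^{2}}{P+S}, \qquad
      W = P - Q = \frac{P\,S}{P+S}, \qquad
      \frac{W}{P} = \frac{S/P}{1 + S/P},

    so the wetting ratio W/P tends to one only in the limit where the storage index S/P goes to infinity, consistent with the behaviour described above.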

  11. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    Science.gov (United States)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
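    The core computation (integrating a bivariate Gaussian position error over a disk that need not be centred on, or even overlap, the error ellipse) is straightforward to reproduce by Monte Carlo. The ellipse parameters and point of interest below are invented, and this is not the operational code used at the spaceport.

      import numpy as np

      rng = np.random.default_rng(0)

      # hypothetical lightning location error ellipse as a bivariate Gaussian, in km
      mean = np.array([1.2, -0.4])             # most likely stroke location
      cov = np.array([[0.50, 0.15],
                      [0.15, 0.20]])           # covariance implied by the error ellipse

      point_of_interest = np.array([0.0, 0.0]) # e.g. a launch pad
      radius = 1.0                             # km

      samples = rng.multivariate_normal(mean, cov, size=1_000_000)
      dist = np.linalg.norm(samples - point_of_interest, axis=1)
      print("P(stroke within %.1f km) ~ %.4f" % (radius, np.mean(dist <= radius)))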

  12. Application and problems of probability methods in technical safety assessment in the field of nuclear engineering and other technologies

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1980-01-01

    On the basis of a deterministic safety concept that has been developed in nuclear engineering, approaches for a probabilistic interpretation of existing safety requirements and for a further risk assessment are described. The procedures in technical reliability analysis and its application in nuclear engineering are discussed. Using the example of a reliability analysis for a reactor protection system, the author discusses to what extent methods of reliability analysis can be used to interpret deterministically derived safety requirements. The author then gives a survey of the current value and application of probabilistic reliability assessments in non-nuclear technology. The last part of this report deals with methods of risk analysis and their use for safety assessment in nuclear engineering. On the basis of WASH-1400, the most important phases and tasks of research work in risk assessment are explained, showing the basic criteria and the methods to be applied in risk analysis. (orig./HSCH) [de

  13. The calibration of rating models: estimation of the probability of default based on advanced pattern classification methods

    CERN Document Server

    Konrad, Paul Markus

    2014-01-01

    All across Europe, a drama of historical proportions is unfolding as the debt crisis continues to rock the worldwide financial landscape. Whilst insecurity rises, the general public, policy makers, scientists and academics are searching high and low for independent and objective analyses that may help to assess this unusual situation. For more than a century, rating agencies have developed methods and standards to evaluate and analyze companies, projects or even sovereign countries. However, due to their dated internal processes, the independence of these rating agencies is being questioned, ra

  14. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming paradigm that has been shown to be effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  15. Panel presentation: Should some type of incentive regulation replace traditional methods for regulating LDCs?

    International Nuclear Information System (INIS)

    Costello, K.W.

    1992-01-01

    State regulators should consider new approaches to regulating LDCs. They should seriously look at different incentive systems, even if only as an experiment, to address the major inefficiencies they see plaguing LDCs. Regulators have become more receptive in recent years to applying different incentive systems for historically heavily regulated industries such as the telecommunications and electric industries. In view of prevailing conditions in the natural gas industry, there is no good reason why regulators should not be as receptive to applying incentive systems for LDCs. For gas services offered in competitive markets, regulators should ask themselves whether regulation is necessary any longer. For services still requiring regulation, regulators should explore whether changes in traditional regulation are needed. While some PUCs have undertaken new regulatory practices, the question before them today is whether they should do more; whether, for example, states should accelerate their efforts toward adopting more flexible pricing and other incentive-based regulations or toward considering deregulating selected services. PUCs have different options. They can choose from among a large number of incentive systems. Their choices should hinge upon what they view as major sources of inefficiencies. For example, if uneconomical bypass is perceived as a problem then different price rules might constitute the cornerstone of an incentive-based policy. On the other hand, if excessive purchased-gas costs seem to be a major problem, a PUC may want to consider abolishing the PGA or modifying it in a form that would eliminate the cost-plus component

  16. Oscillator strengths and transition probabilities from the Breit–Pauli R-matrix method: Ne IV

    Energy Technology Data Exchange (ETDEWEB)

    Nahar, Sultana N., E-mail: nahar@astronomy.ohio-state.edu

    2014-09-15

    The atomic parameters–oscillator strengths, line strengths, radiative decay rates (A), and lifetimes–for fine structure transitions of electric dipole (E1) type for the astrophysically abundant ion Ne IV are presented. The results include 868 fine structure levels with n≤ 10, l≤ 9, and 1/2≤J≤ 19/2 of even and odd parities, and the corresponding 83,767 E1 transitions. The calculations were carried out using the relativistic Breit–Pauli R-matrix method in the close coupling approximation. The transitions have been identified spectroscopically using an algorithm based on quantum defect analysis and other criteria. The calculated energies agree with the 103 observed and identified energies to within 3% or better for most of the levels. Some larger differences are also noted. The A-values show good to fair agreement with the very limited number of available transitions in the table compiled by NIST, but show very good agreement with the latest published multi-configuration Hartree–Fock calculations. The present transitions should be useful for diagnostics as well as for precise and complete spectral modeling in the soft X-ray to infra-red regions of astrophysical and laboratory plasmas. -- Highlights: •The first application of BPRM method for accurate E1 transitions in Ne IV is reported. •Amount of atomic data (n going up to 10) is complete for most practical applications. •The calculated energies are in very good agreement with most observed levels. •Very good agreement of A-values and lifetimes with other relativistic calculations. •The results should provide precise nebular abundances, chemical evolution etc.

  17. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models....

  18. Dysphonic Voice Pattern Analysis of Patients in Parkinson’s Disease Using Minimum Interclass Probability Risk Feature Selection and Bagging Ensemble Learning Methods

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2017-01-01

    Analysis of quantified voice patterns is useful in the detection and assessment of dysphonia and related phonation disorders. In this paper, we first study the linear correlations between 22 voice parameters of fundamental frequency variability, amplitude variations, and nonlinear measures. The highly correlated vocal parameters are combined by using the linear discriminant analysis method. Based on the probability density functions estimated by the Parzen-window technique, we propose an interclass probability risk (ICPR) method to select the vocal parameters with small ICPR values as dominant features and compare with the modified Kullback-Leibler divergence (MKLD) feature selection approach. The experimental results show that the generalized logistic regression analysis (GLRA), support vector machine (SVM), and Bagging ensemble algorithm input with the ICPR features can provide better classification results than the same classifiers with the MKLD selected features. The SVM is much better at distinguishing normal vocal patterns with a specificity of 0.8542. Among the three classification methods, the Bagging ensemble algorithm with ICPR features can identify 90.77% of vocal patterns, with the highest sensitivity of 0.9796 and largest area value of 0.9558 under the receiver operating characteristic curve. The classification results demonstrate the effectiveness of our feature selection and pattern analysis methods for dysphonic voice detection and measurement.
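    A rough reconstruction of the workflow (per-class Parzen-window densities for each voice feature, a feature ranking by how little the class densities overlap, then a bagging ensemble on the selected features) is sketched below on synthetic data. The overlap score used here is a generic stand-in and is not the paper's exact ICPR definition.

      import numpy as np
      from scipy.stats import gaussian_kde
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n, d = 200, 10
      X = rng.standard_normal((n, d))
      y = rng.integers(0, 2, n)
      X[y == 1, :3] += 1.0                     # only the first 3 features carry class information

      def overlap_risk(feature, labels):
          # Parzen-window (Gaussian KDE) class densities; risk ~ area of their overlap
          grid = np.linspace(feature.min(), feature.max(), 200)
          p0 = gaussian_kde(feature[labels == 0])(grid)
          p1 = gaussian_kde(feature[labels == 1])(grid)
          return np.trapz(np.minimum(p0, p1), grid)

      risk = np.array([overlap_risk(X[:, j], y) for j in range(d)])
      selected = np.argsort(risk)[:3]          # keep the features with the smallest overlap
      print("selected features:", selected)

      clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
      print("CV AUC: %.2f" % cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc").mean())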

  19. Development of damage evaluation method on the brittle materials for constructions using microscopic structural dynamics and probability theory

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1997-01-01

    Conventional stress-analysis evaluation of ceramic components relies on an idealized model of a perfectly continuous material. Such an approximate and simplified treatment is considered insufficient for two reasons. First, because the mechanical properties of the materials change with manufacturing conditions and experimental understanding is limited, it is difficult to establish quantitative guidelines for improving materials and structures and to gain a general understanding of how thermo-mechanical properties change under neutron irradiation. Second, because mechanical properties, including fracture conditions under various loading types, vary statistically, the judgement criteria of conventional deterministic evaluation tend to be conservative and lead to inferior performance and economics of the constructions under their service conditions. Therefore, this study plans the following two basic approaches: 1) preparation of a material deformation and fracture model that accounts for the correlation between microscopic/mesoscopic damage and macroscopic behavior, and 2) improvement of the finite element method calculation through parallel treatment for soundness and reliability evaluation of the construction. (G.K.)

  20. Scaling Qualitative Probability

    OpenAIRE

    Burgin, Mark

    2017-01-01

    There are different approaches to qualitative probability, which include subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

  1. Dynamic phase transitions of the Blume–Emery–Griffiths model under an oscillating external magnetic field by the path probability method

    International Nuclear Information System (INIS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-01-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume–Emery–Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, which exhibit the dynamic tricritical point, double critical end point, critical end point, quadrupole point, and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss the dynamic phase diagrams with those obtained within the Glauber-type stochastic dynamics based on the mean-field theory. - Highlights: • Dynamic magnetic behavior of the Blume–Emery–Griffiths system is investigated by using the path probability method. • The time variations of average magnetizations are studied to find the phases. • The temperature dependence of the dynamic magnetizations is investigated to obtain the dynamic phase transition points. • We compare and discuss the dynamic phase diagrams with those obtained within the Glauber-type stochastic dynamics based on the mean-field theory

  2. Dynamic phase transitions of the Blume–Emery–Griffiths model under an oscillating external magnetic field by the path probability method

    Energy Technology Data Exchange (ETDEWEB)

    Ertaş, Mehmet, E-mail: mehmetertas@erciyes.edu.tr; Keskin, Mustafa

    2015-03-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume–Emery–Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, which exhibit the dynamic tricritical point, double critical end point, critical end point, quadrupole point, and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss the dynamic phase diagrams with those obtained within the Glauber-type stochastic dynamics based on the mean-field theory. - Highlights: • Dynamic magnetic behavior of the Blume–Emery–Griffiths system is investigated by using the path probability method. • The time variations of average magnetizations are studied to find the phases. • The temperature dependence of the dynamic magnetizations is investigated to obtain the dynamic phase transition points. • We compare and discuss the dynamic phase diagrams with those obtained within the Glauber-type stochastic dynamics based on the mean-field theory.

  3. Panel presentation: Should some type of incentive regulation replace traditional methods for regulating LDCs?

    International Nuclear Information System (INIS)

    Turner, J.L.

    1992-01-01

    This paper reviews the advantages and disadvantages of using incentive regulation to provide the best service and rates for natural gas consumers and compares it to traditional rate-of-return regulation. It discusses some of the allegations used to prevent incentive regulation, such as the claim that rate-of-return regulation provides an incentive to over-build and pad the rate base, thus creating inefficiencies. The author also feels that strict competition is not necessarily beneficial and that some form of regulation is necessary. The paper goes on to outline the author's ideas of how a successful incentive plan should work while emphasizing his preference for rate-of-return regulation. From the ratepayers' view, the incentives granted should be rewards for improvement in a utility's performance. In other words, there must be clear goals set for management, and their fulfillment or lack of fulfillment should result in rewards or penalties. The author feels that incentive regulation could prove to be appropriate in areas of demand-side management such as energy conservation programs

  4. Continuous-energy adjoint flux and perturbation calculation using the iterated fission probability method in Monte-Carlo code TRIPOLI-4 and underlying applications

    International Nuclear Information System (INIS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.

    2013-01-01

    The first goal of this paper is to present an exact method able to precisely evaluate very small reactivity effects (<10 pcm) with a Monte Carlo code. To this end, it was decided to implement exact perturbation theory in TRIPOLI-4 and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in some other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4 is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the 'direct' estimation of the perturbation. Once again the IFP-based method shows good agreement, for a calculation time far shorter than that of the 'direct' method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high precision. It also offers the possibility to split reactivity contributions by isotope and reaction. Other applications of this perturbation method are presented and tested, such as the calculation of exact kinetic parameters (βeff, Λeff) or sensitivity parameters
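    The role the adjoint flux plays in perturbation estimates can be illustrated on a deliberately tiny problem: a two-group model in which the forward and adjoint fluxes are the right and left eigenvectors of a small multiplication matrix, and a first-order perturbation estimate of the reactivity change is compared with a direct recomputation. The numbers are made up and the toy is unrelated to TRIPOLI-4's continuous-energy IFP implementation.

      import numpy as np

      # toy two-group multiplication operator (invented numbers)
      M = np.array([[0.90, 0.40],
                    [0.30, 0.60]])

      def k_and_flux(A):
          w, v = np.linalg.eig(A)
          i = np.argmax(w.real)
          return w[i].real, np.abs(v[:, i].real)

      k, phi = k_and_flux(M)              # forward eigenpair
      _, phi_adj = k_and_flux(M.T)        # adjoint flux = left eigenvector (same eigenvalue)

      dM = np.array([[0.010, 0.000],
                     [0.000, -0.005]])    # small cross-section perturbation

      # first-order, adjoint-weighted perturbation estimate of the reactivity change
      drho_pt = (phi_adj @ dM @ phi) / (k * (phi_adj @ M @ phi))
      k2, _ = k_and_flux(M + dM)          # "direct" recomputation for comparison
      drho_direct = 1.0 / k - 1.0 / k2
      print("perturbation estimate: %.3e   direct: %.3e" % (drho_pt, drho_direct))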

  5. A new method for estimating the probable maximum hail loss of a building portfolio based on hailfall intensity determined by radar measurements

    Science.gov (United States)

    Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.

    2003-04-01

    Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available from some events while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damages are different to footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated with footprints derived from historical damages being moved around. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceeding frequency curve, which can be used to derive the PML.
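    The final step described (turning a scenario loss set and event return periods into an exceedance-frequency curve and reading off the PML at a 1-in-250-year return period) can be sketched generically; the loss amounts and annual frequencies below are invented.

      import numpy as np

      # hypothetical scenario losses (million CHF) and annual occurrence frequencies (1/yr)
      losses = np.array([5, 12, 20, 35, 60, 90, 150, 240, 400], dtype=float)
      freqs  = np.array([0.5, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002])

      order = np.argsort(losses)[::-1]           # sort from the largest loss downwards
      exceed_freq = np.cumsum(freqs[order])      # annual frequency of exceeding each loss
      loss_sorted = losses[order]

      target = 1.0 / 250.0                       # 1-in-250-year PML definition
      pml = np.interp(target, exceed_freq, loss_sorted)
      print("PML(250 yr) ~ %.0f million" % pml)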

  6. Effect and the probable mechanisms of silibinin in regulating insulin resistance in the liver of rats with non-alcoholic fatty liver

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jiayin; Zhi, Min; Gao, Xiang; Hu, Pinjin; Li, Chujun; Yang, Xiaobo [Department of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province (China)

    2013-03-15

    Our previous study has shown that reduced insulin resistance (IR) was one of the possible mechanisms for the therapeutic effect of silibinin on non-alcoholic fatty liver disease (NAFLD) in rats. In the present study, we investigated the pathways of silibinin in regulating hepatic glucose production and IR amelioration. Forty-five 4- to 6-week-old male Sprague Dawley rats were divided into a control group, an HFD group (high-fat diet for 6 weeks) and an HFD + silibinin group (high-fat diet + 0.5 mg kg⁻¹·day⁻¹ silibinin, starting at the beginning of the protocol). Both subcutaneous and visceral fat was measured. Homeostasis model assessment-IR index (HOMA-IR), intraperitoneal glucose tolerance test and insulin tolerance test (ITT) were performed. The expression of adipose triglyceride lipase (ATGL) and of genes associated with hepatic gluconeogenesis was evaluated. Silibinin intervention significantly protected liver function, down-regulated serum fat, and improved IR, as shown by decreased HOMA-IR and increased ITT slope. Silibinin markedly prevented visceral obesity by reducing visceral fat, enhanced lipolysis by up-regulating ATGL expression and inhibited gluconeogenesis by down-regulating associated genes such as Forkhead box O1, phosphoenolpyruvate carboxykinase and glucose-6-phosphatase. Silibinin was effective in ameliorating IR in NAFLD rats. Reduction of visceral obesity, enhancement of lipolysis and inhibition of gluconeogenesis might be the underlying mechanisms.
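    For readers unfamiliar with the index, HOMA-IR is computed from fasting measurements with the usual formula (fasting glucose in mmol/L times fasting insulin in microU/mL, divided by 22.5). The values below are invented and are not data from this study.

      def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
          # homeostasis model assessment of insulin resistance
          return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

      print(homa_ir(5.5, 12.0))   # hypothetical values -> about 2.9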

  7. Effect and the probable mechanisms of silibinin in regulating insulin resistance in the liver of rats with non-alcoholic fatty liver

    International Nuclear Information System (INIS)

    Yao, Jiayin; Zhi, Min; Gao, Xiang; Hu, Pinjin; Li, Chujun; Yang, Xiaobo

    2013-01-01

    Our previous study has shown that reduced insulin resistance (IR) was one of the possible mechanisms for the therapeutic effect of silibinin on non-alcoholic fatty liver disease (NAFLD) in rats. In the present study, we investigated the pathways of silibinin in regulating hepatic glucose production and IR amelioration. Forty-five 4- to 6-week-old male Sprague Dawley rats were divided into a control group, an HFD group (high-fat diet for 6 weeks) and an HFD + silibinin group (high-fat diet + 0.5 mg kg⁻¹·day⁻¹ silibinin, starting at the beginning of the protocol). Both subcutaneous and visceral fat was measured. Homeostasis model assessment-IR index (HOMA-IR), intraperitoneal glucose tolerance test and insulin tolerance test (ITT) were performed. The expression of adipose triglyceride lipase (ATGL) and of genes associated with hepatic gluconeogenesis was evaluated. Silibinin intervention significantly protected liver function, down-regulated serum fat, and improved IR, as shown by decreased HOMA-IR and increased ITT slope. Silibinin markedly prevented visceral obesity by reducing visceral fat, enhanced lipolysis by up-regulating ATGL expression and inhibited gluconeogenesis by down-regulating associated genes such as Forkhead box O1, phosphoenolpyruvate carboxykinase and glucose-6-phosphatase. Silibinin was effective in ameliorating IR in NAFLD rats. Reduction of visceral obesity, enhancement of lipolysis and inhibition of gluconeogenesis might be the underlying mechanisms

  8. Effect and the probable mechanisms of silibinin in regulating insulin resistance in the liver of rats with non-alcoholic fatty liver

    Directory of Open Access Journals (Sweden)

    Jiayin Yao

    Our previous study has shown that reduced insulin resistance (IR) was one of the possible mechanisms for the therapeutic effect of silibinin on non-alcoholic fatty liver disease (NAFLD) in rats. In the present study, we investigated the pathways of silibinin in regulating hepatic glucose production and IR amelioration. Forty-five 4- to 6-week-old male Sprague Dawley rats were divided into a control group, an HFD group (high-fat diet for 6 weeks) and an HFD + silibinin group (high-fat diet + 0.5 mg kg⁻¹·day⁻¹ silibinin, starting at the beginning of the protocol). Both subcutaneous and visceral fat was measured. Homeostasis model assessment-IR index (HOMA-IR), intraperitoneal glucose tolerance test and insulin tolerance test (ITT) were performed. The expression of adipose triglyceride lipase (ATGL) and of genes associated with hepatic gluconeogenesis was evaluated. Silibinin intervention significantly protected liver function, down-regulated serum fat, and improved IR, as shown by decreased HOMA-IR and increased ITT slope. Silibinin markedly prevented visceral obesity by reducing visceral fat, enhanced lipolysis by up-regulating ATGL expression and inhibited gluconeogenesis by down-regulating associated genes such as Forkhead box O1, phosphoenolpyruvate carboxykinase and glucose-6-phosphatase. Silibinin was effective in ameliorating IR in NAFLD rats. Reduction of visceral obesity, enhancement of lipolysis and inhibition of gluconeogenesis might be the underlying mechanisms.

  9. Probability of Transmission of Malaria from Mosquito to Human Is Regulated by Mosquito Parasite Density in Naïve and Vaccinated Hosts.

    Directory of Open Access Journals (Sweden)

    Thomas S Churcher

    2017-01-01

    Over a century since Ronald Ross discovered that malaria is caused by the bite of an infectious mosquito, it is still unclear how the number of parasites injected influences disease transmission. Currently it is assumed that all mosquitoes with salivary gland sporozoites are equally infectious irrespective of the number of parasites they harbour, though this has never been rigorously tested. Here we analyse >1000 experimental infections of humans and mice and demonstrate a dose-dependency for probability of infection and the length of the host pre-patent period. Mosquitoes with a higher number of sporozoites in their salivary glands following blood-feeding are more likely to have caused infection (and have done so quicker) than mosquitoes with fewer parasites. A similar dose response for the probability of infection was seen for humans given a pre-erythrocytic vaccine candidate targeting circumsporozoite protein (CSP), and in mice with and without transfusion of anti-CSP antibodies. These interventions prevented infection more efficiently from bites made by mosquitoes with fewer parasites. The importance of parasite number has widespread implications across malariology, ranging from our basic understanding of the parasite to how vaccines are evaluated and the way in which transmission should be measured in the field. It also provides direct evidence for why the only registered malaria vaccine, RTS,S, was partially effective in recent clinical trials.
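    The dose-dependency reported here (probability of infection rising with salivary-gland sporozoite load) is the kind of relationship usually summarised with a logistic dose-response fit on the log parasite count. The sketch below uses simulated data, not the experimental infections analysed in the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      log10_sporozoites = rng.uniform(1.0, 5.0, n)                # simulated parasite loads
      true_p = 1.0 / (1.0 + np.exp(-1.5 * (log10_sporozoites - 3.0)))
      infected = rng.binomial(1, true_p)

      model = LogisticRegression().fit(log10_sporozoites.reshape(-1, 1), infected)
      for d in (2, 3, 4):
          p = model.predict_proba([[d]])[0, 1]
          print("P(infection | 10^%d sporozoites) ~ %.2f" % (d, p))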

  10. SOME ASPECTS OF THE USE OF MATHEMATICAL-STATISTICAL METHODS IN THE ANALYSIS OF SOCIO-HUMANISTIC TEXTS

    Directory of Open Access Journals (Sweden)

    Zaira M Alieva

    2016-01-01

    The article analyzes the application of mathematical and statistical methods to the analysis of socio-humanistic texts. It outlines the essence of mathematical and statistical methods and presents examples of their use in the study of humanities and social phenomena. It considers the key issues faced by the expert when applying mathematical-statistical methods in the socio-humanitarian sphere, including the persistent opposition between the socio-humanitarian sciences and mathematics, the difficulty of identifying the object that carries the problem, and the need to use a probabilistic approach. Conclusions are drawn from the results of the study.

  11. A low false negative filter for detecting rare bird species from short video segments using a probable observation data set-based EKF method.

    Science.gov (United States)

    Song, Dezhen; Xu, Yiliang

    2010-09-01

    We report a new filter to assist the search for rare bird species. Since a rare bird only appears in front of a camera with very low occurrence (e.g., less than ten times per year) for very short duration (e.g., less than a fraction of a second), our algorithm must have a very low false negative rate. We verify the bird body axis information with the known bird flying dynamics from the short video segment. Since a regular extended Kalman filter (EKF) cannot converge due to high measurement error and limited data, we develop a novel probable observation data set (PODS)-based EKF method. The new PODS-EKF searches the measurement error range for all probable observation data that ensures the convergence of the corresponding EKF in short time frame. The algorithm has been extensively tested using both simulated inputs and real video data of four representative bird species. In the physical experiments, our algorithm has been tested on rock pigeons and red-tailed hawks with 119 motion sequences. The area under the ROC curve is 95.0%. During the one-year search of ivory-billed woodpeckers, the system reduces the raw video data of 29.41 TB to only 146.7 MB (reduction rate 99.9995%).

  12. On Probability Leakage

    OpenAIRE

    Briggs, William M.

    2012-01-01

    The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events $y$, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.
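    A minimal illustration of the definition: a Gaussian regression model for a quantity that the evidence E says must be non-negative still assigns positive probability to negative values, and that assigned mass is the leakage. The numbers are arbitrary.

      from scipy.stats import norm

      # fitted model M: y ~ Normal(mu, sigma), while E says y >= 0 (e.g. a mass or a count)
      mu, sigma = 2.0, 1.5
      leakage = norm.cdf(0.0, loc=mu, scale=sigma)   # probability M gives to impossible y < 0
      print("probability leakage: %.3f" % leakage)   # about 0.091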

  13. Panel presentation: Should some type of incentive regulation replace traditional methods for regulating LDC's?

    International Nuclear Information System (INIS)

    Farman, R.D.

    1992-01-01

    This paper discusses the wants and fears of gas utility companies with regard to incentive regulation. The idea of replacing traditional rate-of-return regulation with incentive regulation sounds very desirable in that it should provide greater management flexibility, quicker and more streamlined regulatory processes, and utility financial rewards based on how well customer needs are met. However, the main fear is that this could result in arbitrary, inappropriate productivity or efficiency targets, or would embody a risk/reward ratio skewed more heavily toward financial penalties than opportunities to increase earnings. The paper presents some of the obstacles of traditional regulation, which include a lack of incentive to minimize operational costs; a lack of incentive to introduce new technology, products, or services; a lack of the flexibility needed to compete in contestable markets; and the diversion caused by utility managers having to manage the regulatory process rather than delivering value to customers. The paper concludes by comparing the incentive regulation program used in the telecommunications industry to the natural gas industry to demonstrate why the success of the telecommunications model doesn't apply to the gas utilities incentive model

  14. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  15. Paradoxes in probability theory

    CERN Document Server

    Eckhardt, William

    2013-01-01

    Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory.  Some remain the focus of controversy; others have allegedly been solved, however the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies.  Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

  16. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  17. Rapid detection of coliforms in drinking water of Arak city using multiplex PCR method in comparison with the standard method of culture (Most Probable Number)

    Directory of Open Access Journals (Sweden)

    Dehghan fatemeh

    2014-05-01

    Conclusions: The multiplex PCR method, with its shortened operation time, was used for the simultaneous detection of total coliforms and Escherichia coli in the distribution system of Arak city. It is recommended at least as an initial screening test, after which the positive samples could be randomly confirmed by the MPN method.
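    The MPN reference method amounts to a maximum-likelihood estimate of the organism concentration from the pattern of positive tubes in a dilution series. A generic sketch, with made-up tube counts and volumes rather than the Arak data, is:

      import numpy as np
      from scipy.optimize import minimize_scalar

      # hypothetical 3-dilution series: inoculum volumes (mL), tubes per dilution, positive tubes
      volumes   = np.array([10.0, 1.0, 0.1])
      n_tubes   = np.array([5, 5, 5])
      positives = np.array([5, 3, 1])

      def neg_log_likelihood(log_conc):
          conc = np.exp(log_conc)                  # organisms per mL
          p_pos = 1.0 - np.exp(-conc * volumes)    # a tube is positive if it received >= 1 organism
          p_pos = np.clip(p_pos, 1e-12, 1 - 1e-12)
          return -np.sum(positives * np.log(p_pos) + (n_tubes - positives) * np.log(1.0 - p_pos))

      res = minimize_scalar(neg_log_likelihood, bounds=(-10.0, 10.0), method="bounded")
      print("MPN ~ %.2f organisms per mL" % np.exp(res.x))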

  18. Apparatus and test method for characterizing the temperature regulating properties of thermal functional porous polymeric materials.

    Science.gov (United States)

    Yao, Bao-Guo; Zhang, Shan; Zhang, De-Pin

    2017-05-01

    In order to evaluate the temperature regulating properties of thermal functional porous polymeric materials such as fabrics treated with phase change material microcapsules, a new apparatus was developed. The apparatus and the test method can measure the heat flux, temperature, and displacement signals during the dynamic contact and then quickly give an evaluation for the temperature regulating properties by simulating the dynamic heat transfer and temperature regulating process when the materials contact the body skin. A series of indices including the psychosensory intensity, regulating capability index, and relative regulating index were defined to characterize the temperature regulating properties. The measurement principle, the evaluation criteria and grading method, the experimental setup and the test results discussion, and the gage capability analysis of the apparatus are presented. The new apparatus provides a method for the objective measurement and evaluation of the temperature regulating properties of thermal functional porous polymeric materials.

  19. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
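
    A classic instance of this phenomenon, offered here only as an illustration and not taken from the article above, is the matching (derangement) problem: the probability that a random permutation of n items leaves no item in its original place tends to 1/e as n grows. A minimal Python sketch:

```python
# Hypothetical illustration (not from the article): the probability that a
# random permutation of n items has no fixed point tends to 1/e.
import math
import random

def prob_no_fixed_point(n: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of P(a random permutation of n items has no fixed point)."""
    hits = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    estimate = prob_no_fixed_point(20)
    print(f"simulated: {estimate:.4f}   1/e = {1 / math.e:.4f}")
```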

  20. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  1. Assessing representativeness of sampling methods for reaching men who have sex with men: a direct comparison of results obtained from convenience and probability samples.

    Science.gov (United States)

    Schwarcz, Sandra; Spindler, Hilary; Scheer, Susan; Valleroy, Linda; Lansky, Amy

    2007-07-01

    Convenience samples are used to determine HIV-related behaviors among men who have sex with men (MSM) without measuring the extent to which the results are representative of the broader MSM population. We compared results from a cross-sectional survey of MSM recruited from gay bars between June and October 2001 to a random digit dial telephone survey conducted between June 2002 and January 2003. The men in the probability sample were older, better educated, and had higher incomes than men in the convenience sample; the convenience sample enrolled more employed men and men of color. Substance use around the time of sex was higher in the convenience sample, but other sexual behaviors were similar. HIV testing was common among men in both samples. Periodic validation, through comparison of data collected by different sampling methods, may be useful when relying on survey data for program and policy development.

  2. Applied probability and stochastic processes

    CERN Document Server

    Sumita, Ushio

    1999-01-01

    Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...

  3. Modeling Transport in Fractured Porous Media with the Random-Walk Particle Method: The Transient Activity Range and the Particle-Transfer Probability

    International Nuclear Information System (INIS)

    Lehua Pan; G.S. Bodvarsson

    2001-01-01

    Multiscale features of transport processes in fractured porous media make numerical modeling a difficult task, both in conceptualization and computation. Modeling the mass transfer through the fracture-matrix interface is one of the critical issues in the simulation of transport in a fractured porous medium. Because conventional dual-continuum-based numerical methods are unable to capture the transient features of the diffusion depth into the matrix (unless they assume a passive matrix medium), such methods will overestimate the transport of tracers through the fractures, especially for the cases with large fracture spacing, resulting in artificial early breakthroughs. We have developed a new method for calculating the particle-transfer probability that can capture the transient features of diffusion depth into the matrix within the framework of the dual-continuum random-walk particle method (RWPM) by introducing a new concept of activity range of a particle within the matrix. Unlike the multiple-continuum approach, the new dual-continuum RWPM does not require using additional grid blocks to represent the matrix. It does not assume a passive matrix medium and can be applied to the cases where global water flow exists in both continua. The new method has been verified against analytical solutions for transport in the fracture-matrix systems with various fracture spacing. The calculations of the breakthrough curves of radionuclides from a potential repository to the water table in Yucca Mountain demonstrate the effectiveness of the new method for simulating 3-D, mountain-scale transport in a heterogeneous, fractured porous medium under variably saturated conditions
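
    The record above describes a dual-continuum random-walk particle method (RWPM) in which particles may transfer between the fracture and matrix continua. The following is a very rough, hypothetical sketch of that general idea under strong simplifying assumptions; in particular, the constant transfer probability used here merely stands in for the paper's time-dependent probability derived from the transient activity range, which is not reproduced.

```python
# Rough sketch of a dual-continuum random-walk particle step with a
# fracture-matrix transfer probability. The constant p_transfer is a
# placeholder assumption; the paper derives a time-dependent probability
# from the transient activity range of each particle in the matrix.
import numpy as np

rng = np.random.default_rng(7)

def rwpm_step(x, in_fracture, v_frac, d_frac, d_mat, p_transfer, dt):
    """Advance particle positions one step and randomly switch continua."""
    v = np.where(in_fracture, v_frac, 0.0)    # advection only in the fracture continuum
    d = np.where(in_fracture, d_frac, d_mat)  # dispersion (fracture) or diffusion (matrix)
    x_new = x + v * dt + rng.normal(0.0, np.sqrt(2.0 * d * dt), size=x.shape)
    switch = rng.random(x.shape) < p_transfer  # particle transfer between continua
    return x_new, np.logical_xor(in_fracture, switch)

if __name__ == "__main__":
    n = 10_000
    x = np.zeros(n)
    in_frac = np.ones(n, dtype=bool)          # all particles start in the fractures
    for _ in range(500):
        x, in_frac = rwpm_step(x, in_frac, v_frac=1.0, d_frac=0.1, d_mat=1e-3,
                               p_transfer=0.01, dt=0.1)
    print(f"mean travel distance: {x.mean():.2f}, fraction in fractures: {in_frac.mean():.2f}")
```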

  4. Quantum probability measures and tomographic probability densities

    NARCIS (Netherlands)

    Amosov, GG; Man'ko

    2004-01-01

    Using a simple relation between the Dirac delta-function and the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the

  5. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

    A probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method is put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are approximately 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. The study of the shapes of the PDFs clearly indicates that the role played by the time delay is more important than that of the embedding dimension in the reconstruction. The results demonstrate that the PDF method represents a promising numerical approach for observing the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
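
    As an illustration of the general idea (not the authors' exact procedure), a delay-embedding reconstruction and an empirical distribution of phase-point distances can be sketched as follows; the surrogate series and the choice of a histogram-based PDF are assumptions.

```python
# Minimal sketch, assuming the "PDF" is an empirical distribution of distances
# between phase points of the delay-embedded series; the paper's exact
# definition and the RR-interval data are not reproduced here.
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Reconstruct phase points by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def distance_pdf(points: np.ndarray, bins: int = 50):
    """Empirical PDF (normalized histogram) of inter-point distances."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)        # each pair counted once
    return np.histogram(dists[iu], bins=bins, density=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rr = np.cumsum(rng.normal(0.0, 1.0, 2000))    # surrogate series, not real RR intervals
    pts = delay_embed(rr, dim=6, tau=5)
    pdf, edges = distance_pdf(pts[:500])
    print(pdf[:5])
```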

  6. Review of pipe-break probability assessment methods and data for applicability to the advanced neutron source project for Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Fullwood, R.R.

    1989-04-01

    The Advanced Neutron Source (ANS) (Difilippo, 1986; Gamble, 1986; West, 1986; Selby, 1987) will be the world's best facility for low energy neutron research. This performance requires the highest flux density of all non-pulsed reactors, with concomitant low thermal inertia and fast response to upset conditions. One of the primary concerns is that a flow cessation on the order of a second may result in fuel damage. Such a flow stoppage could be the result of a break in the primary piping. This report is a review of methods for assessing pipe break probabilities based on historical operating experience in power reactors, scaling methods, fracture mechanics and fracture growth models. The goal of this work is to develop parametric guidance for the ANS design to make the event highly unlikely. It is also to review and select methods that may be used in an interactive IBM-PC tool providing fast and reasonably accurate models to aid the ANS designers in achieving the safety requirements. 80 refs., 7 figs

  7. System and method for regulating EGR cooling using a rankine cycle

    Science.gov (United States)

    Ernst, Timothy C.; Morris, Dave

    2015-12-22

    This disclosure relates to a waste heat recovery (WHR) system and method for regulating exhaust gas recirculation (EGR) cooling, and more particularly, to a Rankine cycle WHR system and method, including a recuperator bypass arrangement to regulate EGR exhaust gas cooling for engine efficiency improvement and thermal management. This disclosure describes other unique bypass arrangements for increased flexibility in the ability to regulate EGR exhaust gas cooling.

  8. 76 FR 5319 - Regulation of Fuel and Fuel Additives: Alternative Test Method for Olefins in Gasoline

    Science.gov (United States)

    2011-01-31

    ... Regulation of Fuel and Fuel Additives: Alternative Test Method for Olefins in Gasoline AGENCY: Environmental... gasoline. This proposed rule will provide flexibility to the regulated community by allowing an additional... A. Alternative Test Method for Olefins in Gasoline III. Statutory and Executive Order Reviews A...

  9. 76 FR 65382 - Regulation of Fuel and Fuel Additives: Alternative Test Method for Olefins in Gasoline

    Science.gov (United States)

    2011-10-21

    ... Regulation of Fuel and Fuel Additives: Alternative Test Method for Olefins in Gasoline AGENCY: Environmental... gasoline. This final rule will provide flexibility to the regulated community by allowing an additional... Method for Olefins in Gasoline III. Statutory and Executive Order Reviews A. Executive Order 12866...

  10. 26 CFR 1.852-4 - Method of taxation of shareholders of regulated investment companies.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Method of taxation of shareholders of regulated investment companies. 1.852-4 Section 1.852-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE... Investment Trusts § 1.852-4 Method of taxation of shareholders of regulated investment companies. (a...

  11. 26 CFR 1.852-2 - Method of taxation of regulated investment companies.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Method of taxation of regulated investment... Trusts § 1.852-2 Method of taxation of regulated investment companies. (a) Imposition of normal tax and... for partially tax-exempt interest provided by section 242. (b) Taxation of capital gains—(1) In...

  12. Comparison of the quantitative dry culture methods with both conventional media and most probable number method for the enumeration of coliforms and Escherichia coli/coliforms in food.

    Science.gov (United States)

    Teramura, H; Sota, K; Iwasaki, M; Ogihara, H

    2017-07-01

    Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples. The correlation coefficients for E. coli detection of Sanita-kun™ EC were more than 0·95 for all comparisons. There were no significant differences in any comparison when conducting a one-way analysis of variance (anova). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliform. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms suffer enzymatic coloration due to the breakdown of chromogenic substrates by food lactase. The novel sheet culture media, which have a film layer to avoid coloration by food lactase, have been developed for the enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media had comparable performance with reference methods and less interference by food lactase. These media have a possibility not only
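
    For readers unfamiliar with how an MPN value is obtained, the following is a minimal sketch of the maximum-likelihood calculation behind a three-tube most-probable-number estimate; the dilution volumes and positive-tube pattern are illustrative, not data from the study.

```python
# Minimal sketch of the maximum-likelihood calculation behind an MPN estimate;
# the dilution scheme and positive-tube pattern below are illustrative only.
import math

def mpn_estimate(volumes, tubes, positives, lo=1e-6, hi=1e6, iters=200):
    """Solve the MPN likelihood equation by bisection on a log scale.

    volumes[i]   sample amount inoculated per tube at dilution i (e.g. g or ml)
    tubes[i]     number of tubes at dilution i
    positives[i] number of positive tubes at dilution i
    Valid when at least one tube is positive and at least one is negative.
    """
    def score(lam):
        # root of this score function is the maximum-likelihood MPN
        return sum(g * v / (1.0 - math.exp(-lam * v)) for g, v in zip(positives, volumes)) \
               - sum(n * v for n, v in zip(tubes, volumes))

    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

if __name__ == "__main__":
    # three-tube series at 0.1, 0.01 and 0.001 g per tube, positive pattern 3-2-1
    print(f"MPN per gram: {mpn_estimate([0.1, 0.01, 0.001], [3, 3, 3], [3, 2, 1]):.0f}")
```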

  13. Contributions to quantum probability

    International Nuclear Information System (INIS)

    Fritz, Tobias

    2010-01-01

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels 'possible to occur' or 'impossible to occur' to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome

  14. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels 'possible to occur' or 'impossible to occur' to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

  15. Toward a generalized probability theory: conditional probabilities

    International Nuclear Information System (INIS)

    Cassinelli, G.

    1979-01-01

    The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

  16. Multicritical phase diagrams of the Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: the pair approximation and the path probability method with pair distribution

    International Nuclear Information System (INIS)

    Keskin, Mustafa; Erdinc, Ahmet

    2004-01-01

    As a continuation of the previously published work, the pair approximation of the cluster variation method is applied to study the temperature dependences of the order parameters of the Blume-Emery-Griffiths model with repulsive biquadratic coupling on a body centered cubic lattice. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. We study the dynamics of the model by the path probability method with pair distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagram in addition to the equilibrium phase diagram, and the first-order phase transition line for the unstable branches of the quadrupole order parameter is also superimposed on the phase diagrams. It is found that the metastable phase diagram and the first-order phase boundary for the unstable quadrupole order parameter always exist at low temperatures, which is consistent with experimental and theoretical works.

  17. Sizing of Compression Coil Springs Gas Regulators Using Modern Methods CAD and CAE

    Directory of Open Access Journals (Sweden)

    Adelin Ionel Tuţă

    2010-10-01

    Full Text Available This paper presents a method for sizing the compression coil springs in the composition of gas regulators, using CAD (Computer Aided Design) and CAE (Computer Aided Engineering) techniques. The sizing aims to optimize the functioning of the regulators under dynamic industrial and household conditions. A gas regulator is a device that automatically and continuously adjusts the output gas pressure to keep it within pre-set limits at varying flow and input pressure. The performance of pressure regulators, as automatic systems, depends on their behaviour under dynamic operation. Optimization of the time constant of the pneumatic actuators which drive gas regulators leads to better functioning under dynamic conditions.

  18. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    Science.gov (United States)

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

    We compared methods to control bias and confounding in observational studies, including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost, indicating that IPW and sIPW weighting were ineffective. Entropy balance demonstrated the bias-variance tradeoff, achieving higher estimate accuracy yet lower estimate precision compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.
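
    As a minimal illustration of the weighting approach the study compares entropy balance against, the sketch below reweights a simulated observed cohort toward a simulated target population with inverse probability (odds) weights and checks covariate balance with the absolute standardized mean difference; entropy balancing itself requires a constrained optimizer and is not shown.

```python
# Minimal sketch of inverse probability (odds) weighting toward a target
# population and an ASMD balance check, on simulated data; entropy balancing
# itself needs a constrained optimizer and is not shown here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# observed (exposed) cohort and target (unexposed) cohort with shifted covariates
X_obs = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(2000, 2))
X_tgt = rng.normal(loc=[0.3, -0.2], scale=1.0, size=(1000, 2))

# model membership in the target population given the covariates
X = np.vstack([X_obs, X_tgt])
z = np.concatenate([np.zeros(len(X_obs)), np.ones(len(X_tgt))])
p_target = LogisticRegression().fit(X, z).predict_proba(X_obs)[:, 1]

w = p_target / (1.0 - p_target)   # odds weights move the observed cohort toward the target
w_stab = w / w.mean()             # stabilized weights (sIPW)

def asmd(x_obs, x_tgt, weights):
    """Weighted absolute standardized mean difference for one covariate."""
    weighted_mean = np.average(x_obs, weights=weights)
    pooled_sd = np.sqrt((x_obs.var() + x_tgt.var()) / 2.0)
    return abs(weighted_mean - x_tgt.mean()) / pooled_sd

for j in range(X_obs.shape[1]):
    before = asmd(X_obs[:, j], X_tgt[:, j], np.ones(len(X_obs)))
    after = asmd(X_obs[:, j], X_tgt[:, j], w_stab)
    print(f"covariate {j}: ASMD unweighted = {before:.3f}, sIPW-weighted = {after:.3f}")
```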

  19. Functional Imaging of Autonomic Regulation: Methods and Key Findings

    Directory of Open Access Journals (Sweden)

    Paul M Macey

    2016-01-01

    Full Text Available Central nervous system processing of autonomic function involves a network of regions throughout the brain which can be visualized and measured with neuroimaging techniques, notably functional magnetic resonance imaging (fMRI). The development of fMRI procedures has both confirmed and extended earlier findings from animal models, and from human stroke and lesion studies. Assessments with fMRI can elucidate interactions between different central sites in regulating normal autonomic patterning, and demonstrate how disturbed systems can interact to produce aberrant regulation during autonomic challenges. Understanding autonomic dysfunction in various illnesses reveals mechanisms that potentially lead to interventions in the impairments. The objectives here are to: (1) describe the fMRI neuroimaging methodology for assessment of autonomic neural control, (2) outline the widespread, lateralized distribution of function in autonomic sites in the normal brain, which includes structures from the neocortex through the medulla and cerebellum, (3) illustrate the importance of the time course of neural changes when coordinating responses, and how those patterns are impacted in conditions of sleep-disordered breathing, and (4) highlight opportunities for future research studies with emerging methodologies. Methodological considerations specific to autonomic testing include timing of challenges relative to the underlying fMRI signal, spatial resolution sufficient to identify autonomic brainstem nuclei, blood pressure and blood oxygenation influences on the fMRI signal, and the sustained timing, often measured in minutes, of challenge periods and recovery. Key findings include the lateralized nature of autonomic organization, which is reminiscent of asymmetric motor, sensory and language pathways. Testing brain function during autonomic challenges demonstrates closely integrated timing of responses in connected brain areas, and the involvement with

  20. [Theory analysis and clinical application of spirit-regulating and pain-relieving acupuncture method].

    Science.gov (United States)

    Chen, Liang; Tang, Lewei; Du, Huaibin; Zheng, Hui; Liang, Fanrong

    2015-04-01

    The theoretical foundation and scientific connotation of the spirit-regulating and pain-relieving acupuncture method, as well as its clinical application for pain, are discussed. During spirit regulation, attention should be paid to regulating the heart and brain, while acupoints should be selected mainly from the Heart Meridian, Pericardium Meridian and Governor Vessel. The method has significant efficacy for refractory pain in clinical treatment. The spirit-regulating and pain-relieving acupuncture method is a development of acupuncture treating the spirit, and it is an important method for pain in the clinic. Modulation of the sensitization of the pain center and of brain function is considered one of the mechanisms of the spirit-regulating and pain-relieving acupuncture method.

  1. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
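
    The article supplies sample R code; the sketch below illustrates the same idea in Python on simulated data (not the appendicitis or Pima Indians data sets): a regression-consistent random forest fitted to a 0/1 response yields estimates of the individual probabilities P(Y = 1 | X).

```python
# Minimal Python sketch (the paper itself provides R code) of a probability
# machine: a regression-consistent random forest fitted to a 0/1 response
# estimates P(Y = 1 | X). Data are simulated, not the paper's data sets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
true_p = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))  # true P(Y=1|X)
y = rng.binomial(1, true_p)                                      # binary response

# regressing the 0/1 outcome makes the forest prediction a probability estimate
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=25, random_state=0)
rf.fit(X, y)
p_hat = rf.predict(X)

print(f"mean absolute error against the true probability: {np.abs(p_hat - true_p).mean():.3f}")
```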

  2. Quantifying Hydroperiod, Fire and Nutrient Effects on the Composition of Plant Communities in Marl Prairie of the Everglades: a Joint Probability Method Based Model

    Science.gov (United States)

    Zhai, L.

    2017-12-01

    Plant communities can be simultaneously affected by human activities and climate change, and quantifying and predicting this combined effect with an appropriate, field-validated model framework is complex but very useful for conservation management. Plant communities in the Everglades provide a unique set of conditions for developing and validating such a model framework, because they are experiencing both intensive human activities (such as changes in hydroperiod caused by drainage and restoration projects, nutrients from upstream agriculture, prescribed fire, etc.) and climate changes (such as warming, changing precipitation patterns, sea level rise, etc.). More importantly, previous research has focused on plant communities in the slough ecosystem (including ridge, slough and their tree islands); very few studies consider the marl prairie ecosystem. Compared with the slough ecosystem, which remains consistently flooded almost year-round, the marl prairie has a relatively shorter hydroperiod (flooded only in the wet season of the year). Therefore, plant communities of the marl prairie may be more strongly affected by hydroperiod change. In addition to hydroperiod, fire and nutrients also affect the plant communities in the marl prairie. Therefore, to quantify the combined effects of water level, fire, and nutrients on the composition of the plant communities, we are developing a vegetation dynamics model based on a joint probability method. Further, the model is being validated against field data on changes of vegetation assemblages along environmental gradients in the marl prairie. Our poster showed preliminary data from our current project.

  3. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-01-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
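
    For reference, a minimal sketch of the standard LKB calculation referred to above is given below; the DVH and the parameter values are illustrative placeholders, not the Emami, Schultheiss, or refitted values reported in the study.

```python
# Minimal sketch of the standard LKB NTCP calculation; the DVH and parameter
# values below are illustrative placeholders, not values from the study.
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses, volumes, n, m, td50):
    """LKB normal tissue complication probability from a differential DVH.

    doses[i]   dose in DVH bin i (Gy); volumes[i] fractional volume (sums to 1)
    n          volume-effect parameter; m slope parameter; td50 50% tolerance dose (Gy)
    """
    # generalized equivalent uniform dose: gEUD = (sum_i v_i * D_i^(1/n))^n
    geud = float(np.sum(np.asarray(volumes) * np.asarray(doses) ** (1.0 / n)) ** n)
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # NTCP = Phi(t), standard normal CDF

if __name__ == "__main__":
    doses = [5.0, 10.0, 15.0, 20.0]       # hypothetical spinal cord DVH bins
    volumes = [0.4, 0.3, 0.2, 0.1]
    print(f"NTCP = {lkb_ntcp(doses, volumes, n=0.05, m=0.175, td50=66.5):.2e}")
```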

  4. Introduction to probability theory with contemporary applications

    CERN Document Server

    Helms, Lester L

    2010-01-01

    This introduction to probability theory transforms a highly abstract subject into a series of coherent concepts. Its extensive discussions and clear examples, written in plain language, expose students to the rules and methods of probability. Suitable for an introductory probability course, this volume requires abstract and conceptual thinking skills and a background in calculus.Topics include classical probability, set theory, axioms, probability functions, random and independent random variables, expected values, and covariance and correlations. Additional subjects include stochastic process

  5. Philosophical theories of probability

    CERN Document Server

    Gillies, Donald

    2000-01-01

    The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.

  6. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned

  7. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  8. Probability and rational choice

    Directory of Open Access Journals (Sweden)

    David Botting

    2014-05-01

    Full Text Available http://dx.doi.org/10.5007/1808-1711.2014v18n1p1 In this paper I will discuss the rationality of reasoning about the future. There are two things that we might like to know about the future: which hypotheses are true and what will happen next. To put it in philosophical language, I aim to show that there are methods by which inferring to a generalization (selecting a hypothesis) and inferring to the next instance (singular predictive inference) can be shown to be normative and the method itself shown to be rational, where this is due in part to being based on evidence (although not in the same way) and in part on a prior rational choice. I will also argue that these two inferences have been confused, being distinct not only conceptually (as nobody disputes) but also in their results (the value given to the probability of the hypothesis not being in general that given to the next instance), and that methods that are adequate for one are not by themselves adequate for the other. A number of debates over method founder on this confusion and do not show what the debaters think they show.

  9. Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets

    International Nuclear Information System (INIS)

    Forsberg, Kaj; Fritz, Peter

    2003-11-01

    The regulation of distribution utilities is evolving from the traditional approach based on a cost of service or rate of return remuneration, to ways of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003 conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine different countries and backgrounds (academic, industry and regulatory) presented their experiences and most recent work on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document where we focus on a few central themes and experiences from different countries.

  10. Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets

    Energy Technology Data Exchange (ETDEWEB)

    Forsberg, Kaj; Fritz, Peter

    2003-11-01

    The regulation of distribution utilities is evolving from the traditional approach based on a cost of service or rate of return remuneration, to ways of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003 conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine different countries and backgrounds (academic, industry and regulatory) presented their experiences and most recent work on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document where we focus on a few central themes and experiences from different countries.

  11. Evaluation of carrier collection probability in bifacial interdigitated-back-contact crystalline silicon solar cells by the internal quantum efficiency mapping method

    Science.gov (United States)

    Tachibana, Tomihisa; Tanahashi, Katsuto; Mochizuki, Toshimitsu; Shirasawa, Katsuhiko; Takato, Hidetaka

    2018-04-01

    Bifacial interdigitated-back-contact (IBC) silicon solar cells with a high bifaciality of 0.91 were fabricated. Screen printing and firing technology were used to reduce the production cost. For the first time, the relationship between the rear side structure and carrier collection probability was evaluated using internal quantum efficiency (IQE) mapping. The measurement results showed that the screen-printed electrode and back surface field (BSF) area led to low IQE. The low carrier collection probability in the BSF area can be explained by electrical shading effects. Thus, it is clear that the IQE mapping system is useful for evaluating IBC cells.

  12. Foundations of probability

    International Nuclear Information System (INIS)

    Fraassen, B.C. van

    1979-01-01

    The interpretation of probabilities in physical theories, whether quantum or classical, is considered. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are, formally speaking, classical (Kolmogoroff) probabilities; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of an observable Q calculated with probability measures P(μ, Q) for states μ; hence, of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)

  13. The quantum probability calculus

    International Nuclear Information System (INIS)

    Jauch, J.M.

    1976-01-01

    The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)

  14. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio

  15. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  16. Knowledge typology for imprecise probabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  17. Statistical probability tables CALENDF program

    International Nuclear Information System (INIS)

    Ribon, P.

    1989-01-01

    The purpose of the probability tables is twofold: to obtain a dense data representation, and to calculate integrals by quadratures. They are mainly used in the USA for Monte Carlo calculations and in the USSR and Europe for self-shielding calculations by the sub-group method. The moment probability tables, in addition to providing a more substantial mathematical basis and calculation methods, are adapted for condensation and mixture calculations, which are the crucial operations for reactor physics specialists. However, their extension is limited by the statistical hypothesis they imply. Efforts are being made to remove this obstacle, at the cost, it must be said, of greater complexity

  18. Effect of Methods of Learning and Self Regulated Learning toward Outcomes of Learning Social Studies

    Science.gov (United States)

    Tjalla, Awaluddin; Sofiah, Evi

    2015-01-01

    This research aims to reveal the influence of learning methods and self-regulated learning on students' learning scores in the Social Studies subject. The research was done in an Islamic Junior High School (MTs Manba'ul Ulum), Batuceper, Tangerang, using a quasi-experimental method. The research employed a simple random sampling technique with 28 students. Data were…

  19. Qualification of the calculational methods of the fluence in the pressurised water reactors. Improvement of the cross sections treatment by the probability table method; Qualification des methodes de calculs de fluence dans les reacteurs a eau pressurisee. Amelioration du traitement des sections efficaces par la methode des tables de probabilite

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, S H

    1994-01-01

    It is indispensable to know the fluence on the nuclear reactor pressure vessel. The cross sections and their treatment play an important role in this problem. In this study, two 'benchmarks' have been interpreted by the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it presents some problems, such as the difficulty of choosing the weighting function and the need for a great number of energy groups to represent the cross section's fluctuations well. In this thesis, we propose a new method called the 'probability table method' to treat the neutron cross sections. For the qualification, a program simulating neutron transport by the Monte Carlo method in one dimension has been written; the comparison of multigroup results and probability table results shows the advantages of this new method. The probability table has also been introduced into the TRIPOLI program; the calculational results for the iron deep penetration benchmark have been improved in comparison with the experimental results. It is therefore of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs.
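
    To illustrate the basic idea of a probability table (not the CALENDF/TRIPOLI implementation), the sketch below replaces a fluctuating pointwise cross section within one energy group by a small set of equal-probability bands and uses the resulting table as a quadrature for a flux-weighted effective cross section; the cross-section data are synthetic.

```python
# Minimal sketch of a probability table: within one energy group the fluctuating
# pointwise cross section is replaced by a few (probability, cross-section)
# bands and the table is used as a quadrature; all data here are synthetic.
import numpy as np

def build_probability_table(sigma_pointwise: np.ndarray, n_bands: int = 8):
    """Equal-probability bands: probabilities p_k and band-averaged sigma_k."""
    sigma_sorted = np.sort(sigma_pointwise)
    bands = np.array_split(sigma_sorted, n_bands)
    probs = np.array([len(b) / len(sigma_sorted) for b in bands])
    sigmas = np.array([b.mean() for b in bands])
    return probs, sigmas

def table_quadrature(probs, sigmas, f):
    """Approximate the group average <f(sigma)> with the probability table."""
    return float(np.sum(probs * f(sigmas)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sigma = np.exp(rng.normal(np.log(20.0), 1.0, 50_000))   # resonance-like fluctuations (barns)
    p, s = build_probability_table(sigma)
    s0 = 50.0  # illustrative background (dilution) cross section
    # flux-weighted effective cross section: <sigma/(sigma+s0)> / <1/(sigma+s0)>
    eff = table_quadrature(p, s, lambda x: x / (x + s0)) / table_quadrature(p, s, lambda x: 1.0 / (x + s0))
    exact = np.mean(sigma / (sigma + s0)) / np.mean(1.0 / (sigma + s0))
    print(f"table: {eff:.2f} barns   pointwise: {exact:.2f} barns")
```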

  20. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  1. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...

  2. Comparing the mannitol-egg yolk-polymyxin agar plating method with the three-tube most-probable-number method for enumeration of Bacillus cereus spores in raw and high-temperature, short-time pasteurized milk.

    Science.gov (United States)

    Harper, Nigel M; Getty, Kelly J K; Schmidt, Karen A; Nutsch, Abbey L; Linton, Richard H

    2011-03-01

    The U.S. Food and Drug Administration's Bacteriological Analytical Manual recommends two enumeration methods for Bacillus cereus: (i) a standard plate count method with mannitol-egg yolk-polymyxin (MYP) agar and (ii) a most-probable-number (MPN) method with tryptic soy broth (TSB) supplemented with 0.1% polymyxin sulfate. This study compared the effectiveness of the MYP and MPN methods for detecting and enumerating B. cereus in raw and high-temperature, short-time pasteurized skim (0.5%), 2%, and whole (3.5%) bovine milk stored at 4°C for 96 h. Each milk sample was inoculated with B. cereus EZ-Spores and sampled at 0, 48, and 96 h after inoculation. There were no differences (P > 0.05) in B. cereus populations among sampling times for all milk types, so data were pooled to obtain overall mean values for each treatment. The overall B. cereus population mean of pooled sampling times for the MPN method (2.59 log CFU/ml) was greater (P < 0.05) than that for the MYP plate count method. B. cereus populations in inoculated milk samples ranged from 2.36 to 3.46 and 2.66 to 3.58 log CFU/ml for the MYP plate count and MPN methods, respectively, which is below the level necessary for toxin production. The MPN method recovered more B. cereus, which makes it useful for validation research. However, the MYP plate count method for enumeration of B. cereus also had advantages, including its ease of use and faster time to results (2 versus 5 days for MPN).

  3. Janus-faced probability

    CERN Document Server

    Rocchi, Paolo

    2014-01-01

    The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

  4. Methods for growth regulation of greenhouse produced ornamental pot- and bedding plants – a current review

    Directory of Open Access Journals (Sweden)

    Bergstrand Karl-Johan I.

    2017-06-01

    Full Text Available Chemical plant growth regulators (PGRs) are used in the production of ornamental potted and bedding plants. Growth control is needed for maximizing production per unit area, reducing transportation costs and obtaining the desired visual quality. However, the use of PGRs is associated with toxicity risks to humans and the environment. In many countries the availability of PGRs is restricted, as few substances are registered for use. A number of alternative methods have been suggested. The methods include genetic methods (breeding) and crop cultivation practices such as fertigation, temperature and light management. A lot of research into “alternative” growth regulation was performed during the 1980s-1990s, revealing several possible ways of using different climatic factors to optimize plant growth with respect to plant height. In recent years, the interest in climatic growth regulation has been resurrected, not least due to the coming phase-out of the plant growth regulator chlormequat chloride (CCC). Today, authorities in many countries are aiming towards reducing the use of agrochemicals. At the same time, there is a strong demand from consumers for products produced without chemicals. This article provides a broad overview of available methods for non-chemical growth control. It is concluded that a combination of plant breeding and management of temperature, fertigation and light has the potential of replacing chemical growth regulators in the commercial production of ornamental pot- and bedding plants.

  5. DECOFF Probabilities of Failed Operations

    DEFF Research Database (Denmark)

    Gintautas, Tomas

    2015-01-01

    A statistical procedure of estimation of Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. Also safety factor influence is investigated. DECOFF statistical method is benchmarked against standard Alpha-factor...

  6. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

  7. The concept of probability

    International Nuclear Information System (INIS)

    Bitsakis, E.I.; Nicolaides, C.A.

    1989-01-01

    The concept of probability is now, and always has been, central to the debate on the interpretation of quantum mechanics. Furthermore, probability permeates all of science, as well as our every day life. The papers included in this volume, written by leading proponents of the ideas expressed, embrace a broad spectrum of thought and results: mathematical, physical epistemological, and experimental, both specific and general. The contributions are arranged in parts under the following headings: Following Schroedinger's thoughts; Probability and quantum mechanics; Aspects of the arguments on nonlocality; Bell's theorem and EPR correlations; Real or Gedanken experiments and their interpretation; Questions about irreversibility and stochasticity; and Epistemology, interpretation and culture. (author). refs.; figs.; tabs

  8. Hath1 inhibits proliferation of colon cancer cells probably through up-regulating expression of Muc2 and p27 and down-regulating expression of cyclin D1.

    Science.gov (United States)

    Zhu, Dai-Hua; Niu, Bai-Lin; Du, Hui-Min; Ren, Ke; Sun, Jian-Ming; Gong, Jian-Ping

    2012-01-01

    Previous studies showed that Math1, homologous to human Hath1, can cause mouse goblet cells to differentiate. In this context it is important that the majority of colon cancers have few goblet cells. In the present study, the potential role of Hath1 in colon carcinogenesis was investigated. Sections of paraffin-embedded tissues were used to investigate the goblet cell population of normal colon mucosa, mucosa adjacent to colon cancer and colon cancer samples from 48 patients. Hath1 and Muc2 expression in these samples was tested by immunohistochemistry, quantitative real-time reverse transcription-PCR and Western blotting. After the recombinant plasmid pcDNA3.1(+)-Hath1 had been transfected into HT29 colon cancer cells, three clones were selected randomly to test the levels of Hath1 mRNA, Muc2 mRNA, Hath1, Muc2, cyclin D1 and p27 by quantitative real-time reverse transcription-PCR and Western blotting. Moreover, the proliferative ability of HT29 cells transfected with Hath1 was assessed by means of colony formation assays and xenografting. Expression of Hath1, Muc2, cyclin D1 and p27 in the xenograft tumors was also detected by Western blotting. No goblet cells were found in colon cancer, and the levels of Hath1 mRNA, Hath1, Muc2 mRNA and Muc2 were significantly down-regulated. Hath1 could decrease cyclin D1, increase p27 and Muc2 in HT29 cells and inhibit their proliferation. Hath1 may be an anti-oncogene in colon carcinogenesis.

  9. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

  10. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  11. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  12. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

  13. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  14. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  15. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  16. Quantum computing and probability.

    Science.gov (United States)

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  17. Quantum computing and probability

    International Nuclear Information System (INIS)

    Ferry, David K

    2009-01-01

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction. (viewpoint)

  18. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
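    The construction above can be approximated numerically by calibrating a pointwise level with Monte Carlo simulation so that the envelope around the standardized order statistics has simultaneous coverage 1-α under normality. The sketch below is an illustrative implementation of that idea, not the authors' exact intervals; the sample size, α, simulation count and bisection search are assumptions.

```python
# Hedged sketch: simultaneous 1-alpha envelopes for a normal probability plot,
# built by Monte Carlo calibration of a pointwise level.
import numpy as np

def simultaneous_envelope(n, alpha=0.05, n_sim=20000, seed=0):
    rng = np.random.default_rng(seed)
    # Order statistics of standardized N(0,1) samples of size n.
    sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)
    sims = (sims - sims.mean(axis=1, keepdims=True)) / sims.std(axis=1, ddof=1, keepdims=True)

    def coverage(beta):
        lo = np.quantile(sims, beta / 2, axis=0)
        hi = np.quantile(sims, 1 - beta / 2, axis=0)
        inside = np.all((sims >= lo) & (sims <= hi), axis=1)
        return inside.mean(), lo, hi

    # Shrink the pointwise tail mass beta until joint coverage reaches 1 - alpha.
    lo_b, hi_b = 1e-5, alpha
    for _ in range(40):                      # bisection on beta
        mid = 0.5 * (lo_b + hi_b)
        if coverage(mid)[0] < 1 - alpha:
            hi_b = mid                       # envelope too narrow -> smaller beta
        else:
            lo_b = mid
    return coverage(lo_b)[1:]                # (lower, upper) envelope per order statistic

lower, upper = simultaneous_envelope(n=30)
# A sample "passes" the graphical test if all its standardized order statistics
# fall inside [lower, upper]; by construction this happens with probability
# close to 1 - alpha when the population really is normal.
```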

  19. Studies on the radioactive contamination due to nuclear detonations III. On the method of estimating the probable time of nuclear detonation from the measurements of gross-activity

    Energy Technology Data Exchange (ETDEWEB)

    Nishiwaki, Yasushi [Nuclear Reactor Laboratory, Tokyo Institute of Technology, Tokyo (Japan); Nuclear Reactor Laboratory, Kinki University, Fuse City, Osaka Prefecture (Japan)

    1961-11-25

    Since it was observed in the spring of 1954 that a considerable amount of fission-product mixture fell with the rain following a large-scale nuclear detonation conducted in the Bikini area of the South Pacific by the United States Atomic Energy Commission, it has become important, especially from the health physics standpoint, to estimate the effective average age of the fission-product mixture after the nuclear detonation. If the energy transferred to the atmospheric air at the time of the nuclear detonation is large enough (of the order of a megaton at a distance of about 4000 km), the probable time and test site of the nuclear detonation may be estimated with considerable accuracy from the records of the pressure wave caused by the detonation in the microbarographs at different meteorological stations. Even in this case, in order to estimate the possible correlation between the artificial radioactivity observed in the rain and the probable detonation, it is often desirable to estimate the effective age of the fission-product mixture in the rain from the decay measurement of the radioactivity.
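    As an illustration of the final point, the effective age of a fission-product mixture can be estimated from the gross-activity decay curve. The sketch below assumes the classical Way-Wigner approximation A(t) ∝ (t - t0)^(-1.2) and uses made-up count rates; the paper's actual procedure may differ.

```python
# Hedged sketch: estimating the probable detonation time t0 from gross-activity
# decay measurements, assuming A(t) ~ C * (t - t0)**(-1.2). Illustrative only;
# the measurement times and count rates below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def gross_activity(t, C, t0):
    return C * (t - t0) ** (-1.2)

# Hypothetical measurement times (hours on some laboratory clock) and count rates.
t_meas = np.array([50.0, 60.0, 75.0, 90.0, 110.0])
a_meas = np.array([820.0, 660.0, 500.0, 405.0, 320.0])

# Fit C and t0; t0 must precede the first measurement.
(C_hat, t0_hat), _ = curve_fit(gross_activity, t_meas, a_meas,
                               p0=[1e4, 0.0],
                               bounds=([0.0, -np.inf], [np.inf, t_meas.min() - 1e-6]))
print(f"estimated detonation time t0 = {t0_hat:.1f} h; "
      f"effective age at first sample = {t_meas[0] - t0_hat:.1f} h")
```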

  20. Studies on the radioactive contamination due to nuclear detonations III. On the method of estimating the probable time of nuclear detonation from the measurements of gross-activity

    International Nuclear Information System (INIS)

    Nishiwaki, Yasushi

    1961-01-01

    Since it was observed in the spring of 1954 that a considerable amount of fission-product mixture fell with the rain following a large-scale nuclear detonation conducted in the Bikini area of the South Pacific by the United States Atomic Energy Commission, it has become important, especially from the health physics standpoint, to estimate the effective average age of the fission-product mixture after the nuclear detonation. If the energy transferred to the atmospheric air at the time of the nuclear detonation is large enough (of the order of a megaton at a distance of about 4000 km), the probable time and test site of the nuclear detonation may be estimated with considerable accuracy from the records of the pressure wave caused by the detonation in the microbarographs at different meteorological stations. Even in this case, in order to estimate the possible correlation between the artificial radioactivity observed in the rain and the probable detonation, it is often desirable to estimate the effective age of the fission-product mixture in the rain from the decay measurement of the radioactivity.

  1. Multicritical phase diagrams of the ferromagnetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: The cluster variation method and the path probability method with the point distribution

    Energy Technology Data Exchange (ETDEWEB)

    Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)], E-mail: keskin@erciyes.edu.tr; Canko, Osman [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)

    2008-01-15

    We study the thermal variations of the ferromagnetic spin-3/2 Blume-Emery-Griffiths (BEG) model with repulsive biquadratic coupling by using the lowest approximation of the cluster variation method (LACVM) in the absence and presence of the external magnetic field. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. The classification of the stable, metastable and unstable states is made by comparing the free energy values of these states. We also study the dynamics of the model by using the path probability method (PPM) with the point distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagrams in addition to the equilibrium phase diagrams in the (kT/J, K/J) and (kT/J, D/J) planes. It is found that the metastable phase diagrams always exist at low temperatures, which is consistent with experimental and theoretical works.

  2. Multicritical phase diagrams of the ferromagnetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: The cluster variation method and the path probability method with the point distribution

    International Nuclear Information System (INIS)

    Keskin, Mustafa; Canko, Osman

    2008-01-01

    We study the thermal variations of the ferromagnetic spin-3/2 Blume-Emery-Griffiths (BEG) model with repulsive biquadratic coupling by using the lowest approximation of the cluster variation method (LACVM) in the absence and presence of the external magnetic field. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. The classification of the stable, metastable and unstable states is made by comparing the free energy values of these states. We also study the dynamics of the model by using the path probability method (PPM) with the point distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagrams in addition to the equilibrium phase diagrams in the (kT/J, K/J) and (kT/J, D/J) planes. It is found that the metastable phase diagrams always exist at low temperatures, which is consistent with experimental and theoretical works.

  3. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  4. Is probability of frequency too narrow?

    International Nuclear Information System (INIS)

    Martz, H.F.

    1993-01-01

    Modern methods of statistical data analysis, such as empirical and hierarchical Bayesian methods, should find increasing use in future Probabilistic Risk Assessment (PRA) applications. In addition, there will be a more formalized use of expert judgment in future PRAs. These methods require an extension of the probabilistic framework of PRA, in particular, the popular notion of probability of frequency, to consideration of frequency of frequency, frequency of probability, and probability of probability. The genesis, interpretation, and examples of these three extended notions are discussed

  5. Irreversibility and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

    The mathematical entropy - unlike physical entropy - is simply a measure of uniformity for probability distributions in general. So understood, conditional entropies have the same logical structure as conditional probabilities. If, as is sometimes supposed, conditional probabilities are time-reversible, then so are conditional entropies and, paradoxically, both then share this symmetry with physical equations of motion. The paradox is, of course, that probabilities yield a direction to time both in statistical mechanics and quantum mechanics, while the equations of motion do not. The supposed time-reversibility of both conditionals seems also to involve a form of retrocausality that is related to, but possibly not the same as, that described by Costa de Beauregard. The retrocausality is paradoxically at odds with the generally presumed irreversibility of the quantum mechanical measurement process. Further paradox emerges if the supposed time-reversibility of the conditionals is linked with the idea that the thermodynamic entropy is the same thing as 'missing information', since this confounds the thermodynamic and mathematical entropies. However, it is shown that irreversibility is a formal consequence of conditional entropies and, hence, of conditional probabilities also. 8 refs. (Author)

  6. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

    The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

  7. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  8. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...

  9. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability provided the same data used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  10. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments. Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables. The Law of Large Numbers; Conditional Expectation; Generating Functions. Branching Processes. Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples. Probability Distributions of Markov Chains; The First Step Analysis. Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case. Simulation; Distribution F...

  11. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and the crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

  12. Panel presentation: Should some type of incentive regulation replace traditional methods for LDC's?

    International Nuclear Information System (INIS)

    Richard, O.G.

    1992-01-01

    This paper discusses the problems with existing fixed-rate price regulation and argues that deregulation of both pipeline and gas utility companies is needed to enhance competition. The paper suggests alternatives to traditional regulation, including a financial incentive package that allows or encourages utilities to invest in more efficient energy management, to improve load factors by balancing the energy demands of industrial and residential users, and to reward purchases of gas supplies that outperform an agreed-upon cost index. Other incentive programs are proposed by the author, with a relatively detailed discussion of each topic

  13. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

  14. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

  15. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  16. Research on evaluation methods for water regulation ability of dams in the Huai River Basin

    Science.gov (United States)

    Shan, G. H.; Lv, S. F.; Ma, K.

    2016-08-01

    Water environment protection is a global and urgent problem that requires correct and precise evaluation. Evaluation methods have been studied for many years; however, there is little research on methods for assessing the water regulation ability of dams. Evaluating this ability has become a practical and significant research direction because of the global water crisis, and the lack of effective ways to manage a dam's regulation ability has only compounded the problem. This paper first constructs seven evaluation factors and then develops two evaluation approaches to implement the factors according to the features of the problem. Dams of the Yin Shang ecological control section in the Huai He River basin are selected as an example to demonstrate the method. The results show that the evaluation approaches can produce better and more practical suggestions for dam managers.

  17. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are the targets of detection for these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
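    A minimal sketch of the binomial reasoning behind the point estimate method is given below: it computes the probability of passing an n-trial demonstration with at most a fixed number of misses, as a function of the true POD. The 29-trial, zero-miss criterion shown is the common "90/95" convention and is an illustrative assumption, not necessarily NASA's exact acceptance rule.

```python
# Hedged sketch: probability of passing a binomial POD point-estimate demonstration.
from scipy.stats import binom

def prob_pass_demo(true_pod, n=29, max_misses=0):
    """P(pass) = P(at least n - max_misses hits | POD = true_pod)."""
    return binom.sf(n - max_misses - 1, n, true_pod)   # P(hits >= n - max_misses)

for pod in (0.80, 0.90, 0.95, 0.99):
    print(f"true POD {pod:.2f}: P(pass 29/29 demo) = {prob_pass_demo(pod):.3f}")
# At a true POD of 0.90 the pass probability is about 0.047, which is why 29
# straight hits is taken to demonstrate 90% POD at 95% confidence.
```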

  18. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  19. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

  20. Swedish earthquakes and acceleration probabilities

    International Nuclear Information System (INIS)

    Slunga, R.

    1979-03-01

    A method to assign probabilities to ground accelerations for Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave propagation computations, the relation between the seismic strength of the earthquake, focal depth, distance and ground acceleration is calculated. We found that the largest earthquake of the area, the 1904 earthquake 100 km south of Oslo, is an exception and probably had a focal depth exceeding 25 km. For the nuclear power plant sites an annual probability of 10^-5 has been proposed as interesting. This probability gives ground accelerations in the range 5-20 % g for the sites. This acceleration is for a free bedrock site. For consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we get the epicentral acceleration of this earthquake to be 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with the existing distant instrumental data. (author)

  1. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", shares the properties of the statistical learning machine from which it is derived. PMID:24581306
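    A minimal sketch of a probability machine in this spirit is shown below: a random forest estimates conditional probabilities, and a counterfactual risk difference is computed for a binary predictor. The simulated data, forest settings and effect-size definition are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch of a "probability machine" on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 3000
exposure = rng.integers(0, 2, n)             # binary predictor of interest
age = rng.normal(50, 10, n)                  # continuous covariate
logit = -3.0 + 0.8 * exposure + 0.04 * age
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([exposure, age])
forest = RandomForestClassifier(n_estimators=300, min_samples_leaf=20, random_state=0)
forest.fit(X, y)

# Conditional probability estimates P(Y = 1 | X).
p_hat = forest.predict_proba(X)[:, 1]

# Counterfactual risk difference: set the exposure to 1 vs 0 for everyone.
X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0
risk_diff = forest.predict_proba(X1)[:, 1].mean() - forest.predict_proba(X0)[:, 1].mean()
print(f"estimated average risk difference for the exposure: {risk_diff:.3f}")
```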

  2. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  3. A method for regulating strong nonlinear vibration energy of the flexible arm

    Directory of Open Access Journals (Sweden)

    Yushu Bian

    2015-07-01

    Full Text Available For an oscillating system, large amplitude indicates strong vibration energy. In this article, modal interaction is used as a useful means to regulate strong nonlinear vibration energy of the flexible arm undergoing rigid motion. A method is put forward to migrate and dissipate vibration energy based on modal interaction. By means of multiple-scale perturbation analysis, it is proven that internal resonance can be successfully established between modes of the flexible arm and the vibration absorber. Through examples and analyses, it is verified that this control method is effective in regulating strong vibration energy and can be used to suppress strong nonlinear vibration of the flexible arm undergoing rigid motion.

  4. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    Science.gov (United States)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  5. A probability space for quantum models

    Science.gov (United States)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
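    A small sketch of the maximum entropy assignment is given below for a discrete set of outcomes with a single mean constraint; the outcome values and target mean are assumptions. The maximizer takes the exponential (Boltzmann-like) form, with the Lagrange multiplier fixed by the constraint.

```python
# Hedged sketch: maximum-entropy probabilities p_i ∝ exp(-beta * x_i) subject to
# a fixed mean; the values x and the target mean below are illustrative.
import numpy as np
from scipy.optimize import brentq

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # outcome values (e.g. energy levels)
target_mean = 1.2                          # constraint: sum_i p_i * x_i = 1.2

def mean_at(beta):
    w = np.exp(-beta * x)
    p = w / w.sum()
    return p @ x

beta = brentq(lambda b: mean_at(b) - target_mean, -50.0, 50.0)
p = np.exp(-beta * x)
p /= p.sum()
print("beta =", round(beta, 4), "probabilities =", np.round(p, 4))
# beta = 0 recovers the uniform distribution (mean 2.0); a smaller target mean
# pushes probability toward the low outcomes, as in a Maxwell-Boltzmann weighting.
```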

  6. Numerical method for the solution of the regulator equation with application to nonlinear tracking

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav; Čelikovský, Sergej

    2008-01-01

    Vol. 44, No. 5 (2008), pp. 1358-1365 ISSN 0005-1098 R&D Projects: GA ČR GP102/07/P413; GA ČR(CZ) GA102/08/0186 Institutional research plan: CEZ:AV0Z10750506 Keywords: nonlinear output regulation * finite-element method * optimization Subject RIV: BC - Control Systems Theory Impact factor: 3.178, year: 2008

  7. Evaluation 2000 and regulation and method. Release monitoring and environmental surveillance around Cea centers

    International Nuclear Information System (INIS)

    2001-06-01

    This publication reports, for the year 2000, the evaluation of liquid and gaseous radioactive effluent releases and the radioactivity levels measured in the vicinity of CEA centers through surveillance of air, water, vegetation and milk. An analysis of the results from 1996 to 2000 allows their evolution to be followed. A second booklet details the sampling and measurement methods applied to effluents and the environment, and also presents the regulations applied to effluent monitoring. (N.C.)

  8. Waste Package Misload Probability

    International Nuclear Information System (INIS)

    Knudsen, J.K.

    2001-01-01

    The objective of this calculation is to determine the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in each event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.

  9. Probability theory and applications

    CERN Document Server

    Hsu, Elton P

    1999-01-01

    This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.

  10. Transcriptomic SNP discovery for custom genotyping arrays: impacts of sequence data, SNP calling method and genotyping technology on the probability of validation success.

    Science.gov (United States)

    Humble, Emily; Thorne, Michael A S; Forcada, Jaume; Hoffman, Joseph I

    2016-08-26

    Single nucleotide polymorphism (SNP) discovery is an important goal of many studies. However, the number of 'putative' SNPs discovered from a sequence resource may not provide a reliable indication of the number that will successfully validate with a given genotyping technology. For this it may be necessary to account for factors such as the method used for SNP discovery and the type of sequence data from which it originates, suitability of the SNP flanking sequences for probe design, and genomic context. To explore the relative importance of these and other factors, we used Illumina sequencing to augment an existing Roche 454 transcriptome assembly for the Antarctic fur seal (Arctocephalus gazella). We then mapped the raw Illumina reads to the new hybrid transcriptome using BWA and BOWTIE2 before calling SNPs with GATK. The resulting markers were pooled with two existing sets of SNPs called from the original 454 assembly using NEWBLER and SWAP454. Finally, we explored the extent to which SNPs discovered using these four methods overlapped and predicted the corresponding validation outcomes for both Illumina Infinium iSelect HD and Affymetrix Axiom arrays. Collating markers across all discovery methods resulted in a global list of 34,718 SNPs. However, concordance between the methods was surprisingly poor, with only 51.0 % of SNPs being discovered by more than one method and 13.5 % being called from both the 454 and Illumina datasets. Using a predictive modeling approach, we could also show that SNPs called from the Illumina data were on average more likely to successfully validate, as were SNPs called by more than one method. Above and beyond this pattern, predicted validation outcomes were also consistently better for Affymetrix Axiom arrays. Our results suggest that focusing on SNPs called by more than one method could potentially improve validation outcomes. They also highlight possible differences between alternative genotyping technologies that could be
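    The kind of predictive modelling referred to above can be sketched as a logistic model of validation success given discovery features. Everything below (feature names, simulated data, coefficients) is hypothetical and intended only to illustrate the idea; the authors' actual predictors and model may differ.

```python
# Hedged sketch: logistic model of SNP validation success from discovery features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
n_methods = rng.integers(1, 5, n)              # how many callers found the SNP (hypothetical)
illumina_support = rng.integers(0, 2, n)       # called from the Illumina reads? (hypothetical)
flank_gc = rng.uniform(0.3, 0.6, n)            # flanking-sequence GC content (hypothetical)

logit = -1.0 + 0.5 * n_methods + 0.8 * illumina_support - 1.0 * flank_gc
validated = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([n_methods, illumina_support, flank_gc])
model = LogisticRegression().fit(X, validated)
print("coefficients:", np.round(model.coef_[0], 2))
print("predicted validation prob. for a SNP called by 3 methods incl. Illumina:",
      round(model.predict_proba([[3, 1, 0.45]])[0, 1], 3))
```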

  11. Model uncertainty and probability

    International Nuclear Information System (INIS)

    Parry, G.W.

    1994-01-01

    This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example

  12. Retrocausality and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

    Costa de Beauregard has proposed that physical causality be identified with conditional probability. The proposal is shown to be vulnerable on two accounts. The first, though mathematically trivial, seems to be decisive so far as the current formulation of the proposal is concerned. The second lies in a physical inconsistency which seems to have its source in a Copenhagen-like disavowal of realism in quantum mechanics. 6 refs. (Author)

  13. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  14. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
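    The pinching idea can be illustrated with a deliberately crude stand-in for a p-box: each input is an interval, the output range is computed exactly for a monotone model, and each input in turn is pinched to a point value to see how much the output range shrinks. The model and numbers below are assumptions for illustration only, not a full probability bounds analysis.

```python
# Hedged sketch of a pinching-style sensitivity analysis with interval inputs.
from itertools import product

inputs = {"a": (2.0, 4.0), "b": (1.0, 3.0), "c": (-1.0, 1.0)}
model = lambda a, b, c: a * b + c            # toy model, monotone in each input

def output_range(bounds):
    vals = [model(*corner) for corner in product(*bounds.values())]
    return min(vals), max(vals)              # exact for this monotone model

base_lo, base_hi = output_range(inputs)
base_width = base_hi - base_lo

for name in inputs:
    pinched = dict(inputs)
    mid = sum(inputs[name]) / 2.0
    pinched[name] = (mid, mid)               # pinch this input to a precise value
    lo, hi = output_range(pinched)
    reduction = 100 * (1 - (hi - lo) / base_width)
    print(f"pinching {name}: output width reduced by {reduction:.0f}%")
```

The input whose pinching reduces the output width the most is, in this simple sense, the one most worth learning more precisely.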

  15. Regulation

    International Nuclear Information System (INIS)

    Ballereau, P.

    1999-01-01

    The different regulations relating to nuclear energy since 1 January 1999 are given here. Two points deserve notice. The first is the decree of 3 August 1999 authorizing the national agency for radioactive waste management to install and operate, in the commune of Bures (Meuse), an underground laboratory intended for studying the deep geological formations in which radioactive waste could be stored. The second point concerns uranium residues and the notion of waste. The judgment of the administrative tribunal of Limoges (9 July 1998), which forbade the operation of a storage installation for depleted uranium considered as final waste and qualified it as an industrial waste storage facility, has been annulled by the Court of Appeal. The Court held that, under law number 75663 of 15 July 1965, none of the following criteria can be applied to depleted uranium: production residue (with the possibility of later enrichment), abandonment of personal property, or the simple intention to abandon it (a future use being stated in the authorization request filed with the Prefecture). This judgment establishes the primacy of the notion of waste over that of final waste. (N.C.)

  16. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  17. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre; C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability, as has been observed.

  18. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
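    A minimal sketch of such a Bayesian-optimization loop is given below for a toy one-dimensional "posterior": a Gaussian process surrogate is refit after each evaluation and the next point is chosen by an upper-confidence-bound acquisition. The kernel, acquisition rule and all settings are illustrative assumptions rather than the authors' configuration.

```python
# Hedged sketch: Bayesian optimization of a toy log-posterior on [0, 5].
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def log_posterior(x):                          # stand-in for an expensive distribution
    return -(x - 3.3) ** 2 + 0.5 * np.sin(5 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0, 5, 501).reshape(-1, 1)
X = rng.uniform(0, 5, 3).reshape(-1, 1)        # a few initial evaluations
y = log_posterior(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    acq = mu + 2.0 * sd                        # upper confidence bound acquisition
    x_next = grid[np.argmax(acq)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, log_posterior(x_next).ravel())

print("best maximizer found:", round(X[np.argmax(y), 0], 3), "value:", round(y.max(), 3))
```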

  19. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
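    For concreteness, the sketch below computes the rank product statistic for simulated replicated comparisons and estimates tail probabilities with the permutation approach that the exact derivation improves upon; the exact enumeration itself is not reproduced here, and the data are simulated.

```python
# Hedged sketch: rank product statistics with permutation tail probabilities.
import numpy as np

rng = np.random.default_rng(0)
g, k = 200, 4                                # genes and replicated comparisons
expr_change = rng.normal(0, 1, (g, k))
expr_change[:5] += 2.0                       # five genes truly up-regulated

def rank_product(data):
    # Rank within each replicate (1 = strongest up-regulation), then multiply.
    ranks = data.shape[0] - np.argsort(np.argsort(data, axis=0), axis=0)
    return ranks.prod(axis=1).astype(float)

rp_obs = rank_product(expr_change)

# Permutation null: shuffle each replicate's values independently.
n_perm = 2000
null = np.array([rank_product(rng.permuted(expr_change, axis=0))
                 for _ in range(n_perm)]).ravel()
p_values = np.array([(null <= rp).mean() for rp in rp_obs])
print("smallest rank products:", np.sort(rp_obs)[:5])
print("their estimated tail probabilities:", np.sort(p_values)[:5])
```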

  20. Probability mapping of contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Rautman, C.A.; Kaplan, P.G. [Sandia National Labs., Albuquerque, NM (United States); McGraw, M.A. [Univ. of California, Berkeley, CA (United States); Istok, J.D. [Oregon State Univ., Corvallis, OR (United States); Sigda, J.M. [New Mexico Inst. of Mining and Technology, Socorro, NM (United States)

    1994-04-01

    Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds).
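    The post-processing step can be sketched as follows: given a stack of equally likely simulated contamination maps, the exceedance probability for each parcel is simply the fraction of realizations above the clean-up threshold. The random stand-in simulations, threshold and decision rule below are assumptions for illustration, not realizations conditioned on the Fernald data.

```python
# Hedged sketch: probability-of-exceedance map from a stack of simulations.
import numpy as np

rng = np.random.default_rng(0)
n_real, ny, nx = 200, 50, 50                 # realizations and grid size
threshold = 35.0                             # hypothetical clean-up level (ppm)

# Stand-in for conditional geostatistical simulations (e.g. sequential Gaussian).
sims = rng.lognormal(mean=3.0, sigma=0.6, size=(n_real, ny, nx))

# Probability map: fraction of realizations exceeding the threshold in each cell.
p_exceed = (sims > threshold).mean(axis=0)

# Flag parcels whose exceedance probability is above 0.5 (the decision rule is
# an assumption; a cost-based rule could be used instead).
flagged = p_exceed > 0.5
print(f"fraction of parcels flagged for remediation: {flagged.mean():.2%}")
```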

  1. Probability mapping of contaminants

    International Nuclear Information System (INIS)

    Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.

    1994-01-01

    Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds)

  2. Probability of Grounding and Collision Events

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    1996-01-01

    To quantify the risks involved in ship traffic, rational criteria for collision and grounding accidents are developed. This implies that probabilities as well as inherent consequences can be analysed and assessed. The present paper outlines a method for evaluation of the probability of ship...

  3. Probability of Grounding and Collision Events

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    1996-01-01

    To quantify the risks involved in ship traffic, rational criteria for collision and grounding accidents have to be developed. This implies that probabilities as well as inherent consequences have to be analyzed and assessed.The present notes outline a method for evaluation of the probability...

  4. Probability Issues in without Replacement Sampling

    Science.gov (United States)

    Joarder, A. H.; Al-Sabah, W. S.

    2007-01-01

    Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…
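    A standard worked example for this setting (with illustrative numbers) is sketched below: the probability of drawing exactly two red balls in three draws without replacement from an urn of five red and seven blue balls, computed both by a counting argument and from the hypergeometric distribution.

```python
# Hedged sketch: one without-replacement probability computed two equivalent ways.
from math import comb
from scipy.stats import hypergeom

# Counting argument: C(5,2) * C(7,1) / C(12,3)
p_count = comb(5, 2) * comb(7, 1) / comb(12, 3)

# Same answer from the hypergeometric pmf: population 12, 5 "successes", 3 draws.
p_hyper = hypergeom.pmf(2, 12, 5, 3)

print(round(p_count, 4), round(p_hyper, 4))   # both ~0.3182
```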

  5. The probability of the false vacuum decay

    International Nuclear Information System (INIS)

    Kiselev, V.; Selivanov, K.

    1983-01-01

    The closed expression for the probability of the false vacuum decay in (1+1) dimensions is given. The probability of false vacuum decay is expressed as the product of an exponential quasiclassical factor and a functional determinant of the given form. The method for calculation of this determinant is developed and a complete answer for (1+1) dimensions is given.

  6. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable speed pumps. • A new optimization model was developed based on genetic algorithm. • Two scenarios of a multi-source looped system was illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on the conventional central circulating pump, a district heating system with distributed variable-speed-pumps configuration can often save 30–50% power consumption on circulating pumps with frequency inverters. However, the hydraulic regulations on distributed variable-speed-pumps configuration could be more complicated than ever while all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped structure heating network where the distributed pumps have strongly coupled and severe non-linear hydraulic connections with each other, it would be rather difficult to maintain the hydraulic balance during the regulations. In this paper, with the help of the advanced automation and information technologies, a new hydraulic regulation method was proposed to achieve on-site hydraulic balance for the district heating systems with distributed variable-speed-pumps configuration. The proposed method was comprised of a new hydraulic model, which was developed to adapt the distributed variable-speed-pumps configuration, and a calibration model with genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetic district heating system with 2 heat sources and 10 substations was taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the

  7. Probability of causation approach

    International Nuclear Information System (INIS)

    Jose, D.E.

    1988-01-01

    Probability of causation (PC) is sometimes viewed as a great improvement by those persons who are not happy with the present rulings of courts in radiation cases. The author does not share that hope and expects that PC will not play a significant role in these issues for at least the next decade. If it is ever adopted in a legislative compensation scheme, it will be used in a way that is unlikely to please most scientists. Consequently, PC is a false hope for radiation scientists, and its best contribution may well lie in some of the spin-off effects, such as an influence on medical practice

  8. Generalized Probability Functions

    Directory of Open Access Journals (Sweden)

    Alexandre Souto Martinez

    2009-01-01

    Full Text Available From the integration of nonsymmetrical hyperboles, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by the mathematical curiosity, we show that these generalized functions are suitable to generalize some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn the attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, the generalized Gaussian and Laplace pdf. Their cumulative functions and moments were also obtained analytically.

  9. Probability in High Dimension

    Science.gov (United States)

    2014-06-30

    …precisely the content of the following result. The price we pay is that the assumption that A is a packing in (F, ‖·‖₁) is too weak to make this happen…

  10. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for the estimation of the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Instantaneous uncertainty estimates are propagated into the measured statistics, allowing accurate uncertainty quantification of the time mean and higher statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulence statistics such as u'u'-bar. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time mean, the variance and the covariance can be found. Applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of the systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
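
    A minimal Monte-Carlo sketch of the propagation idea described above (not the authors' PIV analysis): a fixed 'true' series is re-measured many times with an invented time-varying bias and random noise, and the spread of the resulting sample means and variances estimates their uncertainties; it also shows the random error inflating the measured variance on average. All magnitudes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" instantaneous signal (e.g., a velocity time series)
n = 2000
true = 5.0 + 1.0 * rng.standard_normal(n)                # true variance ~ 1.0

def one_measurement():
    """Add a time-varying systematic error and a random error to each sample."""
    bias = 0.2 * np.sin(np.linspace(0, 4 * np.pi, n))    # nonlinear, time-varying bias
    noise = 0.3 * rng.standard_normal(n)                 # random error, sigma = 0.3
    return true + bias + noise

# Monte Carlo: repeat the 'experiment' many times and look at the spread of the
# computed statistics; the random error elevates the measured variance on average.
means, variances = [], []
for _ in range(5000):
    x = one_measurement()
    means.append(x.mean())
    variances.append(x.var(ddof=1))

print("uncertainty of the mean    :", np.std(means))
print("mean measured variance     :", np.mean(variances), "(true ~ 1.0)")
print("uncertainty of the variance:", np.std(variances))
```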

  11. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements the method. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability...
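
    A minimal Python sketch of the KHB idea (not the user-written khb program, which is a Stata command): the mediator Z is replaced by its residual from a regression on X, so the reduced and full logit models share the same latent scale and their X coefficients become comparable. The data and coefficients below are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Synthetic data: Z partially mediates the effect of X on a binary outcome Y.
n = 20000
x = rng.standard_normal(n)
z = 0.8 * x + rng.standard_normal(n)                              # mediator
y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * x + 0.7 * z)))).astype(float)

# Full model: logit(Y ~ X + Z).
full = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)

# KHB-style reduced model: replace Z by its residual from a regression on X,
# so the reduced model lives on the same latent scale as the full model.
z_resid = z - sm.OLS(z, sm.add_constant(x)).fit().predict()
reduced = sm.Logit(y, sm.add_constant(np.column_stack([x, z_resid]))).fit(disp=0)

b_reduced, b_full = reduced.params[1], full.params[1]
print("total effect of X (same scale):", round(b_reduced, 3))
print("direct effect of X            :", round(b_full, 3))
print("share mediated through Z      :", round(1 - b_full / b_reduced, 3))
```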

  12. PRESBYOPIA OPTOMETRY METHOD BASED ON DIOPTER REGULATION AND CHARGE COUPLE DEVICE IMAGING TECHNOLOGY.

    Science.gov (United States)

    Zhao, Q; Wu, X X; Zhou, J; Wang, X; Liu, R F; Gao, J

    2015-01-01

    With the development of photoelectric technology and single-chip microcomputer technology, objective optometry, also known as automatic optometry, is becoming increasingly precise. This paper proposed a presbyopia optometry method based on diopter regulation and Charge-Coupled Device (CCD) imaging technology and, at the same time, designed the light path of the measuring system. The method projects a test figure onto the eye ground, and the image reflected from the eye ground is detected by the CCD. The image is then automatically identified by computer, and the far-point and near-point diopters are determined to calculate the lens parameters. This is a fully automatic objective optometry method which eliminates subjective factors of the tested subject. Furthermore, it can acquire the lens parameters of presbyopia accurately and quickly and can also be used to measure the lens parameters of hyperopia, myopia and astigmatism.

  13. Method of Measuring the Mismatch of Parasitic Capacitance in MEMS Accelerometer Based on Regulating Electrostatic Stiffness

    Directory of Open Access Journals (Sweden)

    Xianshan Dong

    2018-03-01

    Full Text Available For the MEMS capacitive accelerometer, parasitic capacitance is a serious problem. Its mismatch will deteriorate the performance of the accelerometer. Obtaining the mismatch of the parasitic capacitance precisely is helpful for improving the bias and scale performance. Currently, the method of measuring the mismatch is limited to direct measurement with an instrument. This traditional method has low accuracy because it introduces extra parasitic capacitance and has other problems. This paper presents a novel method based on the mechanism of a closed-loop accelerometer. The strongly linear relationship between the electrostatic force output and the square of the pre-load voltage is obtained through theoretical derivation and validated by experiment. Based on this relationship, the mismatch of the parasitic capacitance can be obtained precisely by regulating the electrostatic stiffness, without other equipment. The results can be applied in designs that decrease the mismatch and in electrical adjustment that eliminates the influence of the mismatch.

  14. Probability, statistics, and computational science.

    Science.gov (United States)

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.

  15. Automated Search Method for Statistical Test Probability Distribution Generation

    Institute of Scientific and Technical Information of China (English)

    周晓莹; 高建华

    2013-01-01

    A strategy based on automated search for probability distribution construction is proposed, which comprises the design of a representation format and an evaluation function for the probability distribution. Combined with a simulated annealing algorithm, an indicator is defined to formalize the automated search process based on the Markov model. Experimental results show that the method effectively improves the accuracy of the automated search and reduces the expense of statistical testing by providing fairly efficient test data, since it successfully finds a near-optimal probability distribution within a given time.
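
    The sketch below illustrates the general idea under stated assumptions: simulated annealing over a discrete input distribution, scored by an invented expected-coverage objective. It is not the representation format, evaluation function or Markov model used in the paper; every number is made up.

```python
import numpy as np

rng = np.random.default_rng(2)

K, B, N_TESTS = 6, 12, 50               # input classes, branches, tests per suite
# Toy testing model (all numbers invented): probability that one test drawn from
# input class i exercises branch j.
HIT = rng.uniform(0.02, 0.6, (K, B))

def expected_coverage(p):
    q = p @ HIT                          # per-test hit probability of each branch
    return np.sum(1.0 - (1.0 - q) ** N_TESTS)

def perturb(p, scale=0.1):
    """Random neighbour on the probability simplex."""
    q = np.clip(p + rng.normal(0.0, scale, p.size), 1e-6, None)
    return q / q.sum()

def anneal(steps=20000, t0=1.0, t1=1e-3):
    p = np.full(K, 1.0 / K)
    f = best_f = expected_coverage(p)
    best_p = p
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)          # geometric cooling schedule
        cand = perturb(p)
        fc = expected_coverage(cand)
        if fc > f or rng.random() < np.exp((fc - f) / t):   # Metropolis acceptance
            p, f = cand, fc
            if f > best_f:
                best_p, best_f = p, f
    return best_p, best_f

p_opt, cov = anneal()
print("near-optimal class probabilities:", np.round(p_opt, 3))
print("expected branches covered       :", round(cov, 2), "of", B)
```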

  16. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  17. The probability factor in establishing causation

    International Nuclear Information System (INIS)

    Hebert, J.

    1988-01-01

    This paper discusses the possibilities and limitations of methods using the probability factor in establishing the causal link between bodily injury, whether immediate or delayed, and the nuclear incident presumed to have caused it (NEA) [fr]

  18. Do natural methods for fertility regulation increase the risks of genetic errors?

    Science.gov (United States)

    Serra, A

    1981-09-01

    Genetic errors of many kinds are connected with the reproductive processes and are favored by a number of largely uncontrollable, endogenous, and/or exogenous factors. For a long time human beings have taken into their own hands the control of this process. The regulation of fertility is clearly a forceful request to any family, to any community, were it only to lower the level of the consequences of genetic errors. In connection with this request, and in the context of the Congress for the Family of Africa and Europe (Catholic University, January 1981), one question must still be raised and possibly answered. The question is: do or can the so-called "natural methods" for the regulation of fertility increase the risks of genetic errors, with their generally dramatic effects on families and on communities? It is important to try to give as far as possible a scientifically based answer to this question. Fr. Haring, a moral theologian, citing scientific evidence, finds it shocking that the rhythm method, so strongly and recently endorsed again by Church authorities, should be classified among the means of "birth control" by way of spontaneous abortion or at least by spontaneous loss of a large number of zygotes which, due to the concrete application of the rhythm method, lack the necessary vitality for survival. He goes on to state that the scientific research provides overwhelming evidence that the rhythm method in its traditional form is responsible for a disproportionate waste of zygotes, a disproportionate frequency of spontaneous abortions and of defective children. Professor Hilgers, a reproductive physiologist, takes an opposite view, maintaining that the hypotheses are arbitrary and the alarm false. The strongest evidence upon which Fr. Haring bases his moral principles about the use of the natural methods of fertility regulation is a paper by Guerrero and Rojos (1975). These authors examined, retrospectively, the success of 965 pregnancies which occurred in

  19. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions, and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate computations; this is the type of multidimensional data analysis considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions).

  20. 基于概率模型的ATC系统冲突目标生成算法%Probability-Based Method of Generating the Conflict Trajectories for ATC System

    Institute of Scientific and Technical Information of China (English)

    苏志刚; 眭聪聪; 吴仁彪

    2011-01-01

    For testing the capability of short-term conflict alerting of an air traffic control system, two methods are usually used. The first is to set a higher threshold and use real data to test whether the system can alert when the distance between two flights gets lower than the threshold; however, this method is not reliable. The second method is to simulate flights that will conflict, obtain their trajectories by calculation, and then send these data to the ATC system to see its reaction; this method is usually too simple to test whether the system can pre-detect a conflict effectively. To solve these problems, a probabilistic approach is used in this paper to simulate aircraft with a given probability of conflicting. Firstly, we derived the conflict probability of turning flights from Prandini's method of conflict probability estimation for linear flight. Then, using reverse derivation, we obtained the motion parameters of two targets whose conflict probability was preset. At last, we simulated this pair of targets' tracks and analyzed their conflict probability. The simulation results show that the targets' probability of conflict was in line with the previous assumption. The trajectories generated by this algorithm are more realistic, so a more effective assessment of the ATC system's capability of short-term conflict alerting and pre-detection can be provided.
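
    A hedged Monte-Carlo sketch of how a pair of trajectories with a chosen geometry can be scored for conflict probability (the forward problem only); the straight tracks, separation minimum and navigation-error model below are invented and much simpler than the turning-flight, reverse-derivation approach described in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

SEP_MIN = 10.0     # horizontal separation minimum, km (hypothetical)
SIGMA = 2.0        # 1-sigma per-axis track-keeping error, km (hypothetical)

def nominal_tracks(n_steps=150, dt=5.0):
    """Two straight nominal tracks that cross near (40 km, 30 km)."""
    t = np.arange(n_steps) * dt
    a = np.stack([0.12 * t, np.full_like(t, 30.0)], axis=1)          # eastbound
    b = np.stack([np.full_like(t, 40.0), 0.10 * t - 20.0], axis=1)   # northbound, delayed
    return a, b

def conflict_probability(n_samples=20000):
    a, b = nominal_tracks()
    hits = 0
    for _ in range(n_samples):
        # each aircraft carries one constant navigation offset for the whole flight
        da = a + SIGMA * rng.standard_normal(2)
        db = b + SIGMA * rng.standard_normal(2)
        hits += np.min(np.linalg.norm(da - db, axis=1)) < SEP_MIN
    return hits / n_samples

print("estimated conflict probability:", conflict_probability())
```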

  1. Follow-up: Prospective compound design using the ‘SAR Matrix’ method and matrix-derived conditional probabilities of activity [v1; ref status: indexed, http://f1000r.es/56v]

    Directory of Open Access Journals (Sweden)

    Disha Gupta-Ostermann

    2015-03-01

    Full Text Available In a previous Method Article, we have presented the ‘Structure-Activity Relationship (SAR) Matrix’ (SARM) approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a “chemical space envelope” around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach.

  2. Follow-up: Prospective compound design using the ‘SAR Matrix’ method and matrix-derived conditional probabilities of activity [v2; ref status: indexed, http://f1000r.es/59v]

    Directory of Open Access Journals (Sweden)

    Disha Gupta-Ostermann

    2015-04-01

    Full Text Available In a previous Method Article, we have presented the ‘Structure-Activity Relationship (SAR) Matrix’ (SARM) approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a “chemical space envelope” around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach.

  3. Self-regulation method: psychological, physiological and clinical considerations. An overview.

    Science.gov (United States)

    Ikemi, A; Tomita, S; Kuroda, M; Hayashida, Y; Ikemi, Y

    1986-01-01

    Body-oriented therapies such as relaxation training and certain forms of meditation are gaining popularity in the treatment and prevention of psychosomatic disorders. In this paper, a new method of self-control called the self-regulation method (SRM), derived from autogenic training and Zen meditation, is presented. First, the technique of SRM is introduced. Secondly, physiological studies on SRM using skin temperature, galvanic skin response, and cortical evoked potentials are presented. Thirdly, the results of psychological tests conducted on SRM are presented. These psycho-physiological studies suggest that SRM may elicit a state of 'relaxed alertness'. Fourthly, clinical applications of SRM are discussed, and 3 cases are presented. Finally, SRM is discussed in relation to the psychology and physiology of 'relaxed alertness'.

  4. Stress influence on autonomous regulation of heart functions, and radionuclide methods in cardiological diagnostics

    International Nuclear Information System (INIS)

    Lacko, A.; Komarek, K.

    2007-01-01

    The study deals with stress-related problems and the psychological response of the human body, such as the influence of stress on the autonomous regulation of heart activity. The study used the Stroop test to determine the stress level. After that, spectral analysis of heart rate variability was carried out to specify the impact of stress on the regulation exerted by the autonomous nervous system in relation to heart activity. The results obtained were compared with selected indicators of the psychodiagnostic method used (Stroop test). Cardiovascular diseases represent a serious problem which health care professionals are trying to resolve; nevertheless, it should also be a concern of each individual as well as of society as a whole. These diseases increasingly affect younger age categories. From the medical point of view, early diagnosis with consequent therapy should counteract this adverse trend. Diagnosis of cardiovascular diseases by nuclear medicine methods has a substantial place; these examinations represent about 40 % of the examinations performed in nuclear medicine units. This fact is reflected in the conception of nuclear medicine through the establishment of the new subdivision of 'nuclear cardiology'. (authors)

  5. Determining probabilities of geologic events and processes

    International Nuclear Information System (INIS)

    Hunter, R.L.; Mann, C.J.; Cranwell, R.M.

    1985-01-01

    The Environmental Protection Agency has recently published a probabilistic standard for releases of high-level radioactive waste from a mined geologic repository. The standard sets limits for contaminant releases with more than one chance in 100 of occurring within 10,000 years, and less strict limits for releases of lower probability. The standard offers no methods for determining probabilities of geologic events and processes, and no consensus exists in the waste-management community on how to do this. Sandia National Laboratories is developing a general method for determining probabilities of a given set of geologic events and processes. In addition, we will develop a repeatable method for dealing with events and processes whose probability cannot be determined. 22 refs., 4 figs

  6. Statistical Methods for Solar Flare Probability Forecasting.

    Science.gov (United States)

    1980-09-01


  7. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    Science.gov (United States)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from slip models in the third step using the same procedure as in the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China, earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth, and the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.

  8. Review of criteria for the selection of probability distributions for wind speed data and introduction of the moment and L-moment ratio diagram methods, with a case study

    International Nuclear Information System (INIS)

    Ouarda, T.B.M.J.; Charron, C.; Chebana, F.

    2016-01-01

    Highlights: • Review of criteria used to select probability distributions to model wind speed data. • Classical and L-moment ratio diagrams are applied to wind speed data. • The diagrams allow to select the best distribution to model each wind speed sample. • The goodness-of-fit statistics are more consistent with the L-moment ratio diagram. - Abstract: This paper reviews the different criteria used in the field of wind energy to compare the goodness-of-fit of candidate probability density functions (pdfs) to wind speed records, and discusses their advantages and disadvantages. The moment ratio and L-moment ratio diagram methods are also proposed as alternative methods for the choice of the pdfs. These two methods have the advantage of allowing an easy comparison of the fit of several pdfs for several time series (stations) on a single diagram. Plotting the position of a given wind speed data set in these diagrams is instantaneous and provides more information than a goodness-of-fit criterion since it provides knowledge about such characteristics as the skewness and kurtosis of the station data set. In this paper, it is proposed to study the applicability of these two methods for the selection of pdfs for wind speed data. Both types of diagrams are used to assess the fit of the pdfs for wind speed series in the United Arab Emirates. The analysis of the moment ratio diagrams reveals that the Kappa, Log-Pearson type III and Generalized Gamma are the distributions that fit best all wind speed series. The Weibull represents the best distribution among those with only one shape parameter. Results obtained with the diagrams are compared with those obtained with goodness-of-fit statistics and a good agreement is observed especially in the case of the L-moment ratio diagram. It is concluded that these diagrams can represent a simple and efficient approach to be used as complementary method to goodness-of-fit criteria.
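
    For illustration, the sketch below computes sample L-moment ratios (L-CV, L-skewness, L-kurtosis) from a wind speed series using the standard probability-weighted-moment estimators; the resulting (tau3, tau4) point is what would be placed on an L-moment ratio diagram and compared with the theoretical curves of candidate distributions. The Weibull sample is synthetic, not UAE station data.

```python
import numpy as np

def l_moment_ratios(x):
    """Sample L-CV, L-skewness (tau3) and L-kurtosis (tau4) via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l2 / l1, l3 / l2, l4 / l2

# Example with synthetic 'wind speed' data (Weibull-distributed, shape k = 2)
rng = np.random.default_rng(4)
speeds = 8.0 * rng.weibull(2.0, size=5000)
print("L-CV, tau3, tau4:", np.round(l_moment_ratios(speeds), 3))
```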

  9. A new fluorescence-based method identifies protein phosphatases regulating lipid droplet metabolism.

    Directory of Open Access Journals (Sweden)

    Bruno L Bozaquel-Morais

    Full Text Available In virtually every cell, neutral lipids are stored in cytoplasmic structures called lipid droplets (LDs), also referred to as lipid bodies or lipid particles. We developed a rapid high-throughput assay based on the recovery of quenched BODIPY fluorescence that allows lipid droplets to be quantified. The method was validated by monitoring lipid droplet turnover during growth of a yeast culture and by screening a group of strains deleted in genes known to be involved in lipid metabolism. In both tests, the fluorimetric assay showed high sensitivity and good agreement with previously reported data using microscopy. We used this method for high-throughput identification of protein phosphatases involved in lipid droplet metabolism. From 65 yeast knockout strains encoding protein phosphatases and their regulatory subunits, 13 strains were found to have abnormal levels of lipid droplets, 10 of them having high lipid droplet content. Strains deleted for type I protein phosphatases and related regulators (ppz2, gac1, bni4), type 2A phosphatase and its related regulators (pph21 and sap185), type 2C protein phosphatases (ptc1, ptc4, ptc7) and dual phosphatases (pps1, msg5) were catalogued as high-lipid droplet content strains. Only reg1, a targeting subunit of the type 1 phosphatase Glc7p, and members of the nutrient-sensitive TOR pathway (sit4 and the regulatory subunit sap190) were catalogued as low-lipid droplet content strains, which were studied further. We show that Snf1, the homologue of the mammalian AMP-activated kinase, is constitutively phosphorylated (hyperactive) in sit4 and sap190 strains, leading to a reduction of acetyl-CoA carboxylase activity. In conclusion, our fast and highly sensitive method permitted us to catalogue protein phosphatases involved in the regulation of LD metabolism and present evidence indicating that the TOR pathway and the SNF1/AMPK pathway are connected through the Sit4p-Sap190p pair in the control of lipid droplet biogenesis.

  10. Collective probabilities algorithm for surface hopping calculations

    International Nuclear Information System (INIS)

    Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto

    2003-01-01

    General equations are derived that the transition probabilities of the hopping algorithms in surface hopping calculations must obey in order to ensure equality between the average quantum and classical populations. These equations are solved for two particular cases. In the first, it is assumed that the probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations over all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is then a unique solution of the general equations assuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm which takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoided crossings, which shows the accuracy and computational efficiency of the proposed collective probabilities algorithm, the limitations of the FS algorithm and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method.

  11. A Tale of Two Probabilities

    Science.gov (United States)

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  12. Selective Regulation of Oocyte Meiotic Events Enhances Progress in Fertility Preservation Methods

    Directory of Open Access Journals (Sweden)

    Onder Celik

    2015-01-01

    persistence of the GV oocyte, which reduces the number of good quality eggs. Selective regulation of somatic cell signals and oocyte meiotic events enhances progress in fertility preservation methods, which may give us the opportunity to prevent follicle loss in prematurely aging women and in young women with cancer undergoing chemoradiotherapy.

  13. Release monitoring and environmental surveillance of CEA centers. Assessment, regulations and methods, 1999

    International Nuclear Information System (INIS)

    2000-01-01

    The quality of the natural environment around the centers of the Commissariat a l'Energie Atomique (CEA) is an important point of its safety policy. Environmental protection is based on the control of risks arising from the research and development activities of its installations and aims to reduce, as far as possible, the impact of these activities on man and his environment. This publication describes the sampling and measurement methods applied to effluents and to the environment, according to the characteristics of the radionuclides present. It also gives the regulations that apply to effluent monitoring. The results of radioactive effluent releases (liquid and gaseous) and of the environmental surveillance around CEA centers are given in the 'Bilan 1999' publication. An analysis of these results over the 1995-1999 period makes it possible to follow their evolution. (N.C.)

  14. Introduction to probability with R

    CERN Document Server

    Baclawski, Kenneth

    2008-01-01

    FOREWORD PREFACE Sets, Events, and Probability The Algebra of Sets The Bernoulli Sample Space The Algebra of Multisets The Concept of Probability Properties of Probability Measures Independent Events The Bernoulli Process The R Language Finite Processes The Basic Models Counting Rules Computing Factorials The Second Rule of Counting Computing Probabilities Discrete Random Variables The Bernoulli Process: Tossing a Coin The Bernoulli Process: Random Walk Independence and Joint Distributions Expectations The Inclusion-Exclusion Principle General Random Variable

  15. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
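
    A brief sketch of the quadrature idea, assuming the usual integral representation w(z) = (i/pi) ∫ exp(-t²)/(z - t) dt for Im z > 0; scipy's wofz is used only as a reference value. Accuracy degrades as z approaches the real axis, which is one of the shortcomings the report discusses.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import wofz   # reference Faddeeva implementation

def w_gauss_hermite(z, n=40):
    """Approximate w(z) = (i/pi) * integral exp(-t^2)/(z - t) dt  (Im z > 0)
    with an n-point Gauss-Hermite rule: nodes are the Hermite roots."""
    nodes, weights = hermgauss(n)
    return 1j / np.pi * np.sum(weights / (z - nodes))

z = 1.5 + 0.8j
approx = w_gauss_hermite(z)
exact = wofz(z)
print("Gauss-Hermite :", approx)
print("scipy wofz    :", exact)
print("abs error     :", abs(approx - exact))
```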

  16. Green, Brown, and probability

    CERN Document Server

    Chung, Kai Lai

    1995-01-01

    This volume shows modern probabilistic methods in action: Brownian Motion Process as applied to the electrical phenomena investigated by Green et al., beginning with the Newton-Coulomb potential and ending with solutions by first and last exits of Brownian paths from conductors.

  17. Encounter Probability of Individual Wave Height

    DEFF Research Database (Denmark)

    Liu, Z.; Burcharth, H. F.

    1998-01-01

    wave height corresponding to a certain exceedence probability within a structure lifetime (encounter probability), based on the statistical analysis of the long-term extreme significant wave height. The design individual wave height is then calculated as the expected maximum individual wave height associated with the design significant wave height, under the assumption that the individual wave heights follow the Rayleigh distribution. However, the exceedence probability of such a design individual wave height within the structure lifetime is unknown. The paper presents a method for the determination of the design individual wave height corresponding to an exceedence probability within the structure lifetime, given the long-term extreme significant wave height. The method can also be applied for estimation of the number of relatively large waves for fatigue analysis of constructions.
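
    Under the Rayleigh assumption mentioned above, the expected maximum of N individual waves and its exceedance probability can be written down directly; the sketch below uses these textbook relations (with invented sea-state numbers, not the paper's data) and also shows the lifetime encounter probability of a return-period event. Note that the expected maximum is itself exceeded with probability close to 1 - 1/e, which is the issue the paper addresses.

```python
import numpy as np

def rayleigh_exceedance(h, hs):
    """P(individual wave height > h) given significant wave height hs (Rayleigh)."""
    return np.exp(-2.0 * (h / hs) ** 2)

def expected_max_wave(hs, n_waves):
    """First-order expected maximum of n_waves Rayleigh-distributed wave heights."""
    return hs * np.sqrt(0.5 * np.log(n_waves))

def exceedance_in_storm(h, hs, n_waves):
    """P(at least one of n_waves exceeds h)."""
    return 1.0 - (1.0 - rayleigh_exceedance(h, hs)) ** n_waves

def encounter_probability(return_period_years, lifetime_years):
    """P(the T-year significant wave height is exceeded at least once in the lifetime)."""
    return 1.0 - (1.0 - 1.0 / return_period_years) ** lifetime_years

hs, n_waves = 8.0, 2000            # hypothetical design sea state
h_design = expected_max_wave(hs, n_waves)
print("expected max individual wave   :", round(h_design, 2), "m")
print("P(exceeded within the storm)   :", round(exceedance_in_storm(h_design, hs, n_waves), 3))
print("50-yr event in a 30-yr lifetime:", round(encounter_probability(50, 30), 3))
```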

  18. Probability and statistics with integrated software routines

    CERN Document Server

    Deep, Ronald

    2005-01-01

    Probability & Statistics with Integrated Software Routines is a calculus-based treatment of probability concurrent with and integrated with statistics through interactive, tailored software applications designed to enhance the phenomena of probability and statistics. The software programs make the book unique. The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc, all wrapped up in the scientific method. See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698) * Incorporates more than 1,000 engaging problems with answers * Includes more than 300 solved examples * Uses varied problem solving methods

  19. A first course in probability

    CERN Document Server

    Ross, Sheldon

    2014-01-01

    A First Course in Probability, Ninth Edition, features clear and intuitive explanations of the mathematics of probability theory, outstanding problem sets, and a variety of diverse examples and applications. This book is ideal for an upper-level undergraduate or graduate level introduction to probability for math, science, engineering and business students. It assumes a background in elementary calculus.

  20. Orthogonal Algorithm of Logic Probability and Syndrome-Testable Analysis

    Institute of Scientific and Technical Information of China (English)

    1990-01-01

    A new method, the orthogonal algorithm, is presented to compute logic probabilities (i.e. signal probabilities) accurately. The transfer properties of logic probabilities are studied first, which are useful for the calculation of the logic probability of a circuit with random independent inputs. Then the orthogonal algorithm is described to compute the logic probability of a Boolean function realized by a combinational circuit. This algorithm makes the Boolean function "orthogonal" so that the logic probabilities can be easily calculated by summing up the logic probabilities of all orthogonal terms of the Boolean function.
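
    A small sketch of the underlying identity rather than the orthogonal algorithm itself: once a Boolean function is written as a sum of pairwise disjoint (orthogonal) product terms, its signal probability is just the sum of the term probabilities. The function and input probabilities below are made up, and a brute-force enumeration confirms the result.

```python
from itertools import product

# f is written as a sum of ORTHOGONAL (pairwise disjoint) product terms, so its
# signal probability is simply the sum of the term probabilities.
# A term maps variable index -> required value (1 for x_i, 0 for NOT x_i).
TERMS = [{0: 1, 1: 1},            # x0 x1
         {0: 0, 2: 1},            # x0' x2
         {0: 1, 1: 0, 3: 1}]      # x0 x1' x3
N_VARS = 4
P = [0.5, 0.3, 0.8, 0.6]          # independent input signal probabilities (made up)

def term_probability(term, p):
    prob = 1.0
    for var, val in term.items():
        prob *= p[var] if val == 1 else 1.0 - p[var]
    return prob

orthogonal_sum = sum(term_probability(t, P) for t in TERMS)

# Brute-force check by enumerating all input combinations (minterms where f = 1).
def f(bits):
    return any(all(bits[v] == val for v, val in t.items()) for t in TERMS)

brute = sum(
    term_probability({i: b for i, b in enumerate(bits)}, P)
    for bits in product([0, 1], repeat=N_VARS) if f(bits)
)
print(orthogonal_sum, brute)      # both give the same signal probability
```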

  1. Application of Galerkin's method to the solution of the one-dimensional integral transport equation: generalized collision probabilities taking into account the flux gradient and the linearly anisotropic scattering

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-04-01

    For the one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation; this is a singular-kernel FREDHOLM equation of the second kind. When applying a conventional projective method, that of GALERKIN, to the solution of this equation, the well-known collision probability algorithm is obtained. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is supposed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this allows the new matrix elements (issuing from the anisotropic kernel) to be reduced to the classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries, an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered; in plane geometry the reflection is supposed specular, and for the other geometries the isotropic reflection hypothesis has been adopted. Further, the ANILINE code makes it possible to deal with an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by BENOIST. Finally, ANILINE was incorporated into the multigroup APOLLO code, which enabled the MINERVA experimental reactor to be analysed in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media [fr]

  2. Escape and transmission probabilities in cylindrical geometry

    International Nuclear Information System (INIS)

    Bjerke, M.A.

    1980-01-01

    An improved technique for the generation of escape and transmission probabilities in cylindrical geometry was applied to the existing resonance cross section processing code ROLAIDS. The algorithm of Hwang and Toppel, [ANL-FRA-TM-118] (with modifications) was employed. The probabilities generated were found to be as accurate as those given by the method previously applied in ROLAIDS, while requiring much less computer core storage and CPU time

  3. Assessment of manual material handling using Iranian MMH regulations and comparison with NIOSH equation and MAC method in one of the metal casting industries in Tehran, 2011

    Directory of Open Access Journals (Sweden)

    2013-08-01

    Result: Results showed an acceptable agreement between Iranian regulation and MAC method as well as Iranian regulation and NIOSH equation. However, no acceptable agreement was observed between MAC method and NIOSH equation.

  4. A brief introduction to probability.

    Science.gov (United States)

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in 1600, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", that is, a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.

  5. On the Possibility of Assigning Probabilities to Singular Cases, or: Probability Is Subjective Too!

    Directory of Open Access Journals (Sweden)

    Mark R. Crovelli

    2009-06-01

    Full Text Available Both Ludwig von Mises and Richard von Mises claimed that numerical probability could not be legitimately applied to singular cases. This paper challenges this aspect of the von Mises brothers’ theory of probability. It is argued that their denial that numerical probability could be applied to singular cases was based solely upon Richard von Mises’ exceptionally restrictive definition of probability. This paper challenges Richard von Mises’ definition of probability by arguing that the definition of probability necessarily depends upon whether the world is governed by time-invariant causal laws. It is argued that if the world is governed by time-invariant causal laws, a subjective definition of probability must be adopted. It is further argued that both the nature of human action and the relative frequency method for calculating numerical probabilities presuppose that the world is indeed governed by time-invariant causal laws. It is finally argued that the subjective definition of probability undercuts the von Mises claim that numerical probability cannot legitimately be applied to singular, non-replicable cases.

  6. Quantum probability for probabilists

    CERN Document Server

    Meyer, Paul-André

    1993-01-01

    In recent years, the classical theory of stochastic integration and stochastic differential equations has been extended to a non-commutative set-up to develop models for quantum noises. The author, a specialist of classical stochastic calculus and martingale theory, tries to provide an introduction to this rapidly expanding field in a way which should be accessible to probabilists familiar with the Ito integral. It can also, on the other hand, provide a means of access to the methods of stochastic calculus for physicists familiar with Fock space analysis.

  7. Inverse probability weighting and doubly robust methods in correcting the effects of non-response in the reimbursed medication and self-reported turnout estimates in the ATH survey.

    Science.gov (United States)

    Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo

    2014-11-06

    The aim was to assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing data analysis was based on socio-demographic register data covering the whole sample. Inverse probability weighting (IPW) and doubly robust (DR) estimators were computed using logistic regression models, which were selected using the Bayesian information criterion. The crude, weighted and true self-reported turnout in the 2008 municipal election and the prevalences of entitlements to specially reimbursed medication, as well as the crude and weighted body mass index (BMI) means, were compared. The IPW method appeared to remove a relatively large proportion of the bias compared to the crude prevalence estimates of the turnout and the entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and the model selection provides insight into the structure of missing data. However, health-related missing data mechanisms are beyond the scope of statistical methods, which mainly rely on socio-demographic information to correct the results.
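
    A minimal sketch of inverse probability weighting for unit nonresponse (not the ATH analysis itself): the response propensity is modelled from register-style covariates with logistic regression and respondents are weighted by its inverse. All data, covariates and coefficients below are simulated assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Synthetic sample: register covariates known for everyone, outcome observed
# only for respondents (all numbers invented).
n = 10000
age = rng.uniform(18, 90, n)
female = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([age, female]))

# True outcome (e.g., voted in the election) depends on age.
voted = rng.random(n) < 1 / (1 + np.exp(-(-1.2 + 0.04 * age)))
# Response propensity also depends on age -> the naive respondent mean is biased.
responded = rng.random(n) < 1 / (1 + np.exp(-(-1.0 + 0.03 * age + 0.2 * female)))

# Step 1: model P(response | register covariates) with logistic regression.
prop_model = sm.Logit(responded.astype(float), X).fit(disp=0)
p_response = prop_model.predict(X)

# Step 2: weight each respondent by the inverse of its response probability.
w = 1.0 / p_response[responded]
ipw_estimate = np.average(voted[responded], weights=w)

print("true turnout      :", voted.mean())
print("naive respondents :", voted[responded].mean())
print("IPW-corrected     :", ipw_estimate)
```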

  8. roposal on the Method of Regulating Ascending Kidney Water and Descending Heart Fire -through pharmacopuncture technique-

    Directory of Open Access Journals (Sweden)

    Ki Rok, Kwon

    2007-12-01

    Full Text Available Objectives: The purpose of this study is to diagnose and suggest treatment plans for the commonly seen clinical manifestation of heat symptoms in the upper body and coldness in the lower body, also known as hot above, cold below syndrome. Methods: Various causes contribute to the presence of hot above, cold below syndrome, but it is mainly caused by blockage of the normal Qi flow through abnormality of the heart-kidney root, the spleen-stomach axis, and the liver-lung axis. Diagnosing these abnormalities and timely restoration to the healthy state are presented in the study. Results: 1. For heat in the upper body, Huang Lian Jie Du Tang (黃連解毒湯), CF, or JsD pharmacopunctures are injected at GB21 and GB20. Qi stagnation in the thoracic area is treated with BUM injection at CV17. For impairment of transportation and transformation in the middle energizer, BUM pharmacopuncture is injected at CV12. Coldness in the lower energizer was relieved by bee venom or Sweet BV (bee venom free from enzymes) at CV6. Conclusion: The proposed methods of regulating water-fire were effective in treating hot above, cold below syndrome in clinical manifestations. However, once the symptoms subside, treatment focused on eliminating the innate cause should be rendered to achieve more successful results.

  9. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With optical-network-based distributed computing systems extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied according to the respective application failure probabilities. In an optical grid, when an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability and improve the network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thereby be achieved in an optical grid.
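
    A toy sketch of the task-based idea under an independence assumption (not the MDSA algorithm): if every task of the application must succeed, the application failure probability follows from the per-task failure probabilities, and replicating the most fragile tasks shows how a backup strategy lowers it. The task graph and all numbers are invented.

```python
from math import prod

# Hypothetical task set of an optical-grid application: every task must finish,
# so the application fails if any task fails on all of its assigned resources.
# Per-execution failure probability of each task (computing + network), made up.
TASK_FAIL = {"ingest": 0.02, "fft": 0.05, "filter": 0.03, "merge": 0.01, "store": 0.02}

def application_failure_probability(replicas):
    """replicas[task] = number of independent executions (backups) of that task."""
    p_success = prod(1.0 - TASK_FAIL[t] ** replicas.get(t, 1) for t in TASK_FAIL)
    return 1.0 - p_success

no_backup = {t: 1 for t in TASK_FAIL}
backup_hot = {"fft": 2, "filter": 2, "ingest": 1, "merge": 1, "store": 1}

print("no backup     :", round(application_failure_probability(no_backup), 4))
print("2x hot backup :", round(application_failure_probability(backup_hot), 4))
```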

  10. Application of accelerated evaluation method of alteration temperature and constant dose rate irradiation on bipolar linear regulator LM317

    International Nuclear Information System (INIS)

    Deng Wei; Wu Xue; Wang Xin; Zhang Jinxin; Zhang Xiaofu; Zheng Qiwen; Ma Wuying; Lu Wu; Guo Qi; He Chengfa

    2014-01-01

    With different irradiation methods, including high dose rate irradiation, low dose rate irradiation, alteration temperature and constant dose rate irradiation, and the US military standard constant high temperature and constant dose rate irradiation, the ionizing radiation responses of the bipolar linear regulator LM317 from three different manufacturers were investigated under operating and zero bias. The results show that, compared with the constant high temperature and constant dose rate irradiation method, the alteration temperature and constant dose rate irradiation method can not only evaluate the dose rate effect of the three bipolar linear regulators rapidly and accurately, but also simulate well the damage of low dose rate irradiation. The experimental results show that the alteration temperature and constant dose rate irradiation method can be successfully applied to bipolar linear regulators. (authors)

  11. Structural correlation method for model reduction and practical estimation of patient specific parameters illustrated on heart rate regulation

    DEFF Research Database (Denmark)

    Ottesen, Johnny T.; Mehlsen, Jesper; Olufsen, Mette

    2014-01-01

    We consider the inverse and patient-specific problem of short term (seconds to minutes) heart rate regulation specified by a system of nonlinear ODEs and corresponding data. We show how a recent method termed the structural correlation method (SCM) can be used for model reduction and for obtaining a set of practically identifiable parameters. The structural correlation method includes two steps: sensitivity and correlation analysis. When combined with an optimization step, it is possible to estimate model parameters, enabling the model to fit dynamics observed in data. This method is illustrated in detail on a model predicting baroreflex regulation of heart rate and applied to analysis of data from a rat and healthy humans. Numerous mathematical models have been proposed for prediction of baroreflex regulation of heart rate, yet most of these have been designed to provide qualitative predictions...
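
    A minimal sketch of the two SCM ingredients named above, sensitivity and correlation analysis, on an invented surrogate model (not the baroreflex model of the paper): scaled sensitivities are computed by finite differences, and pairwise correlations close to one flag parameters that are not practically identifiable together.

```python
import numpy as np

# Toy heart-rate-style response h(t; theta) with four parameters; a and b enter
# only through their sum, so they should be flagged as a correlated pair.
t = np.linspace(0.0, 60.0, 200)

def model(theta):
    a, b, c, tau = theta
    return (a + b) * np.exp(-t / tau) + c * np.sqrt(t + 1.0)

theta0 = np.array([5.0, 3.0, 2.0, 10.0])

def sensitivity_matrix(theta, rel_step=1e-4):
    """Finite-difference scaled sensitivities theta_i * dh/dtheta_i, one column per parameter."""
    base = model(theta)
    cols = []
    for i in range(theta.size):
        pert = theta.copy()
        pert[i] *= 1.0 + rel_step
        cols.append((model(pert) - base) / rel_step)
    return np.column_stack(cols)

S = sensitivity_matrix(theta0)
# Correlation of sensitivity columns; |r| near 1 flags a non-identifiable pair.
corr = np.corrcoef(S, rowvar=False)
print(np.round(corr, 3))     # the columns for a and b are perfectly correlated here
```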

  12. Propensity, Probability, and Quantum Theory

    Science.gov (United States)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  13. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  14. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data, this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work is motivated by a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions.

  15. Prediction and probability in sciences

    International Nuclear Information System (INIS)

    Klein, E.; Sacquin, Y.

    1998-01-01

    This book reports on the 7 presentations made at the third meeting 'Physics and Fundamental Questions', whose theme was probability and prediction. The concept of probability, which was invented to apprehend random phenomena, has become an important branch of mathematics, and its range of application spreads from radioactivity to species evolution via cosmology and the management of very weak risks. The notion of probability is the basis of quantum mechanics and is thus bound to the very nature of matter. The 7 topics are: - radioactivity and probability, - statistical and quantum fluctuations, - quantum mechanics as a generalized probability theory, - probability and the irrational efficiency of mathematics, - can we foresee the future of the universe?, - chance, eventuality and necessity in biology, - how to manage weak risks? (A.C.)

  16. Poisson Processes in Free Probability

    OpenAIRE

    An, Guimei; Gao, Mingchu

    2015-01-01

    We prove a multidimensional Poisson limit theorem in free probability, and define joint free Poisson distributions in a non-commutative probability space. We define (compound) free Poisson processes explicitly, similar to the definitions of (compound) Poisson processes in classical probability. We prove that the sum of finitely many freely independent compound free Poisson processes is a compound free Poisson process. We give a step-by-step procedure for constructing a (compound) free Poisso...

  17. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    Science.gov (United States)

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  18. Optimal Operation of Distribution Electronic Power Transformer Using Linear Quadratic Regulator Method

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Rezaei

    2011-10-01

    Full Text Available Transformers perform many functions such as voltage transformation, isolation and noise decoupling. They are indispensable components in the electric power distribution system. However, at low frequencies (50 Hz) they are among the heaviest and most expensive pieces of equipment in an electrical distribution system. Nowadays, electronic power transformers are used instead of conventional power transformers; they perform voltage transformation and power delivery in the power system by means of power electronic converters. In this paper, the structure of the distribution electronic power transformer (DEPT) is analyzed, and attention is then paid to the design of a linear-quadratic regulator (LQR) with integral action to improve the dynamic performance of the DEPT under voltage unbalance, voltage sags, voltage harmonics and voltage flicker. The presented control strategy is simulated in MATLAB/SIMULINK. In addition, the results, given in terms of the dc-link reference voltage and the input and output voltages, clearly show that a better dynamic performance can be achieved by using the LQR method when compared to other techniques.
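
    The abstract does not give the DEPT state-space model, so the sketch below only illustrates how an LQR gain with integral action can be computed in Python for a placeholder second-order plant; the matrices and weights are hypothetical, not taken from the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Placeholder plant (NOT the DEPT model from the paper): x_dot = A x + B u, y = C x
      A = np.array([[0.0, 1.0],
                    [-4.0, -0.8]])
      B = np.array([[0.0],
                    [1.0]])
      C = np.array([[1.0, 0.0]])

      # Augment with the integral of the tracking error to obtain integral action.
      Aa = np.block([[A, np.zeros((2, 1))],
                     [-C, np.zeros((1, 1))]])
      Ba = np.vstack([B, np.zeros((1, 1))])

      Q = np.diag([10.0, 1.0, 100.0])   # weights on the states and the integral error
      R = np.array([[1.0]])             # weight on the control effort

      # LQR gain K = R^-1 B^T P, with P solving the continuous algebraic Riccati equation.
      P = solve_continuous_are(Aa, Ba, Q, R)
      K = np.linalg.solve(R, Ba.T @ P)
      print("LQR state-feedback gain (with integral state):", K)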

  19. THE REGULATION OF METHODICAL IMPLEMENTATION BY EQUITY ACCOUNTING ON ENTERPRISES WITH FOREIGN INVESTMENT

    Directory of Open Access Journals (Sweden)

    Iurii Iakymov

    2016-11-01

    Full Text Available Purpose: to specify the accounting methodology for transactions with equity and, on that basis, to identify ways of improving it effectively in enterprises with foreign investments. Methodology: In the context of this scientific research, the economic substance and the methodical support of equity accounting in enterprises with foreign investment were examined using comparison methods: research, synthesis, a systems approach, mathematical methods, formalization, induction, deduction and other methods. The scientific article is compiled on the basis of research into the main provisions of the legal regulation of these processes, an analysis of the literature of the scientists and experts who investigate this area, and other official sources from the Internet. Results. This article is devoted to the economic essence and peculiarities of equity accounting in the context of accounts, the methodology for the formation of equity, recognition procedures, and equity in enterprises with foreign investment. The methodical approach to equity accounting in enterprises with foreign investment was also analyzed by the author. As a result of the research and a detailed examination of transactions with equity in enterprises with foreign investments, the following results and recommendations were formed: - specification of the accounting method for transactions with equity, based on scientific research into their economic nature and the characteristics of the accounts, capital formation techniques, and the procedures for recognition and measurement of equity, drawing on a comparative characterization of international experience; - to show the mapping technique for transactions with equity in the accounts, the procedure of object accounting in the 1C and SAP programs is considered, based on a comparison of their benchmarks; - a model of comparative accounting automation through the use of the 1C and SAP accounting software is proposed, confirming the need for a gradual transition to

  20. Dynamic probability evaluation of safety levels of earth-rockfill dams using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Zi-wu Fan

    2009-06-01

    Full Text Available In order to accurately predict and control the aging process of dams, new information should be collected continuously to renew the quantitative evaluation of dam safety levels. Owing to the complex structural characteristics of dams, it is quite difficult to predict the time-varying factors affecting their safety levels. It is not feasible to employ dynamic reliability indices to evaluate the actual safety levels of dams. Based on the relevant regulations for dam safety classification in China, a dynamic probability description of dam safety levels was developed. Using the Bayesian approach and effective information mining, as well as real-time information, this study achieved more rational evaluation and prediction of dam safety levels. With the Bayesian expression of discrete stochastic variables, the a priori probabilities of the dam safety levels determined by experts were combined with the likelihood probability of the real-time check information, and the probability information for the evaluation of dam safety levels was renewed. The probability index was then applied to dam rehabilitation decision-making. This method helps reduce the difficulty and uncertainty of the evaluation of dam safety levels and complies with the current safe decision-making regulations for dams in China. It also enhances the application of current risk analysis methods for dam safety levels.
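
    The renewal step described above amounts to a discrete Bayes update; the following minimal Python sketch uses purely illustrative prior and likelihood values (the three safety levels and all numbers are hypothetical, not taken from the paper).

      import numpy as np

      def renew_safety_probabilities(prior, likelihood):
          """Discrete Bayes update: posterior is proportional to prior times likelihood."""
          posterior = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
          return posterior / posterior.sum()

      # Hypothetical numbers: expert prior over three safety levels and the
      # likelihood of the latest real-time check information under each level.
      prior = [0.70, 0.25, 0.05]
      likelihood = [0.20, 0.50, 0.30]
      posterior = renew_safety_probabilities(prior, likelihood)
      print("renewed safety-level probabilities:", posterior.round(3))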

  1. Evaluation of probability and hazard in nuclear energy

    International Nuclear Information System (INIS)

    Novikov, V.Ya.; Romanov, N.L.

    1979-01-01

    Various methods for evaluating accident probability at NPPs are proposed, because statistical evaluation of NPP safety is unreliable. The concept of subjective probability for the quantitative analysis of safety and hazard is described. Interpretation of probability as the real belief of an expert is assumed as the basis of the concept. It is suggested that event uncertainty be studied in the framework of subjective probability theory, which not only permits but demands that expert opinions be taken into account when evaluating the probability. These subjective expert evaluations affect to a certain extent the calculation of the usual mathematical event probability. The above technique is advantageous for the consideration of a separate experiment or random event

  2. Probability inequalities for decomposition integrals

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2017-01-01

    Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords : Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf

  3. Expected utility with lower probabilities

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1994-01-01

    An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...

  4. Safety evaluations required in the safety regulations for Monju and the validity confirmation of safety evaluation methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The purposes of this study are to perform the safety evaluations of the fast breeder reactor 'Monju' and to confirm the validity of the safety evaluation methods. In JFY 2012, the following results were obtained. As for the development of the safety evaluation methods needed in the safety examination for the reactor establishment permission, development of the analysis codes, such as a core damage analysis code, was carried out according to the plan. As for the development of the safety evaluation method needed for risk-informed safety regulation, the quantification technique for the event tree using the Continuous Markov chain Monte Carlo method (CMMC method) was studied. (author)
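
    The abstract names the CMMC technique without further detail; as a generic illustration of event-tree quantification, the sketch below estimates the probability of a damage end state by ordinary Monte Carlo sampling over a toy two-level tree (the branch probabilities and the tree itself are invented, and this is not the CMMC method).

      import numpy as np

      rng = np.random.default_rng(1)

      # Invented conditional failure probabilities for three safety functions.
      p_fail = {"shutdown": 1e-3, "decay_heat_removal": 5e-3, "emergency_cooling": 2e-2}

      def sample_sequence(rng):
          """Walk the toy event tree once; return True for a damage end state."""
          if rng.random() < p_fail["shutdown"]:
              return True                                        # failure to shut down
          if rng.random() < p_fail["decay_heat_removal"]:
              return rng.random() < p_fail["emergency_cooling"]  # both heat-removal paths lost
          return False

      n = 1_000_000
      damage = sum(sample_sequence(rng) for _ in range(n))
      print("estimated damage probability per initiating event:", damage / n)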

  5. Invariant probabilities of transition functions

    CERN Document Server

    Zaharopol, Radu

    2014-01-01

    The structure of the set of all the invariant probabilities and the structure of various types of individual invariant probabilities of a transition function are two topics of significant interest in the theory of transition functions, and are studied in this book. The results obtained are useful in ergodic theory and the theory of dynamical systems, which, in turn, can be applied in various other areas (like number theory). They are illustrated using transition functions defined by flows, semiflows, and one-parameter convolution semigroups of probability measures. In this book, all results on transition probabilities that have been published by the author between 2004 and 2008 are extended to transition functions. The proofs of the results obtained are new. For transition functions that satisfy very general conditions the book describes an ergodic decomposition that provides relevant information on the structure of the corresponding set of invariant probabilities. Ergodic decomposition means a splitting of t...

  6. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete ProbabilityThe Cast of Characters Properties of Probability Simulation Random SamplingConditional ProbabilityIndependenceDiscrete DistributionsDiscrete Random Variables, Distributions, and ExpectationsBernoulli and Binomial Random VariablesGeometric and Negative Binomial Random Variables Poisson DistributionJoint, Marginal, and Conditional Distributions More on ExpectationContinuous ProbabilityFrom the Finite to the (Very) Infinite Continuous Random Variables and DistributionsContinuous ExpectationContinuous DistributionsThe Normal Distribution Bivariate Normal DistributionNew Random Variables from OldOrder Statistics Gamma DistributionsChi-Square, Student's t, and F-DistributionsTransformations of Normal Random VariablesAsymptotic TheoryStrong and Weak Laws of Large Numbers Central Limit TheoremStochastic Processes and ApplicationsMarkov ChainsPoisson Processes QueuesBrownian MotionFinancial MathematicsAppendixIntroduction to Mathematica Glossary of Mathematica Commands for Probability Short Answers...

  7. Linear positivity and virtual probability

    International Nuclear Information System (INIS)

    Hartle, James B.

    2004-01-01

    We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics

  8. Limitation of duty hour regulations for pediatric resident wellness: A mixed methods study in Japan.

    Science.gov (United States)

    Nomura, Osamu; Mishina, Hiroki; Kobayashi, Yoshinori; Ishiguro, Akira; Sakai, Hirokazu; Kato, Hiroyuki

    2016-09-01

    Duty hour regulations have been placed in residency programs to address mental health concerns and to improve wellness. Here, we elucidate the prevalence of depressive symptoms after implementing an overnight call shift system and the factors associated with burnout or depression among residents. A sequential exploratory mixed methods study was conducted in a tertiary care pediatric and perinatal hospital in Tokyo, Japan. A total of 41 pediatric residents participated in the cross-sectional survey. We determined and compared the prevalence of depressive symptoms and the number of actual working hours before and after implementing the shift system. A follow-up focus-group interview with 4 residents was conducted to explore the factors that may trigger or prevent depression and burnout. Mean working hours significantly decreased from 75.2 hours to 64.9 hours per week. Prevalence of depressive symptoms remained similar before and after implementation of the shift system. Emotional exhaustion and depersonalization from the burnout scale were markedly associated with depression. High workload, stress intolerance, interpersonal difficulties, and generation gaps regarding work-life balance could cause burnout. Stress tolerance, workload monitoring and balancing, appropriate supervision, and peer support could prevent burnout. Although the overnight call shift system was effective in reducing working hours, its effectiveness in managing mental health issues among pediatric residents remains unclear. Resident wellness programs represent an additional strategy and they should be aimed at fostering peer support and improvement of resident-faculty interactions. Such an approach could be beneficial to the relationship between physicians of different generations with conflicting belief structures.

  9. microRNA-365, down-regulated in colon cancer, inhibits cell cycle progression and promotes apoptosis of colon cancer cells by probably targeting Cyclin D1 and Bcl-2.

    Science.gov (United States)

    Nie, Jing; Liu, Lin; Zheng, Wei; Chen, Lin; Wu, Xin; Xu, Yingxin; Du, Xiaohui; Han, Weidong

    2012-01-01

    Deregulated microRNAs participate in carcinogenesis and cancer progression, but their roles in cancer development remain unclear. In this study, miR-365 expression was found to be downregulated in human colon cancer tissues as compared with that in matched non-neoplastic mucosa tissues, and its downregulation was correlated with cancer progression and poor survival in colon cancer patients. Functional studies revealed that restoration of miR-365 expression inhibited cell cycle progression, promoted 5-fluorouracil-induced apoptosis and repressed tumorigenicity in colon cancer cell lines. Furthermore, bioinformatic prediction and experimental validation were used to identify miR-365 target genes and indicated that the antitumor effects of miR-365 were probably mediated by its targeting and repression of Cyclin D1 and Bcl-2 expression, thus inhibiting cell cycle progression and promoting apoptosis. These results suggest that downregulation of miR-365 in colon cancer may have potential applications in prognosis prediction and gene therapy in colon cancer patients.

  10. Probable Inference and Quantum Mechanics

    International Nuclear Information System (INIS)

    Grandy, W. T. Jr.

    2009-01-01

    In its current very successful interpretation the quantum theory is fundamentally statistical in nature. Although commonly viewed as a probability amplitude whose (complex) square is a probability, the wavefunction or state vector continues to defy consensus as to its exact meaning, primarily because it is not a physical observable. Rather than approach this problem directly, it is suggested that it is first necessary to clarify the precise role of probability theory in quantum mechanics, either as applied to, or as an intrinsic part of the quantum theory. When all is said and done the unsurprising conclusion is that quantum mechanics does not constitute a logic and probability unto itself, but adheres to the long-established rules of classical probability theory while providing a means within itself for calculating the relevant probabilities. In addition, the wavefunction is seen to be a description of the quantum state assigned by an observer based on definite information, such that the same state must be assigned by any other observer based on the same information, in much the same way that probabilities are assigned.

  11. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output

  12. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
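
    A minimal Monte Carlo sketch of the article's central point, assuming a lognormal risk factor whose control threshold is set from a small sample so that the estimated exceedance probability equals a 1% target; the sample size and parameters are illustrative, and the exact analytic results of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(2)
      mu, sigma = 0.0, 1.0               # true parameters of the risk factor on the log scale
      n_data, n_trials = 30, 20_000
      z99 = 2.3263478740408408           # standard normal 99% quantile

      failures = 0
      for _ in range(n_trials):
          # Estimate parameters from a small sample and set the threshold so that
          # the *estimated* exceedance probability equals the 1% target.
          logs = rng.normal(mu, sigma, n_data)
          threshold = logs.mean() + z99 * logs.std(ddof=1)
          # The next realization comes from the *true* distribution.
          failures += rng.normal(mu, sigma) > threshold

      print("nominal failure probability: 0.01")
      print("realized failure frequency :", failures / n_trials)

    With these settings the realized frequency comes out above the 1% target, illustrating the effect the article quantifies exactly.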

  13. Method of power self-regulation of CFBR-II reactor based on DSP

    International Nuclear Information System (INIS)

    Bai Zhongxiong; Zhou Wenxiang

    2007-01-01

    For the power self-regulation control system of the CFBR-II reactor, a new digital control scheme based on a DSP has been put forward. The TMS320F2812 DSP chip is adopted as the core controller to realize power self-regulation of the CFBR-II reactor. In this paper, the implementation of the DSP control system is introduced in detail in terms of both hardware and software. (authors)

  14. Solution of the neutron transport equation by the collision probability method for 3D geometries; Resolution de l'equation du transport par les neutrons par la methode des probabilites de collision dans les geometries 3D

    Energy Technology Data Exchange (ETDEWEB)

    Oujidi, B

    1996-09-19

    The TDT code solves the multigroup transport equation by the interface-current method for unstructured 2D geometries. This work presents the extension of TDT to the treatment of 3D geometries obtained by axial displacement of unstructured 2D geometries. Three-dimensional trajectories are obtained by lifting the 2D trajectories. The code allows for the definition of macro-domains in the axial direction to be used in the interface-current method. Specular and isotropic reflection or translation boundary conditions can be applied to the horizontal boundaries of the domain. Numerical studies have shown the need for longer trajectory cutoffs for trajectories intersecting horizontal boundaries. Numerical applications to the calculation of local power peaks are given in a second part for: the local destruction of a Pyrex absorber, and inter-assembly (UO2-MOX) power distortion due to pellet collapsing at the top of the core. Calculations with 16 groups were performed by coupling TDT to the spectral code APOLLO2. One-group comparisons with the Monte Carlo code TRIMARAN2 are also given. (author) 30 refs.

  15. Probability with applications and R

    CERN Document Server

    Dobrow, Robert P

    2013-01-01

    An introduction to probability at the undergraduate level Chance and randomness are encountered on a daily basis. Authored by a highly qualified professor in the field, Probability: With Applications and R delves into the theories and applications essential to obtaining a thorough understanding of probability. With real-life examples and thoughtful exercises from fields as diverse as biology, computer science, cryptology, ecology, public health, and sports, the book is accessible for a variety of readers. The book's emphasis on simulation through the use of the popular R software language c

  16. A philosophical essay on probabilities

    CERN Document Server

    Laplace, Marquis de

    1996-01-01

    A classic of science, this famous essay by "the Newton of France" introduces lay readers to the concepts and uses of probability theory. It is of especial interest today as an application of mathematical techniques to problems in social and biological sciences. Generally recognized as the founder of the modern phase of probability theory, Laplace here applies the principles and general results of his theory "to the most important questions of life, which are, in effect, for the most part, problems in probability." Thus, without the use of higher mathematics, he demonstrates the application

  17. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    Full Text Available A generalized modified method is proposed to control the sum of the error probabilities in the sequential probability ratio test so as to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of the error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to the cases in which the random variate follows a normal law as well as a Bernoulli law.
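
    For context, the sketch below implements only the classical Wald SPRT for a Bernoulli parameter with nominal error probabilities alpha and beta (boundaries A = (1 - beta)/alpha and B = beta/(1 - alpha)); the modified procedure of the article, which constrains the sum of the error probabilities, is not reproduced, and the data stream is simulated.

      import math, random

      def wald_sprt_bernoulli(stream, p0, p1, alpha=0.05, beta=0.05):
          """Classical Wald SPRT for H0: p = p0 versus H1: p = p1 on a 0/1 stream."""
          log_a = math.log((1 - beta) / alpha)      # accept H1 above this bound
          log_b = math.log(beta / (1 - alpha))      # accept H0 below this bound
          llr, n = 0.0, 0
          for x in stream:
              n += 1
              llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
              if llr >= log_a:
                  return "accept H1", n
              if llr <= log_b:
                  return "accept H0", n
          return "undecided", n

      random.seed(3)
      data = (random.random() < 0.6 for _ in range(10_000))   # simulated stream, true p = 0.6
      print(wald_sprt_bernoulli(data, p0=0.5, p1=0.6))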

  18. Logic, probability, and human reasoning.

    Science.gov (United States)

    Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P

    2015-04-01

    This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  20. Introduction to probability and measure

    CERN Document Server

    Parthasarathy, K R

    2005-01-01

    According to a remark attributed to Mark Kac 'Probability Theory is a measure theory with a soul'. This book with its choice of proofs, remarks, examples and exercises has been prepared taking both these aesthetic and practical aspects into account.

  1. Joint probabilities and quantum cognition

    International Nuclear Information System (INIS)

    Acacio de Barros, J.

    2012-01-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  2. Joint probabilities and quantum cognition

    Energy Technology Data Exchange (ETDEWEB)

    Acacio de Barros, J. [Liberal Studies, 1600 Holloway Ave., San Francisco State University, San Francisco, CA 94132 (United States)

    2012-12-18

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  3. Default probabilities and default correlations

    OpenAIRE

    Erlenmaier, Ulrich; Gersbach, Hans

    2001-01-01

    Starting from the Merton framework for firm defaults, we provide the analytics and robustness of the relationship between default probabilities and default correlations. We show that loans with higher default probabilities will not only have higher variances but also higher correlations between loans. As a consequence, portfolio standard deviation can increase substantially when loan default probabilities rise. This result has two important implications. First, relative prices of loans with different default probabili...

  4. The Probabilities of Unique Events

    Science.gov (United States)

    2012-08-30

    Max Lotstein and Phil Johnson-Laird, Department of Psychology, Princeton University, Princeton, NJ, USA, August 30th 2012. ...social justice and also participated in antinuclear demonstrations. The participants ranked the probability that Linda is a feminist bank teller as... retorted that such a flagrant violation of the probability calculus was a result of a psychological experiment that obscured the rationality of the

  5. Probability Matching, Fast and Slow

    OpenAIRE

    Koehler, Derek J.; James, Greta

    2014-01-01

    A prominent point of contention among researchers regarding the interpretation of probability-matching behavior is whether it represents a cognitively sophisticated, adaptive response to the inherent uncertainty of the tasks or settings in which it is observed, or whether instead it represents a fundamental shortcoming in the heuristics that support and guide human decision making. Put crudely, researchers disagree on whether probability matching is "smart" or "dumb." Here, we consider eviden...

  6. Applying novel technologies and methods to inform the ontology of self-regulation.

    Science.gov (United States)

    Eisenberg, Ian W; Bissett, Patrick G; Canning, Jessica R; Dallery, Jesse; Enkavi, A Zeynep; Whitfield-Gabrieli, Susan; Gonzalez, Oscar; Green, Alan I; Greene, Mary Ann; Kiernan, Michaela; Kim, Sunny Jung; Li, Jamie; Lowe, Michael R; Mazza, Gina L; Metcalf, Stephen A; Onken, Lisa; Parikh, Sadev S; Peters, Ellen; Prochaska, Judith J; Scherer, Emily A; Stoeckel, Luke E; Valente, Matthew J; Wu, Jialing; Xie, Haiyi; MacKinnon, David P; Marsch, Lisa A; Poldrack, Russell A

    2018-02-01

    Self-regulation is a broad construct representing the general ability to recruit cognitive, motivational and emotional resources to achieve long-term goals. This construct has been implicated in a host of health-risk behaviors, and is a promising target for fostering beneficial behavior change. Despite its clear importance, the behavioral, psychological and neural components of self-regulation remain poorly understood, which contributes to theoretical inconsistencies and hinders maximally effective intervention development. We outline a research program that seeks to define a neuropsychological ontology of self-regulation, articulating the cognitive components that compose self-regulation, their relationships, and their associated measurements. The ontology will be informed by two large-scale approaches to assessing individual differences: first purely behaviorally using data collected via Amazon's Mechanical Turk, then coupled with neuroimaging data collected from a separate population. To validate the ontology and demonstrate its utility, we will then use it to contextualize health risk behaviors in two exemplar behavioral groups: overweight/obese adults who binge eat and smokers. After identifying ontological targets that precipitate maladaptive behavior, we will craft interventions that engage these targets. If successful, this work will provide a structured, holistic account of self-regulation in the form of an explicit ontology, which will better clarify the pattern of deficits related to maladaptive health behavior, and provide direction for more effective behavior change interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Pointwise probability reinforcements for robust statistical inference.

    Science.gov (United States)

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Uncertainty the soul of modeling, probability & statistics

    CERN Document Server

    Briggs, William

    2016-01-01

    This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...

  9. Python for probability, statistics, and machine learning

    CERN Document Server

    Unpingco, José

    2016-01-01

    This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...

  10. Fixation probability on clique-based graphs

    Science.gov (United States)

    Choi, Jeong-Ok; Yu, Unjong

    2018-02-01

    The fixation probability of a mutant in the evolutionary dynamics of the Moran process is calculated by the Monte Carlo method on a few families of clique-based graphs. It is shown that the complete suppression of fixation can be realized with the generalized clique-wheel graph in the limit of small wheel-clique ratio and infinite size. The family of clique-star graphs is an amplifier, and the clique-arms graph changes from amplifier to suppressor as the fitness of the mutant increases. We demonstrate that the overall structure of a graph can be more important in determining the fixation probability than the degree or the heat heterogeneity. The dependence of the fixation probability on the position of the first mutant is discussed.
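
    As a baseline for the graph results above, the following sketch estimates the fixation probability of a single mutant by Monte Carlo for the well-mixed (complete-graph) Moran process, where the exact answer (1 - 1/r)/(1 - r^(-N)) is known; the clique-based graph families of the paper are not implemented here, and N, r and the number of trials are arbitrary.

      import random

      def moran_fixation_mc(N, r, trials, seed=4):
          """Monte Carlo fixation probability of one mutant of fitness r in a
          well-mixed Moran process with N individuals."""
          rng = random.Random(seed)
          fixed = 0
          for _ in range(trials):
              i = 1                                     # current number of mutants
              while 0 < i < N:
                  birth_is_mutant = rng.random() < i * r / (i * r + (N - i))
                  death_is_mutant = rng.random() < i / N
                  i += int(birth_is_mutant) - int(death_is_mutant)
              fixed += (i == N)
          return fixed / trials

      N, r = 10, 1.5
      exact = (1 - 1 / r) / (1 - r ** (-N))             # classical Moran result
      print("Monte Carlo estimate:", moran_fixation_mc(N, r, trials=20_000))
      print("exact value         :", round(exact, 4))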

  11. Prospect evaluation as a function of numeracy and probability denominator.

    Science.gov (United States)

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Establishment probability in newly founded populations

    Directory of Open Access Journals (Sweden)

    Gusset Markus

    2012-06-01

    Full Text Available Abstract Background Establishment success in newly founded populations relies on reaching the established phase, which is defined by characteristic fluctuations of the population’s state variables. Stochastic population models can be used to quantify the establishment probability of newly founded populations; however, so far no simple but robust method for doing so existed. To determine a critical initial number of individuals that need to be released to reach the established phase, we used a novel application of the “Wissel plot”, where –ln(1 – P0(t)) is plotted against time t. This plot is based on the equation P0(t) = 1 – c1·e^(–ω1·t), which relates the probability of extinction by time t, P0(t), to two constants: c1 describes the probability of a newly founded population to reach the established phase, whereas ω1 describes the population’s probability of extinction per short time interval once established. Results For illustration, we applied the method to a previously developed stochastic population model of the endangered African wild dog (Lycaon pictus). A newly founded population reaches the established phase if the intercept of the (extrapolated) linear parts of the “Wissel plot” with the y-axis, which is –ln(c1), is negative. For wild dogs in our model, this is the case if a critical initial number of four packs, consisting of eight individuals each, are released. Conclusions The method we present to quantify the establishment probability of newly founded populations is generic and inferences thus are transferable to other systems across the field of conservation biology. In contrast to other methods, our approach disaggregates the components of a population’s viability by distinguishing establishment from persistence.
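
    A minimal sketch of the "Wissel plot" idea with simulated data (the failure-to-establish probability and the extinction rate are made up, and the wild dog model itself is not reproduced): extinction times are drawn so that P0(t) = 1 – c1·e^(–ω1·t) holds exactly, and a linear fit of –ln(1 – P0(t)) against t recovers ω1 as the slope and –ln(c1) as the intercept.

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical replicate populations: with probability 1 - c1 the population
      # fails to establish (extinct essentially at once); once established it goes
      # extinct at rate omega1, which gives P0(t) = 1 - c1 * exp(-omega1 * t).
      c1_true, omega1_true, n_runs = 0.6, 0.02, 20_000
      established = rng.random(n_runs) < c1_true
      t_ext = np.where(established, rng.exponential(1.0 / omega1_true, n_runs), 0.0)

      # Empirical Wissel plot: -ln(1 - P0(t)) against t, followed by a linear fit.
      t_grid = np.linspace(1.0, 60.0, 30)
      p0 = np.array([(t_ext <= t).mean() for t in t_grid])
      slope, intercept = np.polyfit(t_grid, -np.log(1.0 - p0), 1)

      print("estimated omega1            :", round(slope, 4))
      print("estimated -ln(c1)           :", round(intercept, 3))
      print("establishment probability c1:", round(np.exp(-intercept), 3))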

  13. Probably not future prediction using probability and statistical inference

    CERN Document Server

    Dworsky, Lawrence N

    2008-01-01

    An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...

  14. Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods

    International Nuclear Information System (INIS)

    Vaentaenen, Ari; Marttunen, Mika

    2005-01-01

    Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment-altering projects. The EU Water Framework Directive (WFD) entered into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition to that, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects

  15. Involvement of a chromatin modifier in response to mono-(2-ethylhexyl) phthalate (MEHP)-induced Sertoli cell injury: Probably an indirect action via the regulation of NFκB/FasL circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shiwei [Department of Urology, 174th Hospital of PLA, Fujian 361001 (China); Dong, Yushu [Department of Neurosurgery, 463rd Hospital of PLA, Shenyang 110042 (China); Xu, Chun; Jiang, Liming; Chen, Yongjie; Jiang, Cheng [Department of Urology, 174th Hospital of PLA, Fujian 361001 (China); Hou, Wugang, E-mail: gangwuhou@163.com [Department of Anesthesiology, Xijing Hospital, Fourth Military Medical University, Xi’an 710032 (China); Li, Wei, E-mail: liweipepeyato@163.com [Department of Human Anatomy, Histology and Embryology, Fourth Military Medical University, Xi’an 710032 (China)

    2013-11-01

    Highlights: •MTA1 expression is upregulated in SCs upon MEHP treatment. •Knockdown of MTA1 in SCs impairs the MEHP-induced NFκB signaling activation. •Knockdown of MTA1 inhibits recruitment of NFκB onto FasL promoter in MEHP-treated SCs. -- Abstract: The Fas/FasL signaling pathway, controlled by nuclear factor-κB (NFκB) at the transcriptional level, is critical for triggering germ cell apoptosis in response to mono-(2-ethylhexyl) phthalate (MEHP)-induced Sertoli cell (SC) injury, but the exact regulation mechanism remains unknown. Here, we discovered that the expression level of Metastasis associated protein 1 (MTA1), a component of the Mi-2/nucleosome remodeling and deacetylase complex, was upregulated in SCs during the early recovery after MEHP exposure. This expression change was in line with the dynamic changes in germ cell apoptosis in response to MEHP treatment. Furthermore, a knockdown of MTA1 by RNAi in SCs was found to impair the MEHP-induced early activation of the NFκB pathway and abolish the recruitment of NFκB onto the FasL promoter, which consequently diminished the MEHP-triggered FasL induction. Considering that Fas/FasL is a well-characterized apoptosis-initiating signal during SC injury, our results point to a potential “switch on” effect of MTA1, which may govern the activation of the NFκB/FasL cascade in MEHP-insulted SCs. Overall, the MTA1/NFκB/FasL circuit may serve as an important defensive/repairing mechanism to help control germ cell quality after SC injury.

  16. Involvement of a chromatin modifier in response to mono-(2-ethylhexyl) phthalate (MEHP)-induced Sertoli cell injury: Probably an indirect action via the regulation of NFκB/FasL circuitry

    International Nuclear Information System (INIS)

    Chen, Shiwei; Dong, Yushu; Xu, Chun; Jiang, Liming; Chen, Yongjie; Jiang, Cheng; Hou, Wugang; Li, Wei

    2013-01-01

    Highlights: •MTA1 expression is upregulated in SCs upon MEHP treatment. •Knockdown of MTA1 in SCs impairs the MEHP-induced NFκB signaling activation. •Knockdown of MTA1 inhibits recruitment of NFκB onto FasL promoter in MEHP-treated SCs. -- Abstract: The Fas/FasL signaling pathway, controlled by nuclear factor-κB (NFκB) at the transcriptional level, is critical for triggering germ cell apoptosis in response to mono-(2-ethylhexyl) phthalate (MEHP)-induced Sertoli cell (SC) injury, but the exact regulation mechanism remains unknown. Here, we discovered that the expression level of Metastasis associated protein 1 (MTA1), a component of the Mi-2/nucleosome remodeling and deacetylase complex, was upregulated in SCs during the early recovery after MEHP exposure. This expression change was in line with the dynamic changes in germ cell apoptosis in response to MEHP treatment. Furthermore, a knockdown of MTA1 by RNAi in SCs was found to impair the MEHP-induced early activation of the NFκB pathway and abolish the recruitment of NFκB onto the FasL promoter, which consequently diminished the MEHP-triggered FasL induction. Considering that Fas/FasL is a well-characterized apoptosis-initiating signal during SC injury, our results point to a potential “switch on” effect of MTA1, which may govern the activation of the NFκB/FasL cascade in MEHP-insulted SCs. Overall, the MTA1/NFκB/FasL circuit may serve as an important defensive/repairing mechanism to help control germ cell quality after SC injury

  17. Alternative Method of Solution of the Regulator Equation: L2 -Space Approach

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2012-01-01

    Roč. 14, č. 4 (2012), s. 1150-1154 ISSN 1561-8625 R&D Projects: GA MŠk(CZ) LG12008 Institutional support: RVO:67985556 Keywords : Output regulation problem * partial differential equations Subject RIV: BC - Control Systems Theory Impact factor: 1.411, year: 2012 http://library.utia.cas.cz/separaty/2012/TR/rehak-0381625.pdf

  18. Using Self-Regulated Learning Methods to Increase Native American College Retention

    Science.gov (United States)

    Patterson, David A.; Ahuna, Kelly H.; Tinnesz, Christine Gray; Vanzile-Tamsen, Carol

    2014-01-01

    A big challenge facing colleges and university programs across the United States is retaining students to graduation. This is especially the case for Native American students, who have had one of the highest dropout rates over the past several decades. Using data from a large university that implemented a self-regulated learning course for…

  19. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering; Densidade de probabilidade de aniquilacao e outras aplicacoes do metodo multicanal de Schwinger ao espalhamento de positrons e eletrons

    Energy Technology Data Exchange (ETDEWEB)

    Varella, Marcio Teixeira do Nascimento

    2001-12-15

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10^-2 eV) the APD spread over a considerable extension, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e+-H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Z_eff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e--H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds (author)

  20. What is the best practice for benchmark regulation of electricity distribution? Comparison of DEA, SFA and StoNED methods

    International Nuclear Information System (INIS)

    Kuosmanen, Timo; Saastamoinen, Antti; Sipiläinen, Timo

    2013-01-01

    Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine performance of the methods by Monte Carlo simulations. We calibrate the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with the sample size 100. Precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%. The poor performance of SFA is due to the wrong functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • Choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation
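
    For readers unfamiliar with the benchmarking estimators being compared, the sketch below computes input-oriented CCR DEA efficiency scores by linear programming on a five-utility toy data set; the data, inputs and outputs are invented, and neither the SFA nor the StoNED estimator used by the Finnish regulator is implemented here.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input_efficiency(X, Y, o):
          """Input-oriented CCR efficiency of unit o.

          X: (m, n) inputs and Y: (s, n) outputs for n units.  Solves
          min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0.
          """
          m, n = X.shape
          s = Y.shape[0]
          c = np.zeros(1 + n)
          c[0] = 1.0                                    # minimise theta
          A_in = np.hstack([-X[:, [o]], X])             # X lam - theta * x_o <= 0
          A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y lam <= -y_o
          res = linprog(c,
                        A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.hstack([np.zeros(m), -Y[:, o]]),
                        bounds=[(None, None)] + [(0, None)] * n,
                        method="highs")
          return res.x[0]

      # Invented data: operating cost and network length as inputs, energy delivered as output.
      X = np.array([[100.0, 80.0, 120.0, 90.0, 150.0],      # cost
                    [200.0, 150.0, 260.0, 160.0, 320.0]])    # km of network
      Y = np.array([[500.0, 400.0, 560.0, 470.0, 640.0]])    # GWh delivered

      for o in range(X.shape[1]):
          print(f"unit {o}: CCR efficiency = {dea_ccr_input_efficiency(X, Y, o):.3f}")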

  1. Probability theory a foundational course

    CERN Document Server

    Pakshirajan, R P

    2013-01-01

    This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the ground work to later claim the existence of stochastic processes with prescribed finite dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, Erdos-Kac invariance principle, functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a text book for students pursuing graduate programs in Mathematics and or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

  2. VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Smirnov Vladimir Alexandrovich

    2012-10-01

    Full Text Available The article deals with the probability analysis for a vibration isolation system of high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. The external sources of low-frequency vibrations may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Taking the Gaussian distribution into account, the author estimates the probability that the relative displacement of the isolated mass stays below the vibration criteria. This problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. According to this probability distribution, the chance of exceeding the vibration criteria for a vibration isolation system is evaluated. Optimal system parameters - damping and natural frequency - are then derived so that the probability of exceeding the vibration criteria VC-E and VC-D is kept below 0.04.
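
    The core calculation described above reduces, for a zero-mean Gaussian response, to a simple error-function evaluation; the sketch below uses hypothetical RMS and criterion values rather than the paper's actual transfer-function model.

      from math import erf, sqrt

      def prob_within_criterion(sigma, criterion):
          """P(|displacement| < criterion) for a zero-mean Gaussian response
          with standard deviation sigma (same units as the criterion)."""
          return erf(criterion / (sigma * sqrt(2.0)))

      # Hypothetical numbers: 0.05 um RMS relative displacement against a 0.125 um criterion.
      sigma, criterion = 0.05, 0.125
      p_ok = prob_within_criterion(sigma, criterion)
      print("P(within criterion) :", round(p_ok, 4))
      print("P(exceeding)        :", round(1 - p_ok, 4))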

  3. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising

  4. Probability, Statistics, and Stochastic Processes

    CERN Document Server

    Olofsson, Peter

    2011-01-01

    A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d

  5. Probability, statistics, and queueing theory

    CERN Document Server

    Allen, Arnold O

    1990-01-01

    This is a textbook on applied probability and statistics with computer science applications for students at the upper undergraduate level. It may also be used as a self study book for the practicing computer science professional. The successful first edition of this book proved extremely useful to students who need to use probability, statistics and queueing theory to solve problems in other fields, such as engineering, physics, operations research, and management science. The book has also been successfully used for courses in queueing theory for operations research students. This second edit

  6. Probability and Statistics: 5 Questions

    DEFF Research Database (Denmark)

    Probability and Statistics: 5 Questions is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in probability and statistics. We hear their views on the fields, aims, scopes, the future direction of research and how their work fits in these respects. Interviews with Nick Bingham, Luc Bovens, Terrence L. Fine, Haim Gaifman, Donald Gillies, James Hawthorne, Carl Hoefer, James M. Joyce, Joseph B. Kadane, Isaac Levi, D.H. Mellor, Patrick Suppes, Jan von Plato, Carl Wagner, Sandy Zabell...

  7. Some simple applications of probability models to birth intervals

    International Nuclear Information System (INIS)

    Shrestha, G.

    1987-07-01

    An attempt has been made in this paper to apply some simple probability models to birth intervals under the assumption of constant fecundability and varying fecundability among women. The parameters of the probability models are estimated by using the method of moments and the method of maximum likelihood. (author). 9 refs, 2 tabs
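    As a hedged illustration of the simplest such model (not taken from the paper): if fecundability p is constant, the number of months to conception is geometric with mean 1/p, so the method-of-moments estimate from observed waiting times is p_hat = 1/mean, which for the geometric case coincides with the maximum-likelihood estimate.

```python
# Hedged sketch (not from the paper): method-of-moments estimate of a
# constant fecundability p from waiting times to conception, assuming a
# geometric waiting-time model with mean 1/p.
def estimate_fecundability(waiting_months):
    mean_wait = sum(waiting_months) / len(waiting_months)
    return 1.0 / mean_wait

if __name__ == "__main__":
    waits = [3, 5, 2, 8, 4, 6, 1, 7, 3, 4]     # invented waiting times, months
    print("p_hat = %.3f" % estimate_fecundability(waits))
```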

  8. A Regulation-Based Security Evaluation Method for Data Link in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Claudio S. Malavenda

    2014-01-01

    Full Text Available This article presents a novel approach to the analysis of wireless sensor network (WSN) security, based on the regulations intended for wireless communication devices. Starting from the analysis and classification of attacks, countermeasures, and available protocols, we present the current state of secure communication stacks for embedded systems. The regulation analysis is based on civil EN 50150 and MIL STD-188-220, both applicable to WSN communications. Afterwards, starting from a list of known WSN attacks, we use a correspondence table to match WSN attacks with the countermeasures required by regulations. This approach allows us to produce a precise security evaluation and classification methodology for WSN protocols. The results show that current protocols do not present a complete coverage of security issues. While this conclusion is already known for many WSN protocols, to the best of our knowledge this is the first time a complete methodology has been proposed to support this assertion. Moreover, by using the proposed methodology, we are able to precisely identify the exposed threats for each WSN protocol under analysis.
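    The correspondence-table step can be pictured with a small sketch like the one below, in which known attacks are mapped to the countermeasures a regulation requires and a protocol is scored by the fraction of attacks for which it implements at least one required countermeasure. The attack and countermeasure names are illustrative only and do not come from the article.

```python
# Hedged sketch of a correspondence-table coverage score for a protocol.
# Attack and countermeasure names are invented for illustration.
ATTACK_COUNTERMEASURES = {
    "jamming": {"frequency_hopping", "spread_spectrum"},
    "replay": {"nonce", "timestamping"},
    "sybil": {"node_authentication"},
    "eavesdropping": {"link_encryption"},
}

def coverage(protocol_features):
    """Fraction of listed attacks covered by at least one countermeasure."""
    covered = [a for a, cms in ATTACK_COUNTERMEASURES.items()
               if cms & protocol_features]
    return len(covered) / len(ATTACK_COUNTERMEASURES), covered

if __name__ == "__main__":
    features = {"link_encryption", "nonce"}        # hypothetical protocol
    score, covered = coverage(features)
    print("coverage = %.0f%%, attacks covered: %s" % (100 * score, covered))
```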

  9. Dynamic SEP event probability forecasts

    Science.gov (United States)

    Kahler, S. W.; Ling, A.

    2015-10-01

    The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
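    The general shape of such a dynamic forecast can be illustrated with a simple Bayesian update (a sketch, not the authors' algorithm): given an initial probability P0 that an SEP event will follow the flare and an empirical distribution of flare-to-onset delays, the conditional probability that an event is still coming decays as time passes with no onset. The delay sample below is invented; the study derives its delays from the NOAA event list.

```python
# Hedged sketch of a dynamic event-probability update: P(event | no onset
# by t) = P0 * (1 - F(t)) / (1 - P0 * F(t)), where F is the empirical CDF
# of flare-to-onset delay times.  The delays below are invented.
import bisect

def delay_cdf(delays_sorted, t):
    """Empirical CDF of onset delays evaluated at time t (hours)."""
    return bisect.bisect_right(delays_sorted, t) / len(delays_sorted)

def dynamic_probability(p0, delays_sorted, t):
    """Conditional probability that an event still occurs, given none yet."""
    f = delay_cdf(delays_sorted, t)
    return p0 * (1.0 - f) / (1.0 - p0 * f)

if __name__ == "__main__":
    delays = sorted([2.0, 3.5, 4.0, 6.0, 8.0, 10.0, 14.0, 20.0, 30.0, 48.0])
    p0 = 0.6                              # assumed initial forecast
    for t in (0, 6, 12, 24, 48):
        print(f"t = {t:3d} h  P_d = {dynamic_probability(p0, delays, t):.3f}")
```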

  10. Conditional Independence in Applied Probability.

    Science.gov (United States)

    Pfeiffer, Paul E.

    This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…

  11. Stretching Probability Explorations with Geoboards

    Science.gov (United States)

    Wheeler, Ann; Champion, Joe

    2016-01-01

    Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…

  12. GPS: Geometry, Probability, and Statistics

    Science.gov (United States)

    Field, Mike

    2012-01-01

    It might be said that for most occupations there is now less of a need for mathematics than there was say fifty years ago. But, the author argues, geometry, probability, and statistics constitute essential knowledge for everyone. Maybe not the geometry of Euclid, but certainly geometrical ways of thinking that might enable us to describe the world…

  13. Probability and statistics: A reminder

    International Nuclear Information System (INIS)

    Clement, B.

    2013-01-01

    The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It is built on the introduction from 'data analysis in experimental sciences' given in [1]. (authors)

  14. Nash equilibrium with lower probabilities

    DEFF Research Database (Denmark)

    Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1998-01-01

    We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible...

  15. On probability-possibility transformations

    Science.gov (United States)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.

  16. Introduction to tensorial resistivity probability tomography

    OpenAIRE

    Mauriello, Paolo; Patella, Domenico

    2005-01-01

    The probability tomography approach developed for the scalar resistivity method is here extended to the 2D tensorial apparent resistivity acquisition mode. The rotational invariant derived from the trace of the apparent resistivity tensor is considered, since it gives on the datum plane anomalies confined above the buried objects. Firstly, a departure function is introduced as the difference between the tensorial invariant measured over the real structure and that computed for a reference uni...

  17. Probability and statistics in particle physics

    International Nuclear Information System (INIS)

    Frodesen, A.G.; Skjeggestad, O.

    1979-01-01

    Probability theory is entered into at an elementary level and given a simple and detailed exposition. The material on statistics has been organised with an eye to the experimental physicist's practical need, which is likely to be statistical methods for estimation or decision-making. The book is intended for graduate students and research workers in experimental high energy and elementary particle physics, and numerous examples from these fields are presented. (JIW)

  18. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
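    A generic stand-in for this kind of validation (not the authors' pipeline) is a nested, or double, cross-validation of an L1-penalized logistic regression, sketched below on synthetic data with scikit-learn.

```python
# Hedged sketch: nested (double) cross-validation of an L1-penalized
# logistic regression, as a generic stand-in for LASSO-based NTCP model
# validation.  Data are synthetic; this is not the authors' pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# Inner loop: choose the penalty strength.
inner = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5, scoring="roc_auc")

# Outer loop: estimate performance of the whole model-selection procedure.
outer_auc = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
print("nested-CV AUC: %.3f +/- %.3f" % (outer_auc.mean(), outer_auc.std()))
```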

  19. The method of using current regulations and standards in designing management and technologies of construction

    Directory of Open Access Journals (Sweden)

    Sinenko Sergey

    2017-01-01

    Full Text Available Economic efficiency in the construction of buildings and structures begins with effective, proper design, which is based on modern, cutting-edge equipment and streamlined organizational and process solutions for construction operations. In this light, the article considers the application of the “Self-Organization and Technology of Building” multifunctional software package, which is capable of solving various engineering tasks in the design of construction operations management in accordance with the applicable rules and regulations, including the automatic generation of workflows. Implementing this software in construction management processes may help to solve management tasks at the construction site.

  20. Quantitative method to determine the regional drinking water odorant regulation goals based on odor sensitivity distribution: illustrated using 2-MIB.

    Science.gov (United States)

    Yu, Jianwei; An, Wei; Cao, Nan; Yang, Min; Gu, Junong; Zhang, Dong; Lu, Ning

    2014-07-01

    Taste and odor (T/O) in drinking water often cause consumer complaints and are thus regulated in many countries. However, people in different regions may exhibit different sensitivities toward T/O. This study proposed a method to determine the regional drinking water odorant regulation goals (ORGs) based on the odor sensitivity distribution of the local population. The distribution of odor sensitivity to 2-methylisoborneol (2-MIB) by the local population in Beijing, China was revealed by using a normal distribution function/model to describe the odor complaint response to a 2-MIB episode in 2005, and a 2-MIB concentration of 12.9 ng/L and FPA (flavor profile analysis) intensity of 2.5 was found to be the critical point to cause odor complaints. Thus the Beijing ORG for 2-MIB was determined to be 12.9 ng/L. Based on the assumption that the local FPA panel can represent the local population in terms of sensitivity to odor, and that the critical FPA intensity causing odor complaints was 2.5, this study tried to determine the ORGs for seven other cities of China by performing FPA tests using an FPA panel from the corresponding city. ORG values between 12.9 and 31.6 ng/L were determined, showing that a unified ORG may not be suitable for drinking water odor regulations. This study presents a novel approach for setting drinking water odor regulations. Copyright © 2014. Published by Elsevier B.V.
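    The underlying calculation can be sketched as follows, under the assumption that population odor-sensitivity thresholds for 2-MIB are normally distributed; the ORG is then the concentration at which the complaining fraction reaches a tolerated level. The mean, standard deviation, and tolerated fraction below are invented for illustration, not the fitted values from the study.

```python
# Hedged sketch: setting an odorant regulation goal (ORG) from an assumed
# normal distribution of odor-sensitivity thresholds in the population.
# The parameters below are invented; the cited study fits the distribution
# to complaint data from an actual 2-MIB episode.
import math

def complaint_fraction(conc_ng_l, mu, sigma):
    """Fraction of population whose threshold is below conc (normal CDF)."""
    z = (conc_ng_l - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def org(mu, sigma, tolerated_fraction=0.05):
    """Concentration at which the complaining fraction reaches the limit
    (inverse CDF found by bisection)."""
    lo, hi = 0.0, mu + 10 * sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if complaint_fraction(mid, mu, sigma) < tolerated_fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    mu, sigma = 25.0, 7.0      # assumed threshold distribution, ng/L
    print("ORG ~ %.1f ng/L" % org(mu, sigma))
```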

  1. The effects of emotion regulation on explicit memory depend on strategy and testing method.

    Science.gov (United States)

    Knight, Marisa; Ponzio, Allison

    2013-12-01

    Although previous work has shown that emotion regulation strategies can influence memory, the mechanisms through which different strategies produce different memory outcomes are not well understood. We examined how two cognitive reappraisal strategies with similar elaboration demands but diverging effects on visual attention and emotional arousal influenced explicit memory for emotional stimuli and for the strategies used to evaluate the stimuli. At encoding, participants used reappraisal to increase and decrease the personal relevance of neutral and emotional pictures. In two experiments, recall accuracy was highest for emotional pictures featured on increase trials, intermediate for emotional pictures featured on look (respond naturally) trials, and lowest for emotional pictures featured on decrease trials. This recall pattern emerged after a short delay (15 min) and persisted over a longer delay (48 hr). Memory accuracy for the strategies used to evaluate the pictures showed a different pattern: Strategy memory was better for emotional pictures featured on decrease and increase trials than for pictures featured on look trials. Our findings show that the effects of emotion regulation on memory depend both on the particular strategy engaged and the particular aspect of memory being tested.

  2. A discussion on the origin of quantum probabilities

    International Nuclear Information System (INIS)

    Holik, Federico; Sáenz, Manuel; Plastino, Angel

    2014-01-01

    We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases

  3. Probability matching and strategy availability.

    Science.gov (United States)

    Koehler, Derek J; James, Greta

    2010-09-01

    Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
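    A brief numerical aside (not part of the article) shows why maximizing is the superior strategy: with P(green) = 0.7, always predicting green is correct 70% of the time, while matching the base rate is correct only 0.7 × 0.7 + 0.3 × 0.3 = 58% of the time.

```python
# Hedged illustration: simulated accuracy of maximizing vs probability
# matching for a binary outcome with P(green) = 0.7.
import random

def accuracy(strategy, p=0.7, trials=100000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        outcome = rng.random() < p                 # True = "green"
        if strategy == "maximize":
            guess = True                           # always predict green
        else:                                      # "match" the base rate
            guess = rng.random() < p
        hits += (guess == outcome)
    return hits / trials

if __name__ == "__main__":
    print("maximizing:", round(accuracy("maximize"), 3))   # ~0.70
    print("matching:  ", round(accuracy("match"), 3))      # ~0.58
```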

  4. Probability as a Physical Motive

    Directory of Open Access Journals (Sweden)

    Peter Martin

    2007-04-01

    Full Text Available Recent theoretical progress in nonequilibrium thermodynamics, linking the physical principle of Maximum Entropy Production (“MEP”) to the information-theoretical “MaxEnt” principle of scientific inference, together with conjectures from theoretical physics that there may be no fundamental causal laws but only probabilities for physical processes, and from evolutionary theory that biological systems expand “the adjacent possible” as rapidly as possible, all lend credence to the proposition that probability should be recognized as a fundamental physical motive. It is further proposed that spatial order and temporal order are two aspects of the same thing, and that this is the essence of the second law of thermodynamics.

  5. Logic, Probability, and Human Reasoning

    Science.gov (United States)

    2015-01-01

    accordingly suggest a way to integrate probability and deduction. The nature of deductive reasoning: to be rational is to be able to make deductions... [3–6] and they underlie mathematics, science, and technology [7–10]. Plato claimed that emotions upset reasoning. However, individuals in the grip... fundamental to human rationality. So, if counterexamples to its principal predictions occur, the theory will at least explain its own refutation

  6. Probability Measures on Groups IX

    CERN Document Server

    1989-01-01

    The latest in this series of Oberwolfach conferences focussed on the interplay between structural probability theory and various other areas of pure and applied mathematics such as Tauberian theory, infinite-dimensional rotation groups, central limit theorems, harmonizable processes, and spherical data. Thus it was attended by mathematicians whose research interests range from number theory to quantum physics in conjunction with structural properties of probabilistic phenomena. This volume contains 5 survey articles submitted on special invitation and 25 original research papers.

  7. Probability matching and strategy availability

    OpenAIRE

    J. Koehler, Derek; Koehler, Derek J.; James, Greta

    2010-01-01

    Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought...

  8. Internal Medicine residents use heuristics to estimate disease probability

    OpenAIRE

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Background: Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method: We randomized 55 In...

  9. Current situation on regulations for mycotoxins. Overview of tolerances and status of standard methods of sampling and analysis.

    Science.gov (United States)

    Van Egmond, H P

    1989-01-01

    A worldwide enquiry was undertaken in 1986-1987 to obtain up-to-date information about mycotoxin legislation in as many countries of the world as possible. Together with some additional data collected in 1981, information is now available about planned, proposed, or existing legislation, or the absence of legislation, in 66 countries. Details are given about tolerances, legal bases, responsible authorities, prescribed methods of sampling and analysis, and the disposition of commodities containing inadmissible amounts of mycotoxins. The information concerns aflatoxins in foodstuffs, aflatoxin M1 in dairy products, aflatoxins in animal feedstuffs, and other mycotoxins in food- and feedstuffs. In comparison with the situation in 1981, limits and regulations for mycotoxins had been expanded by 1987, with more countries having legislation (proposed or passed) on the subject, and more products and more mycotoxins covered by this legislation. The differences between tolerances in the various countries are sometimes quite large, which makes harmonization of mycotoxin regulations highly desirable.

  10. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
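    The histogram step can be sketched as follows: sampling a sinusoidal carrier over at least half a cycle and binning the sample amplitudes yields the characteristic U-shaped (arcsine) empirical PDF. How such PDFs would encode digital symbols is not reproduced here; the sketch covers only the PDF construction.

```python
# Hedged sketch of the histogram idea described above: sample a sinusoidal
# carrier over one full cycle and build a normalized histogram (an
# empirical PDF) of the sample amplitudes.
import math

def waveform_pdf(amplitude=1.0, n_samples=1000, n_bins=20):
    samples = [amplitude * math.sin(2 * math.pi * k / n_samples)
               for k in range(n_samples)]          # one full cycle
    lo, hi = -amplitude, amplitude
    counts = [0] * n_bins
    for s in samples:
        idx = min(int((s - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[idx] += 1
    total = float(sum(counts))
    return [c / total for c in counts]

if __name__ == "__main__":
    pdf = waveform_pdf()
    print(" ".join("%.3f" % p for p in pdf))  # U-shaped, as expected for a sine
```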

  11. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It has also recently been used in biometric and multimedia information retrieval systems. This technology builds on successive research into audio feature extraction and analysis. The Probability Distribution Function (PDF) is a statistical method that is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of voice signals sampled from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
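    A crude version of the idea, on synthetic signals rather than speech and without the pre-processing used in the paper, is to compute per-frame amplitude histograms and compare them with a simple L1 distance, as sketched below.

```python
# Hedged sketch: per-frame amplitude histograms as crude audio features,
# compared with an L1 distance.  Signals are synthetic sinusoids; real
# speech and the paper's pre-processing are not reproduced here.
import math

def frame_pdf(frame, n_bins=16, lo=-1.0, hi=1.0):
    counts = [0] * n_bins
    for s in frame:
        idx = min(max(int((s - lo) / (hi - lo) * n_bins), 0), n_bins - 1)
        counts[idx] += 1
    total = float(len(frame))
    return [c / total for c in counts]

def features(signal, frame_len=256):
    return [frame_pdf(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def l1_distance(f1, f2):
    return sum(abs(a - b) for fa, fb in zip(f1, f2) for a, b in zip(fa, fb))

if __name__ == "__main__":
    # Two synthetic "voices" with different amplitudes and frequencies.
    sig_a = [0.8 * math.sin(2 * math.pi * 120 * t / 8000) for t in range(2048)]
    sig_b = [0.5 * math.sin(2 * math.pi * 200 * t / 8000) for t in range(2048)]
    print("A vs A:", round(l1_distance(features(sig_a), features(sig_a)), 3))
    print("A vs B:", round(l1_distance(features(sig_a), features(sig_b)), 3))
```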

  12. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.
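    The sample-reuse principle can be illustrated with the toy model below: the same Monte Carlo flaw-size samples are reused to evaluate how the probability of failure changes when the POD curve is perturbed in one region at a time. The cited work derives analytic first-order sensitivities; this finite-difference version, with an invented POD curve and flaw-size distribution, only conveys the idea.

```python
# Hedged sketch of region-wise POD sensitivities via Monte Carlo sample
# reuse.  A component is assumed to fail if its flaw exceeds a critical
# size AND was missed at inspection; the POD curve and flaw distribution
# are invented toy choices, not those of the cited work.
import math, random

def pod(a, a50=1.0, slope=4.0):
    """Log-logistic POD curve (toy parameters)."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(a) - math.log(a50))))

def pof(samples, a_crit, perturb_region=None, delta=0.0):
    """POF = mean of (miss probability) x (failure indicator) over samples."""
    total = 0.0
    for a in samples:
        p = pod(a)
        if perturb_region and perturb_region[0] <= a < perturb_region[1]:
            p = min(max(p + delta, 0.0), 1.0)
        total += (1.0 - p) * (1.0 if a > a_crit else 0.0)
    return total / len(samples)

if __name__ == "__main__":
    random.seed(1)
    samples = [random.lognormvariate(0.0, 0.5) for _ in range(100000)]
    a_crit = 1.2
    base = pof(samples, a_crit)
    regions = {"lower tail": (0.0, 0.7), "middle": (0.7, 1.5),
               "upper tail": (1.5, 1e9)}
    for name, reg in regions.items():
        d = (pof(samples, a_crit, reg, 0.01) - base) / 0.01
        print(f"{name:10s} dPOF/dPOD ~ {d:+.4f}  (baseline POF = {base:.4f})")
```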

  13. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.

  14. Approaches to Evaluating Probability of Collision Uncertainty

    Science.gov (United States)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to try to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.
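    A highly simplified sketch of the resampling idea is given below: the uncertain inputs (here only the combined hard-body radius and a covariance scale factor) are resampled, a two-dimensional Pc integral is evaluated on a grid for each draw, and the spread of the resulting Pc values is reported. All numbers are invented and the model is far cruder than the operational calculation.

```python
# Hedged sketch: build a distribution of Pc values by resampling uncertain
# inputs, instead of a single point estimate.  Parameters are invented.
import math, random

def pc_2d(miss_x, miss_y, sx, sy, radius, n=80):
    """Integrate a 2-D Gaussian centered on the miss vector over a
    hard-body circle centered at the encounter-plane origin."""
    total = 0.0
    step = 2.0 * radius / n
    for i in range(n):
        for j in range(n):
            x = -radius + (i + 0.5) * step
            y = -radius + (j + 0.5) * step
            if x * x + y * y <= radius * radius:
                pdf = (math.exp(-0.5 * (((x - miss_x) / sx) ** 2
                                        + ((y - miss_y) / sy) ** 2))
                       / (2.0 * math.pi * sx * sy))
                total += pdf * step * step
    return total

if __name__ == "__main__":
    random.seed(0)
    pcs = []
    for _ in range(200):
        radius = random.uniform(5.0, 15.0)       # uncertain combined size, m
        scale = random.lognormvariate(0.0, 0.3)  # covariance scale factor
        pcs.append(pc_2d(200.0, 50.0, 150.0 * scale, 80.0 * scale, radius))
    pcs.sort()
    print("median Pc = %.2e" % pcs[len(pcs) // 2])
    print("5th-95th  = %.2e .. %.2e" % (pcs[9], pcs[189]))
```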

  15. Probability and statistics for particle physics

    CERN Document Server

    Mana, Carlos

    2017-01-01

    This book comprehensively presents the basic concepts of probability and Bayesian inference with sufficient generality to make them applicable to current problems in scientific research. The first chapter provides the fundamentals of probability theory that are essential for the analysis of random phenomena. The second chapter includes a full and pragmatic review of the Bayesian methods that constitute a natural and coherent framework with enough freedom to analyze all the information available from experimental data in a conceptually simple manner. The third chapter presents the basic Monte Carlo techniques used in scientific research, allowing a large variety of problems that are difficult to tackle by other procedures to be handled. The author also introduces a basic algorithm, which enables readers to simulate samples from simple distributions, and describes useful cases for researchers in particle physics. The final chapter is devoted to the basic ideas of Information Theory, which are important in the Bayesian me...

  16. Control Method Based on Demand Response Needs of Isolated Bus Regulation with Series-Resonant Converters for Residential Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Shu-Huai Zhang

    2017-05-01

    Full Text Available Considering the benefits of isolation and high efficiency, a series-resonant DC-DC converter (LLC type, with two inductors and a capacitor) has been introduced into a residential photovoltaic (PV) generation and storage system in this work, and an upwarp drifting problem of the voltage gain curve was found. In this paper, the reason for the upwarp drifting of the voltage gain curve is given, and a new topology-changing control method is proposed to solve the voltage regulation problem under light-load conditions. Firstly, the ideal and actual first harmonic approximation (FHA) models are given, and the drifting problem is ascribed to the multiple peaks of higher-order resonance between the resonant tank and the parasitic capacitors. The paper then presents a pulse-frequency-modulation (PFM) driver-signal control method that converts the full-bridge LLC into a half-bridge LLC converter, so that the voltage gain can easily be reduced by half. Based on this method, overall voltage and resonant-current sharing control methods in on-line and off-line modes are proposed. The parameter design and optimization methods are also discussed in detail. Finally, a residential PV system platform based on the proposed parallel 7-kW full-bridge LLC converter is built to verify the proposed control method and theoretical analysis.

  17. A peaking-regulation-balance-based method for wind & PV power integrated accommodation

    Science.gov (United States)

    Zhang, Jinfang; Li, Nan; Liu, Jun

    2018-02-01

    The rapid development of China's new energy sector, now and in the future, should focus on the coordination of wind and PV power. Based on an analysis of the system peaking balance, combined with the statistical output characteristics of wind and PV power, a method for the comprehensive integrated accommodation analysis of wind and PV power is put forward. The wind power installed capacity is determined first from the electric power balance during the night peak-load period of a typical day; the PV power installed capacity can then be derived from the midday peak-load hours. This effectively resolves the uncertainty that arises when traditional methods must determine the combination of wind and solar power simultaneously. The simulation results validate the effectiveness of the proposed method.
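    The two-step sizing logic can be caricatured with the sketch below: wind capacity is bounded by the night-time balance (PV output is zero at night), and PV capacity is then sized against the midday peak. The linear balance and all numbers are invented for illustration and are not the method's actual formulation.

```python
# Hedged sketch of the two-step capacity sizing idea: wind from the night
# peak balance, PV from the midday peak.  All numbers and the simple
# linear balance are invented.
def wind_capacity(night_load, must_run_thermal, wind_output_ratio):
    """Wind capacity accommodated during the night peak period (MW)."""
    room = night_load - must_run_thermal     # space left for wind output
    return max(room, 0.0) / wind_output_ratio

def pv_capacity(midday_load, thermal_dispatch, wind_cap, wind_midday_ratio,
                pv_output_ratio):
    """PV capacity accommodated during the midday peak period (MW)."""
    room = midday_load - thermal_dispatch - wind_cap * wind_midday_ratio
    return max(room, 0.0) / pv_output_ratio

if __name__ == "__main__":
    w = wind_capacity(night_load=28000.0, must_run_thermal=22000.0,
                      wind_output_ratio=0.3)
    p = pv_capacity(midday_load=36000.0, thermal_dispatch=24000.0,
                    wind_cap=w, wind_midday_ratio=0.2, pv_output_ratio=0.6)
    print("wind capacity ~ %.0f MW, PV capacity ~ %.0f MW" % (w, p))
```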

  18. [Biometric bases: basic concepts of probability calculation].

    Science.gov (United States)

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The basics of event algebra, the definition of probability, the classical probability model, and the random variable are presented.

  19. Turbulent combustion modelization via a tabulation method of detailed kinetic chemistry coupled to Probability Density Function. Application to aeronautical engines; Modelisation de la combustion turbulente via une methode tabulation de la cinetique chimique detaillee couplee a des fonctions densites de probabilite. Application aux foyers aeronautiques

    Energy Technology Data Exchange (ETDEWEB)

    Rullaud, M

    2004-06-01

    A new model of turbulent combustion with detailed chemistry and probability density functions (PDFs) is proposed. The objective is to capture temperature and species concentrations, mainly CO. The PCM-FTC model (Presumed Conditional Moment - Flame Tabulated Chemistry) is based on the tabulation of laminar premixed and diffusion flames to capture the partial premixing present in aeronautical engines. Presumed PDFs are introduced to predict averaged values. The tabulation method is based on the analysis of the chemical structure of laminar premixed and diffusion flames. Hypotheses are presented, tested, and validated against Sandia experimental jet-flame data. The model is then introduced into turbulent-flow simulation software. Three configurations are retained to quantify the predictive level of this formulation: the Sandia D and F flames and the Stanford lifted methane/air jet flames. A good agreement is observed between experiments and simulations. The validity of this method is thereby demonstrated. (author)
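    The presumed-PDF averaging step that such models rely on can be sketched as follows: a tabulated laminar-flame quantity (here an invented CO mass-fraction profile as a function of mixture fraction) is averaged with a presumed beta PDF parameterized by the mean and variance of the mixture fraction. This is only the averaging step, not the PCM-FTC model itself.

```python
# Hedged sketch of presumed-PDF averaging: integrate an invented tabulated
# CO profile against a beta PDF of mixture fraction.  Not the PCM-FTC
# model itself, only the averaging step it relies on.
import math

def beta_pdf(z, mean, var):
    """Beta PDF of mixture fraction z with given mean and variance."""
    g = mean * (1.0 - mean) / var - 1.0           # shape scale factor
    a, b = mean * g, (1.0 - mean) * g
    lognorm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1.0) * math.log(z) + (b - 1.0) * math.log(1.0 - z)
                    - lognorm)

def tabulated_co(z):
    """Invented stand-in for a flamelet-table lookup of CO mass fraction."""
    return 0.08 * math.exp(-((z - 0.35) / 0.12) ** 2)

def mean_co(z_mean, z_var, n=400):
    """Average the tabulated value over the presumed PDF (midpoint rule)."""
    total, norm = 0.0, 0.0
    for k in range(1, n):
        z = k / n
        w = beta_pdf(z, z_mean, z_var) / n
        total += tabulated_co(z) * w
        norm += w
    return total / norm

if __name__ == "__main__":
    for var in (0.001, 0.01, 0.05):
        print("z_var = %.3f  <Y_CO> = %.4f" % (var, mean_co(0.35, var)))
```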

  20. Comparison of biosurfactant detection methods reveals hydrophobic surfactants and contact-regulated production

    Science.gov (United States)

    Biosurfactants are diverse molecules with numerous biological functions and industrial applications. A variety of environments were examined for biosurfactant-producing bacteria using a versatile new screening method. The utility of an atomized oil assay was assessed for a large number of bacteria...