WorldWideScience

Sample records for carlo likelihood analysis

  1. Monte Carlo likelihood inference for missing data models

    OpenAIRE

    Sung, Yun Ju; Geyer, Charles J.

    2007-01-01

    We describe a Monte Carlo method to approximate the maximum likelihood estimate (MLE), when there are missing data and the observed data likelihood is not available in closed form. This method uses simulated missing data that are independent and identically distributed and independent of the observed data. Our Monte Carlo approximation to the MLE is a consistent and asymptotically normal estimate of the minimizer θ* of the Kullback–Leibler information, as both Monte Carlo and observed data sa...
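
    The construction described above (replace an intractable observed-data likelihood by a Monte Carlo average over simulated missing data, then maximize over the parameter) can be sketched in a few lines. The toy model, importance density and sample sizes below are invented purely for illustration and are not taken from the paper.

    ```python
    # Illustrative sketch only; the toy model and all numbers are assumptions.
    # Toy model: z_i ~ N(mu, 1) is missing, we observe x_i = z_i + eps_i, eps_i ~ N(0, 1).
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(0)
    mu_true, n = 1.0, 200
    z = rng.normal(mu_true, 1.0, n)          # latent (missing) data
    x = z + rng.normal(0.0, 1.0, n)          # observed data

    # Simulated missing data: i.i.d. draws from an importance density h,
    # independent of the observed data (drawn once, reused for every mu).
    m = 5000
    z_sim = rng.normal(0.0, 3.0, m)
    h_pdf = stats.norm.pdf(z_sim, 0.0, 3.0)

    def mc_loglik(mu):
        """Monte Carlo approximation of the observed-data log-likelihood."""
        # complete-data density f_mu(x_i, z_j) = N(z_j; mu, 1) * N(x_i - z_j; 0, 1)
        fz = stats.norm.pdf(z_sim, mu, 1.0)
        fx_given_z = stats.norm.pdf(x[:, None] - z_sim[None, :], 0.0, 1.0)
        weights = fx_given_z * (fz / h_pdf)[None, :]
        return np.sum(np.log(weights.mean(axis=1)))

    res = optimize.minimize_scalar(lambda mu: -mc_loglik(mu), bounds=(-5, 5), method="bounded")
    print("Monte Carlo MLE of mu:", res.x)
    print("closed-form MLE for this toy model:", x.mean())  # exact MLE, since x ~ N(mu, 2)
    ```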

  2. Monte Carlo maximum likelihood estimation for discretely observed diffusion processes

    OpenAIRE

    Beskos, Alexandros; Papaspiliopoulos, Omiros; Roberts, Gareth

    2009-01-01

    This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and almost surely continuous estimators of the likelihood function for a family of diffusion models, and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte C...

  3. Likelihood Analysis for Mega Pixel Maps

    Science.gov (United States)

    Kogut, Alan J.

    1999-01-01

    The derivation of cosmological parameters from astrophysical data sets routinely involves operation counts which scale as O(N^3), where N is the number of data points. Currently planned missions, including MAP and Planck, will generate sky maps with N_d = 10^6 or more pixels. Simple "brute force" analysis, applied to such mega-pixel data, would require years of computing even on the fastest computers. We describe an algorithm which allows estimation of the likelihood function in the direct pixel basis. The algorithm uses a conjugate gradient approach to evaluate χ2 and a geometric approximation to evaluate the determinant. Monte Carlo simulations provide a correction to the determinant, yielding an unbiased estimate of the likelihood surface in an arbitrary region surrounding the likelihood peak. The algorithm requires O(N_d^(3/2)) operations and O(N_d) storage for each likelihood evaluation, and allows for significant parallel computation.
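
    One ingredient of such an algorithm, evaluating chi-squared = d^T C^(-1) d by conjugate gradients instead of an O(N^3) inversion, can be sketched as follows; the covariance operator and sizes are invented, and the determinant approximation and its Monte Carlo correction are not reproduced here.

    ```python
    # Illustrative sketch only; toy covariance and map size are assumptions.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(0)
    n_pix = 2000                                   # toy map size
    modes = rng.normal(size=(n_pix, 20))           # low-rank stand-in for correlated signal

    def apply_cov(v):
        # C v = white-noise part + low-rank correlated part; C is never formed explicitly
        return v + modes @ (modes.T @ v) / 20.0

    C = LinearOperator((n_pix, n_pix), matvec=apply_cov)
    d = rng.normal(size=n_pix)                     # toy data vector

    x, info = cg(C, d)                             # iterative solve of C x = d
    chi2 = d @ x                                   # chi^2 = d^T C^{-1} d
    print("CG converged:", info == 0, " chi^2 =", round(chi2, 1))
    ```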

  4. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    Science.gov (United States)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which uses Markov chains to provide a sample of the posterior distribution of all parameters, so that, via Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, our results suggest that the MCMC method has some advantages over MLE: for example, it provides the spectral index uncertainty without further computations, is computationally stable and detects multimodality.
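
    A minimal sketch of the MCMC side of such a comparison is given below, for a deliberately simplified model (linear rate plus white noise only; the study additionally models power-law noise and its spectral index). The data, priors and step sizes are synthetic assumptions.

    ```python
    # Illustrative sketch only; synthetic series, not the authors' implementation.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(500) / 365.25                     # time in years
    y = 3.0 * t + rng.normal(0.0, 2.0, t.size)      # synthetic series, rate 3 mm/yr

    def log_post(theta):
        b, log_sigma = theta
        sigma = np.exp(log_sigma)
        resid = y - b * t
        # Gaussian log-likelihood with flat priors on b and log(sigma)
        return -0.5 * np.sum(resid**2) / sigma**2 - y.size * log_sigma

    theta, lp = np.array([0.0, 0.0]), log_post([0.0, 0.0])
    step = np.array([0.2, 0.05])
    chain = []
    for _ in range(20000):
        prop = theta + step * rng.normal(size=2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])                  # discard burn-in
    print("posterior mean rate :", chain[:, 0].mean())
    print("posterior std  rate :", chain[:, 0].std())
    print("least-squares (MLE-like) rate:", np.polyfit(t, y, 1)[0])
    ```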

  5. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    International Nuclear Information System (INIS)

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user
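
    The core idea (fit a cheap polynomial surrogate to log-likelihood values gathered early in the chains, then evaluate the surrogate later) can be illustrated with the sketch below; this is not the InterpMC/CosmoMC code, and the two-parameter toy likelihood and polynomial degree are assumptions.

    ```python
    # Illustrative sketch only; toy likelihood, training set and degree are assumptions.
    import numpy as np
    from itertools import combinations_with_replacement

    def expensive_loglike(theta):
        # stand-in for an expensive likelihood (here: a smooth 2-D function)
        x, y = theta
        return -0.5 * ((x - 1.0)**2 + 2.0 * (y + 0.5)**2) - 0.1 * x * y

    def poly_features(thetas, degree):
        # monomials of total degree <= degree in the free parameters
        cols = [np.ones(len(thetas))]
        for d in range(1, degree + 1):
            for idx in combinations_with_replacement(range(thetas.shape[1]), d):
                cols.append(np.prod(thetas[:, idx], axis=1))
        return np.column_stack(cols)

    # "training set": points already visited by the chains
    rng = np.random.default_rng(2)
    train = rng.uniform(-2, 2, size=(400, 2))
    train_ll = np.array([expensive_loglike(t) for t in train])

    degree = 4
    coef, *_ = np.linalg.lstsq(poly_features(train, degree), train_ll, rcond=None)

    def surrogate_loglike(theta):
        return poly_features(np.atleast_2d(theta), degree) @ coef

    test = np.array([0.3, -0.2])
    print(expensive_loglike(test), float(surrogate_loglike(test)))  # should agree closely
    ```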

  6. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
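
    The PLE step mentioned above can be illustrated on a toy problem: hold the parameter of interest fixed on a grid, re-optimise the nuisance parameters at each point, and threshold the profile log-likelihood with a chi-squared quantile. The Gaussian toy model below is an assumption for illustration and has nothing to do with the metabolic network itself.

    ```python
    # Illustrative sketch only; toy Gaussian model, not the paper's ODE network.
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(3)
    data = rng.normal(2.0, 1.5, 100)

    def negloglik(params):
        mu, log_sigma = params
        return -np.sum(stats.norm.logpdf(data, mu, np.exp(log_sigma)))

    # global fit over (mu, log_sigma)
    fit = optimize.minimize(negloglik, x0=[0.0, 0.0])
    nll_hat = fit.fun

    def profile_nll(mu):
        # re-optimise the nuisance parameter with mu held fixed
        res = optimize.minimize_scalar(lambda ls: negloglik([mu, ls]),
                                       bounds=(-3, 3), method="bounded")
        return res.fun

    # 95% profile-likelihood interval: profile_nll - nll_hat <= chi2_{1,0.95} / 2
    threshold = stats.chi2.ppf(0.95, df=1) / 2.0
    grid = np.linspace(1.0, 3.0, 201)
    inside = np.array([profile_nll(m) - nll_hat for m in grid]) <= threshold
    print("95%% CI for mu: [%.3f, %.3f]" % (grid[inside].min(), grid[inside].max()))
    ```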

  7. Likelihood analysis of parity violation in the compound nucleus

    International Nuclear Information System (INIS)

    We discuss the determination of the root mean-squared matrix element of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element and errors assigned to the matrix element is stressed. We treat the situations where the spins of the p-wave resonances are known and where they are not known, using experimental data and Monte Carlo techniques. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function

  8. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric

  9. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  10. Seasonal transmission potential and activity peaks of the new influenza A(H1N1): a Monte Carlo likelihood analysis based on human mobility

    Directory of Open Access Journals (Sweden)

    Paolotti Daniela

    2009-09-01

    Background: On 11 June the World Health Organization officially raised the phase of pandemic alert (with regard to the new H1N1 influenza strain) to level 6. As of 19 July, 137,232 cases of the H1N1 influenza strain have been officially confirmed in 142 different countries, and the pandemic unfolding in the Southern hemisphere is now under scrutiny to gain insights about the next winter wave in the Northern hemisphere. A major challenge is posed by the need to estimate the transmission potential of the virus and to assess its dependence on seasonality aspects in order to be able to use numerical models capable of projecting the spatiotemporal pattern of the pandemic. Methods: In the present work, we use a global structured metapopulation model integrating mobility and transportation data worldwide. The model considers data on 3,362 subpopulations in 220 different countries and individual mobility across them. The model generates stochastic realizations of the epidemic evolution worldwide considering 6 billion individuals, from which we can gather information such as prevalence, morbidity, number of secondary cases and number and date of imported cases for each subpopulation, all with a time resolution of 1 day. In order to estimate the transmission potential and the relevant model parameters we used the data on the chronology of the 2009 novel influenza A(H1N1). The method is based on the maximum likelihood analysis of the arrival time distribution generated by the model in 12 countries seeded by Mexico by using 1 million computationally simulated epidemics. An extended chronology including 93 countries worldwide seeded before 18 June was used to ascertain the seasonality effects. Results: We found the best estimate R0 = 1.75 (95% confidence interval (CI) 1.64 to 1.88) for the basic reproductive number. Correlation analysis allows the selection of the most probable seasonal behavior based on the observed pattern, leading to the

  11. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik;

    2008-01-01

    estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty within the context of Monte Carlo (MC) analysis coupled with Bayesian estimation and propagation of uncertainty. Because of its flexibility, ease of implementation and its suitability for parallel implementation on distributed computer systems, the GLUE method has been used in a wide variety of applications. However, the MC based sampling strategy of the prior parameter space typically utilized in GLUE is not particularly efficient in finding behavioral simulations. This becomes especially problematic for high-dimensional parameter estimation problems, and in the case of complex simulation models...
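
    A compact sketch of the GLUE recipe referred to above (prior sampling, an informal likelihood measure, a behavioural threshold, and likelihood-weighted prediction bounds) is given below; the exponential toy model, threshold and prior ranges are invented for illustration.

    ```python
    # Illustrative sketch only; toy model, prior ranges and threshold are assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 50)
    obs = 5.0 * np.exp(-0.4 * t) + rng.normal(0, 0.2, t.size)   # synthetic "observations"

    def model(a, b):
        return a * np.exp(-b * t)

    n_samples = 20000
    a = rng.uniform(1.0, 10.0, n_samples)                        # samples from the prior
    b = rng.uniform(0.05, 1.0, n_samples)
    sims = model(a[:, None], b[:, None])                         # (n_samples, len(t))

    # informal likelihood: Nash-Sutcliffe efficiency of each simulation
    nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
    behavioural = nse > 0.9                                      # subjective threshold
    weights = nse[behavioural] / nse[behavioural].sum()
    print("behavioural parameter sets:", int(behavioural.sum()))

    # likelihood-weighted 5-95% uncertainty bounds for the prediction near t = 5
    j = t.searchsorted(5.0)
    pred = sims[behavioural, j]
    order = np.argsort(pred)
    cdf = np.cumsum(weights[order])
    lo, hi = pred[order][np.searchsorted(cdf, [0.05, 0.95])]
    print("5-95%% bounds near t=5: %.2f to %.2f" % (lo, hi))
    ```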

  12. Likelihood analysis of large-scale flows

    CERN Document Server

    Jaffe, A; Jaffe, Andrew; Kaiser, Nick

    1994-01-01

    We apply a likelihood analysis to the data of Lauer & Postman 1994. With P(k) parametrized by (σ_8, Γ), the likelihood function peaks at σ_8 ≃ 0.9, Γ ≃ 0.05, indicating at face value very strong large-scale power, though at a level incompatible with COBE. There is, however, a ridge of likelihood such that more conventional power spectra do not seem strongly disfavored. The likelihood calculated using as data only the components of the bulk flow solution peaks at higher σ_8, as suggested by other analyses, but is rather broad. The likelihood incorporating both bulk flow and shear gives a different picture. The components of the shear are all low, and this pulls the peak to lower amplitudes as a compromise. The velocity data alone are therefore consistent with models with very strong large scale power which generates a large bulk flow, but the small shear (which also probes fairly large scales) requires that the power would have to be at very large scales, which is...

  13. cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    CERN Document Server

    Ishida, E E O; Penna-Lima, M; Cisewski, J; de Souza, R S; Trindade, A M M; Cameron, E

    2015-01-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present cosmoabc, a Python ABC sampler featuring a Population Monte Carlo (PMC) variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled cosmoabc with the numcosmo library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. cosmoabc is published under the GPLv3 license on PyPI and GitHub and documentation is availabl...
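
    The core ABC idea behind such samplers can be sketched with a plain rejection sampler; cosmoabc itself implements an adaptive Population Monte Carlo variant, and the code below is not its API. The Poisson toy problem is an assumption made for illustration.

    ```python
    # Illustrative ABC rejection sketch only; toy problem, not the cosmoabc API.
    import numpy as np

    rng = np.random.default_rng(5)
    true_rate = 4.2
    observed = rng.poisson(true_rate, 100)          # mock observed "number counts"
    obs_summary = observed.mean()

    def simulator(rate):
        return rng.poisson(rate, 100)

    def distance(sim):
        return abs(sim.mean() - obs_summary)

    n_draws, epsilon = 50000, 0.05
    prior_draws = rng.uniform(0.0, 10.0, n_draws)    # flat prior on the rate
    accepted = [r for r in prior_draws if distance(simulator(r)) < epsilon]

    posterior = np.array(accepted)
    print("accepted draws:", posterior.size)
    print("posterior mean and std: %.2f +/- %.2f" % (posterior.mean(), posterior.std()))
    ```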

  14. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad-hoc, empirical assumptions, thus their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  15. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum...

  16. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  17. Evaluating Network Models: A Likelihood Analysis

    CERN Document Server

    Wang, Wen-Qiang; Zhou, Tao

    2011-01-01

    Many models are put forward to mimic the evolution of real networked systems. A well-accepted way to judge the validity is to compare the modeling results with real networks subject to several structural features. Even for a specific real network, we cannot fairly evaluate the goodness of different models since there are too many structural features while there is no criterion to select and assign weights on them. Motivated by the studies on link prediction algorithms, we propose a unified method to evaluate the network models via the comparison of the likelihoods of the currently observed network driven by different models, with an assumption that the higher the likelihood is, the better the model is. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both models are better than the Barabási-Albert (BA) and Erdős-Rényi (ER) models. Our metho...

  18. Profile likelihood ratio analysis techniques for rare event signals

    CERN Document Server

    Billard, J

    2013-01-01

    The Cryogenic Dark Matter Search (CDMS) II uses crystals operated at millikelvin temperatures to search for dark matter. We present the details of the profile likelihood analysis of the 140.2 kg-day exposure from the final data set of the CDMS II Si detectors that revealed three WIMP-candidate events. We found that this result favors a WIMP+background hypothesis over the known-background-only hypothesis at the 99.81% confidence level. This paper describes the profile likelihood analysis of the CDMS II Si data and discusses such analysis techniques in the scope of rare event searches.
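
    For a flavour of what a likelihood ratio test looks like in a rare-event counting setting, here is a hedged toy sketch; the CDMS analysis itself is an unbinned multivariate fit, and the event counts below are made up.

    ```python
    # Illustrative sketch only; counts and background are assumptions.
    import numpy as np
    from scipy import stats

    n_obs = 3                                      # observed events (made up)
    b = 0.7                                        # expected background (assumed known here)

    def nll(mu):
        lam = mu + b                               # expected events for signal strength mu
        return lam - n_obs * np.log(lam)           # negative log Poisson likelihood, constants dropped

    mu_hat = max(0.0, n_obs - b)                   # maximum likelihood signal estimate
    q0 = 2.0 * (nll(0.0) - nll(mu_hat))            # likelihood ratio statistic for background-only
    p_value = 0.5 * stats.chi2.sf(q0, df=1)        # asymptotic half-chi-square approximation
    print("q0 = %.2f, p = %.4f, significance = %.2f sigma"
          % (q0, p_value, stats.norm.isf(p_value)))
    ```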

  19. Maximum likelihood factor analysis of the reactor coolant pump system

    International Nuclear Information System (INIS)

    In today's operating environment of nuclear power plants, setpoints are established for key plant parameters, such as temperature, pressure, and flow rate. Reducing excursions beyond these setpoints would save millions of dollars as a result of improved plant availability and improve plant safety as well. The statistical method of maximum likelihood factor analysis is presented, and the results of two computer runs are given. The results of the statistical analysis indicate that it is possible to consistently rank order the eleven tracked variables of the reactor coolant system. Implementation of the maximum likelihood factor method would permit the decision maker to predict unanticipated transients and reduce plant unavailability

  20. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    Science.gov (United States)

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  1. MLE [Maximum Likelihood Estimator] reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    International Nuclear Information System (INIS)

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data
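
    The MLE (MLEM) iteration itself can be sketched with a toy system matrix; this is only a generic illustration of the multiplicative update, not the ECAT-III reconstruction, and all sizes are invented.

    ```python
    # Illustrative sketch only; toy system matrix and image, not the study's data.
    import numpy as np

    rng = np.random.default_rng(6)
    n_pixels, n_bins = 64, 128
    A = rng.random((n_bins, n_pixels))             # toy transition (system) matrix
    A /= A.sum(axis=0, keepdims=True)              # normalise detection probability per pixel
    true_image = rng.gamma(2.0, 50.0, n_pixels)    # toy emission image
    counts = rng.poisson(A @ true_image)           # simulated projection data

    image = np.ones(n_pixels)                      # flat initial estimate
    sensitivity = A.sum(axis=0)                    # A^T 1
    for _ in range(50):                            # a stopping rule would normally end this loop
        expected = A @ image
        image *= (A.T @ (counts / np.maximum(expected, 1e-12))) / sensitivity

    print("correlation with the true image:", round(np.corrcoef(image, true_image)[0, 1], 3))
    ```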

  2. Variations on KamLAND: likelihood analysis and frequentist confidence regions

    International Nuclear Information System (INIS)

    In this Letter the robustness of the first results from the KamLAND reactor neutrino experiment with respect to variations in the statistical analysis is considered. It is shown that an event-by-event based likelihood analysis provides a more powerful tool to extract information from the currently available data sample than a least-squares method based on energy binned data. Furthermore, a frequentist analysis of KamLAND data is performed. Confidence regions with correct coverage in the plane of the oscillation parameters are calculated by means of a Monte Carlo simulation. I find that the results of the usually adopted χ2-cut approximation are in reasonable agreement with the exact confidence regions, however, quantitative differences are detected. Finally, although the current data is consistent with an energy independent flux suppression, a ∼2σ indication in favour of oscillations can be stated, implying quantum mechanical interference over distances of the order of 200 km

  3. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    CERN Document Server

    Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J

    2014-01-01

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from 210Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  4. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  5. Towards integration of compositional risk analysis using Monte Carlo simulation and security testing

    OpenAIRE

    Viehmann, Johannes

    2014-01-01

    This short paper describes ongoing efforts to combine concepts of security risk analysis with security testing into a single process. Using risk analysis artefact composition and Monte Carlo simulation to calculate likelihood values, the method described here is intended to become applicable for complex large scale systems with dynamically changing probability values.

  6. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  7. "No-background" maximum likelihood analysis in HBT interferometry

    International Nuclear Information System (INIS)

    We present a new 'no-background' procedure, based on the maximum likelihood method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy ion collisions. This procedure uses an approximation to avoid the necessity of constructing a mixed-event background before fitting the data. (orig.)

  8. ANALYSIS OF CONSUMER ATTITUDES TOWARD ORGANIC PRODUCE PURCHASE LIKELIHOOD

    OpenAIRE

    Byrne, Patrick J.; Toensmeyer, Ulrich C.; German, Carl L.; Muller, H. Reed

    1991-01-01

    This study demographically determines: which consumers are currently buying organic produce; consumer comparisons of organic and conventional produce; and consumer purchase likelihood of higher-priced organic produce. Data were collected from a Delaware consumer survey, dealing with fresh produce and food safety. Multinomial and ordered logit models were developed to generate marginal effects of age, gender, education, and income. Increasing age, males, and advancing education demonstrated po...

  9. LISA data analysis using Markov chain Monte Carlo methods

    International Nuclear Information System (INIS)

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions

  10. Likelihood functions for the analysis of single-molecule binned photon sequences

    International Nuclear Information System (INIS)

    Graphical abstract: Folding of a protein with attached fluorescent dyes, the underlying conformational trajectory of interest, and the observed binned photon trajectory. Highlights: A sequence of photon counts can be analyzed using a likelihood function. The exact likelihood function for a two-state kinetic model is provided. Several approximations are considered for an arbitrary kinetic model. Improved likelihood functions are obtained to treat sequences of FRET efficiencies. Abstract: We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained.
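
    A sketch of how such an exact likelihood can be evaluated for a two-state kinetic model with state-dependent Poisson photon statistics is given below, using a standard forward (matrix-product) recursion; the rates, transition probabilities and trajectory length are invented for illustration.

    ```python
    # Illustrative sketch only; toy rates and transition probabilities are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    rates = np.array([2.0, 8.0])                     # mean photons per bin in states 0, 1
    k01, k10 = 0.05, 0.10                            # per-bin transition probabilities
    T = np.array([[1 - k01, k01], [k10, 1 - k10]])
    pi = np.array([k10, k01]) / (k01 + k10)          # stationary initial distribution

    # simulate a binned photon trajectory
    n_bins, state, counts = 2000, 0, []
    for _ in range(n_bins):
        counts.append(rng.poisson(rates[state]))
        state = rng.choice(2, p=T[state])
    counts = np.array(counts)

    def log_likelihood(rates, T, pi, counts):
        """Forward recursion: sums over hidden state paths in O(n_bins) time."""
        log_alpha = np.log(pi) + stats.poisson.logpmf(counts[0], rates)
        for c in counts[1:]:
            log_alpha = (stats.poisson.logpmf(c, rates)
                         + np.logaddexp.reduce(log_alpha[:, None] + np.log(T), axis=0))
        return np.logaddexp.reduce(log_alpha)

    print("log-likelihood of the trajectory:", log_likelihood(rates, T, pi, counts))
    ```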

  11. Improving the accuracy of likelihood-based inference in meta-analysis and meta-regression

    OpenAIRE

    Kosmidis, Ioannis; Guolo, Annamaria; Varin, Cristiano

    2015-01-01

    Random-effects models are frequently used to synthesise information from different studies in meta-analysis. While likelihood-based inference is attractive both in terms of limiting properties and in terms of implementation, its application in random-effects meta-analysis may result in misleading conclusions, especially when the number of studies is small to moderate. The current paper shows how methodology that reduces the asymptotic bias of the maximum likelihood estimator of the variance c...

  12. An I(2) Cointegration Model with Piecewise Linear Trends: Likelihood Analysis and Application

    DEFF Research Database (Denmark)

    Kurita, Takamitsu; Nielsen, Heino Bohn; Rahbæk, Anders

    This paper presents likelihood analysis of the I(2) cointegrated vector autoregression with piecewise linear deterministic terms. The limiting behavior of the maximum likelihood estimators is derived, which is used to further derive the limiting distribution of the likelihood ratio statistic for the cointegration ranks, extending the result for I(2) models with a linear trend in Nielsen and Rahbek (2007) and for I(1) models with piecewise linear trends in Johansen, Mosconi, and Nielsen (2000). The provided asymptotic theory extends also the results in Johansen, Juselius, Frydman, and Goldberg (2009) where...

  13. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the G^0 law. This paper deals with amplitude data, so the G_A^0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the G_A^0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  14. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  15. Evaluation of likelihood functions for data analysis on Graphics Processing Units

    CERN Document Server

    Jarp, Sverre; Leduc, J; Nowak, A; Pantaleo, F

    2010-01-01

    Data analysis techniques based on likelihood function calculation play a crucial role in many High Energy Physics measurements. Depending on the complexity of the models used in the analyses, with several free parameters, many independent variables, large data samples, and complex functions, the calculation of the likelihood functions can require a long CPU execution time. In the past, the continuous gain in performance for each single CPU core kept pace with the increase in the complexity of the analyses, keeping the execution time of the sequential software applications reasonable. Nowadays, the performance of single cores is not increasing as in the past, while the complexity of the analyses has grown significantly in the Large Hadron Collider era. In this context a breakthrough is represented by the increase in the number of computational cores per computational node. This makes it possible to speed up the execution of the applications by redesigning them with parallelization paradigms. The likelihood function ...

  16. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: (1) the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); (2) start-up staffing and other costs associated with the Phase 1 SCAT energy organization; (3) an intern program; (4) staff training; and (5) tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  17. Challenges and prospects for whole-core Monte Carlo analysis

    International Nuclear Information System (INIS)

    The advantages of using Monte Carlo methods to analyze full-core reactor configurations include essentially exact representation of geometry and physical phenomena that are important for reactor analysis. But this substantial advantage comes at a substantial cost because of the computational burden, both in terms of memory demand and computational time. This paper focuses on the challenges facing full-core Monte Carlo for keff calculations and the prospects for Monte Carlo becoming a routine tool for reactor analysis.

  18. Gray Matter Alterations in Obsessive–Compulsive Disorder: An Anatomic Likelihood Estimation Meta-Analysis

    OpenAIRE

    Rotge, Jean-Yves; Langbour, Nicolas; Guehl, Dominique; Bioulac, Bernard; Jaafari, Nematollah; Allard, Michele; Aouizerate, Bruno; Burbaud, Pierre

    2009-01-01

    Many voxel-based morphometry (VBM) studies have found abnormalities in gray matter density (GMD) in obsessive–compulsive disorder (OCD). Here, we performed a quantitative meta-analysis of VBM studies contrasting OCD patients with healthy controls (HC). A literature search identified 10 articles that included 343 OCD patients and 318 HC. Anatomic likelihood estimation meta-analyses were performed to assess GMD changes in OCD patients relative to HC. GMD was smaller in parieto-frontal cortical ...

  19. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    International Nuclear Information System (INIS)

    Highlights: We study model selection and parameter recovery from single-molecule FRET experiments. We examine the maximum likelihood-based analysis of two-color photon trajectories. The number of observed photons determines the performance of the method. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  20. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    Science.gov (United States)

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. PMID:26837056
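
    The pooling step can be sketched as follows, using a normal approximation to each study's likelihood and an illustrative 1/8 support cut-off (the paper defines its own intrinsic interval); the study estimates and standard errors below are made up.

    ```python
    # Illustrative sketch only; per-study estimates, SEs and the 1/8 cut-off are assumptions.
    import numpy as np
    from scipy import stats

    # per-study effect estimates (e.g. log relative risks) and standard errors (made up)
    estimates = np.array([-0.15, -0.05, -0.22, 0.02])
    ses       = np.array([ 0.10,  0.08,  0.15, 0.12])

    theta = np.linspace(-0.6, 0.4, 2001)
    # log LR_i(theta) = log L_i(theta) - log L_i(theta_hat_i), normal approximation
    log_lr = sum(stats.norm.logpdf(est, theta, se) - stats.norm.logpdf(est, est, se)
                 for est, se in zip(estimates, ses))

    combined = theta[np.argmax(log_lr)]
    support = theta[log_lr >= log_lr.max() + np.log(1.0 / 8.0)]
    print("combined estimate: %.3f" % combined)
    print("1/8 likelihood interval: [%.3f, %.3f]" % (support.min(), support.max()))
    ```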

  1. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  2. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    Science.gov (United States)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
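
    The variance-reduction idea (bias the sampling and correct the payoff with a likelihood ratio so the expected value is retained while the variance shrinks) is illustrated below on a much simpler problem than rarefied diffusion: estimating a small Gaussian tail probability.

    ```python
    # Illustrative sketch only; a generic importance-sampling example, not the paper's scheme.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    n = 100_000
    threshold = 4.0                          # estimate P(X > 4) for X ~ N(0, 1)

    # plain Monte Carlo: very few samples ever land in the tail
    x = rng.normal(size=n)
    plain = (x > threshold).astype(float)

    # biased sampling from N(4, 1), with the likelihood-ratio "payoff" correction
    y = rng.normal(threshold, 1.0, size=n)
    weights = stats.norm.pdf(y) / stats.norm.pdf(y, threshold, 1.0)
    biased = (y > threshold) * weights

    exact = stats.norm.sf(threshold)
    print("exact            : %.2e" % exact)
    print("plain MC         : %.2e +/- %.1e" % (plain.mean(), plain.std(ddof=1) / np.sqrt(n)))
    print("importance sample: %.2e +/- %.1e" % (biased.mean(), biased.std(ddof=1) / np.sqrt(n)))
    ```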

  3. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees and the model parameter estimates which have the smallest mean square errors, yet costs the least CPU time.

  4. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
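
    The likelihood-ratio trick alluded to in the abstract can be sketched with a toy one-dimensional example: a probabilistic classifier trained to separate samples generated under two parameter points recovers their density ratio via s/(1-s). The Gaussian toy data below are an assumption, not LHC data.

    ```python
    # Illustrative sketch only; toy 1-D Gaussian data, equal class priors assumed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    x0 = rng.normal(0.0, 1.0, (50000, 1))     # samples from p(x | theta_0)
    x1 = rng.normal(0.5, 1.0, (50000, 1))     # samples from p(x | theta_1)

    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
    clf = LogisticRegression().fit(X, y)

    x_test = np.array([[1.0]])
    s = clf.predict_proba(x_test)[0, 1]
    lr_estimated = s / (1.0 - s)              # approximates p(x|theta_1) / p(x|theta_0)

    # exact ratio for this toy example, for comparison
    lr_exact = np.exp(-0.5 * ((1.0 - 0.5) ** 2 - (1.0 - 0.0) ** 2))
    print("estimated LR %.3f vs exact LR %.3f" % (lr_estimated, lr_exact))
    ```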

  5. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
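
    One of the simplest cases treated in the book, empirical likelihood for a univariate mean under IID sampling, can be sketched as follows; the exponential toy data and grid are assumptions made for illustration.

    ```python
    # Illustrative sketch only; toy data and grid are assumptions.
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(9)
    x = rng.exponential(2.0, 80)

    def neg2_log_elr(mu):
        d = x - mu
        # Lagrange multiplier lam solves sum(d / (1 + lam * d)) = 0;
        # lam must keep all implied weights positive: 1 + lam * d > 0
        lo = (-1.0 + 1e-8) / d.max()
        hi = (-1.0 + 1e-8) / d.min()
        lam = optimize.brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
        return 2.0 * np.sum(np.log(1.0 + lam * d))

    # -2 log R(mu) is asymptotically chi-square(1); invert it for a 95% interval
    cutoff = stats.chi2.ppf(0.95, df=1)
    grid = np.linspace(x.mean() - 1.5, x.mean() + 1.5, 301)
    inside = [mu for mu in grid if x.min() < mu < x.max() and neg2_log_elr(mu) <= cutoff]
    print("95%% empirical likelihood interval for the mean: [%.3f, %.3f]"
          % (min(inside), max(inside)))
    ```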

  6. Maximum Likelihood Analysis of Neutron Beta Decay Observables to Resolve the Limits of the V-A Law

    CERN Document Server

    Gardner, S

    2013-01-01

    We assess the ability of future neutron beta decay measurements of up to O(10^{-4}) precision to falsify the standard model, particularly the V-A law, and to identify the dynamics beyond it. To do this, we employ a maximum likelihood statistical framework which incorporates both experimental and theoretical uncertainties. Using illustrative combined global fits to Monte Carlo pseudodata, we also quantify the importance of experimental measurements of the energy dependence of the angular correlation coefficients as input to such efforts, and we determine the precision to which ill-known "second-class" hadronic matrix elements must be determined in order to exact such tests.

  7. Attractor Neural Network Combined with Likelihood Maximization Algorithm for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Polyakov, P.Y.

    Vol. 1. Berlin: Springer, 2012 - (Wang, J.; Yen, G.; Polycarpou, M.), pp. 1-10. (Lecture Notes in Computer Science. 7367). ISBN 978-3-642-31345-5. ISSN 0302-9743. [ISNN 2012. International Symposium on Neural Networks /9./. Shenyang (CN), 11.07.2012-14.07.2012] R&D Projects: GA ČR GAP202/10/0262 Other grants: GA MŠk(CZ) ED1.1.00/02.0070 Institutional research plan: CEZ:AV0Z10300504 Keywords: Associative Neural Network * Likelihood Maximization * Boolean Factor Analysis * Binary Matrix factorization * Noise XOR Mixing * Plato Problem * Information Gain * Bars problem * Data Mining * Dimension Reduction * Hebbian Learning * Anti-Hebbian Learning Subject RIV: IN - Informatics, Computer Science

  8. A maximum likelihood analysis of the CoGeNT public dataset

    Science.gov (United States)

    Kelso, Chris

    2016-06-01

    The CoGeNT detector, located in the Soudan Underground Laboratory in Northern Minnesota, consists of a 475 gram (330 gram fiducial mass) p-type point-contact germanium detector that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.

  9. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.

    Science.gov (United States)

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-07-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement. PMID:27084950

  10. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    In order to determine the components which give rise to the EPR spectrum around g = 2, we have applied Maximum Likelihood Common Factor Analysis (MLCFA) on the EPR spectra of enamel sample 1126, which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded agreeing results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO2^-, an axial component CO3^3-, as well as four isotropic components, three of which could be attributed to SO2^-, a tumbling CO2^- and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods

  11. Structural and functional neural adaptations in obstructive sleep apnea: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Tahmasian, Masoud; Rosenzweig, Ivana; Eickhoff, Simon B; Sepehry, Amir A; Laird, Angela R; Fox, Peter T; Morrell, Mary J; Khazaie, Habibolah; Eickhoff, Claudia R

    2016-06-01

    Obstructive sleep apnea (OSA) is a common multisystem chronic disorder. Functional and structural neuroimaging has been widely applied in patients with OSA, but these studies have often yielded diverse results. The present quantitative meta-analysis aims to identify consistent patterns of abnormal activation and grey matter loss in OSA across studies. We used PubMed to retrieve task/resting-state functional magnetic resonance imaging and voxel-based morphometry studies. Stereotactic data were extracted from fifteen studies, and subsequently tested for convergence using activation likelihood estimation. We found convergent evidence for structural atrophy and functional disturbances in the right basolateral amygdala/hippocampus and the right central insula. Functional characterization of these regions using the BrainMap database suggested associated dysfunction of emotional, sensory, and limbic processes. Assessment of task-based co-activation patterns furthermore indicated that the two regions obtained from the meta-analysis are part of a joint network comprising the anterior insula, posterior-medial frontal cortex and thalamus. Taken together, our findings highlight the role of right amygdala, hippocampus and insula in the abnormal emotional and sensory processing in OSA. PMID:27039344

  12. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (fNL) in the recent CMB data

    International Nuclear Information System (INIS)

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (fNL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained fNL and the cosmological parameters so that the uncertainties of the cosmological parameters can properly propagate into the fNL estimation. Investigating the parameter likelihoods deduced from the MCMC samples, we find a slight deviation from a Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of fNL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis agree well with other results, but the confidence interval is slightly different.
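
    A minimal random-walk Metropolis sketch of the kind of MCMC exploration described above, applied to a toy one-dimensional Gaussian likelihood rather than the WMAP bispectrum/power-spectrum likelihood; the target, step size and chain length are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_likelihood(theta):
            # Toy stand-in for a real data likelihood: a Gaussian centred at 1.0 with width 0.5.
            return -0.5 * ((theta - 1.0) / 0.5) ** 2

        def metropolis(n_steps, step=0.3, theta0=0.0):
            chain = np.empty(n_steps)
            theta, logl = theta0, log_likelihood(theta0)
            for i in range(n_steps):
                prop = theta + step * rng.normal()
                logl_prop = log_likelihood(prop)
                if np.log(rng.uniform()) < logl_prop - logl:   # Metropolis acceptance rule
                    theta, logl = prop, logl_prop
                chain[i] = theta
            return chain

        samples = metropolis(20000)[2000:]          # discard burn-in
        lo, hi = np.percentile(samples, [16, 84])   # 68% interval read off the samples
        print(f"posterior mean {samples.mean():.3f}, 68% interval [{lo:.3f}, {hi:.3f}]")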

  13. Likelihood Analysis of Cosmic Shear on Simulated and VIRMOS-DESCART Data

    CERN Document Server

    Van Waerbeke, L; Pellò, R; Pen, U L; McCracken, H J; Jain, B

    2002-01-01

    We present a maximum likelihood analysis of cosmological parameters from measurements of the aperture mass up to 35 arcmin, using simulated and real cosmic shear data. A four-dimensional parameter space is explored which examines the mean density \Omega_M, the mass power spectrum normalization \sigma_8, the shape parameter \Gamma and the redshift of the sources z_s. Constraints on \Omega_M and \sigma_8 (resp. \Gamma and z_s) are then given by marginalizing over \Gamma and z_s (resp. \Omega_M and \sigma_8). For a flat LCDM cosmology, using a photometric redshift prior for the sources and \Gamma \in [0.1,0.4], we find \sigma_8=(0.57\pm0.04) \Omega_M^{(0.24\mp 0.18) \Omega_M-0.49} at the 68% confidence level (the error budget includes statistical noise, full cosmic variance and residual systematics). The estimate of \Gamma, marginalized over \Omega_M \in [0.1,0.4], \sigma_8 \in [0.7,1.3] and z_s constrained by photometric redshifts, gives \Gamma=0.25\pm 0.13 at 68% confidence. Adopting h=0.7, a flat universe, \...

  14. Assessing evidentiary value in fire debris analysis by chemometric and likelihood ratio approaches.

    Science.gov (United States)

    Sigman, Michael E; Williams, Mary R

    2016-07-01

    Results are presented from support vector machine (SVM), linear and quadratic discriminant analysis (LDA and QDA) and k-nearest neighbors (kNN) methods of binary classification of fire debris samples as positive or negative for ignitable liquid residue. Training samples were prepared by computationally mixing data from ignitable liquid and substrate pyrolysis databases. Validation was performed on an unseen set of computationally mixed (in silico) data and on fire debris from large-scale research burns. The probabilities of class membership were calculated using an uninformative (equal) prior and a likelihood ratio was calculated from the resulting class membership probabilities. The SVM method demonstrated high discrimination, a low error rate and good calibration for the in silico validation data; however, the performance decreased significantly for the fire debris validation data, as indicated by a significant increase in the error rate and a decrease in the calibration. The QDA and kNN methods showed similar performance trends. The LDA method gave poorer discrimination, higher error rates and slightly poorer calibration for the in silico validation data; however, the performance did not deteriorate for the fire debris validation data. PMID:27081767
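
    A small sketch of the final step described above, turning binary class-membership probabilities obtained under an equal (uninformative) prior into a likelihood ratio; scikit-learn, LDA and synthetic two-class data are assumed here purely for illustration and are not the study's computationally mixed spectra.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        # Synthetic stand-in data: class 1 = "ignitable liquid residue present", class 0 = absent.
        X = np.vstack([rng.normal(0.0, 1.0, size=(200, 5)), rng.normal(0.8, 1.0, size=(200, 5))])
        y = np.array([0] * 200 + [1] * 200)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)       # LDA; SVM/QDA/kNN would be analogous
        p = clf.predict_proba(X_te)                              # columns: P(class 0 | x), P(class 1 | x)

        # With (near-)equal training priors, the posterior odds approximate the likelihood ratio.
        eps = 1e-12
        lr = p[:, 1] / np.maximum(p[:, 0], eps)
        print("first five likelihood ratios:", np.round(lr[:5], 3))
        print("test error rate:", np.mean(clf.predict(X_te) != y_te))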

  15. Monte Carlo Radiation Analysis of a Spacecraft Radioisotope Power System

    Science.gov (United States)

    Wallace, M.

    1994-01-01

    A Monte Carlo statistical computer analysis was used to create neutron and photon radiation predictions for the General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS RTG). The GPHS RTG is being used on several NASA planetary missions. Analytical results were validated using measured health physics data.

  16. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly 9 times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
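
    Independently of the sampler (ɛ-NSGAII or LHS), the GLUE step itself amounts to weighting behavioral parameter sets by a likelihood measure and forming weighted prediction bounds; the bare-bones sketch below uses a toy one-parameter model, a Nash-Sutcliffe likelihood measure and a 0.7 behavioral threshold as illustrative assumptions, not the XAJ setup of the study.

        import numpy as np

        rng = np.random.default_rng(2)
        obs = np.array([2.0, 3.5, 5.0, 4.0, 2.5])                 # toy "observed flows"

        def model(theta):
            return theta * np.array([1.0, 1.8, 2.5, 2.0, 1.2])    # toy one-parameter rainfall-runoff model

        # 1. Sample candidate parameter sets (LHS or ɛ-NSGAII would go here; plain MC for the sketch).
        thetas = rng.uniform(0.5, 4.0, size=5000)
        sims = np.array([model(t) for t in thetas])

        # 2. Likelihood measure: Nash-Sutcliffe efficiency; keep "behavioral" sets above a threshold.
        nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
        behavioral = nse > 0.7
        w = nse[behavioral] - 0.7
        w /= w.sum()                                              # GLUE likelihood weights

        # 3. Likelihood-weighted 5%/95% prediction bounds at each time step.
        order = np.argsort(sims[behavioral], axis=0)
        for t in range(len(obs)):
            s = sims[behavioral][:, t][order[:, t]]
            cw = np.cumsum(w[order[:, t]])
            lo, hi = s[np.searchsorted(cw, 0.05)], s[np.searchsorted(cw, 0.95)]
            print(f"t={t}: obs={obs[t]:.2f}, 90% GLUE band [{lo:.2f}, {hi:.2f}]")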

  17. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
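
    As a concrete, forward (analog) illustration of the kind of Markov reliability estimate discussed above, the sketch below estimates the mean unavailability over [0, T] of a single repairable component with constant failure and repair rates; the rates and time horizon are arbitrary, and none of the adjoint weighting of the paper is applied.

        import numpy as np

        rng = np.random.default_rng(3)
        lam, mu, T = 1e-3, 1e-1, 1000.0     # failure rate, repair rate, mission time (illustrative)

        def downtime_one_history():
            """Simulate one up/down history of the component and return its downtime in [0, T]."""
            t, up, down = 0.0, True, 0.0
            while t < T:
                dt = rng.exponential(1.0 / (lam if up else mu))
                dt = min(dt, T - t)
                if not up:
                    down += dt
                t += dt
                up = not up
            return down

        n = 20000
        d = np.array([downtime_one_history() for _ in range(n)])
        q_mean = d.mean() / T                                  # mean unavailability over [0, T]
        print(f"MC mean unavailability: {q_mean:.2e} +/- {d.std(ddof=1) / np.sqrt(n) / T:.1e}")
        print(f"asymptotic lam/(lam+mu): {lam / (lam + mu):.2e}")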

  18. Altered sensorimotor activation patterns in idiopathic dystonia-an activation likelihood estimation meta-analysis of functional brain imaging studies

    DEFF Research Database (Denmark)

    Løkkegaard, Annemette; Herz, Damian M; Haagensen, Brian N;

    2016-01-01

    … Further, study size was usually small, including different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity. … Hum Brain Mapp 37:547-557, 2016. © 2015 Wiley Periodicals, Inc.

  19. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Thomas P. DeRamus

    2015-01-01

    Full Text Available Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD, may entail altered cortical anatomy and neurodevelopmental adaptations.

  20. A meta-analysis of neuroimaging studies on divergent thinking using activation likelihood estimation.

    Science.gov (United States)

    Wu, Xin; Yang, Wenjing; Tong, Dandan; Sun, Jiangzhou; Chen, Qunlin; Wei, Dongtao; Zhang, Qinglin; Zhang, Meng; Qiu, Jiang

    2015-07-01

    In this study, an activation likelihood estimation (ALE) meta-analysis was used to conduct a quantitative investigation of neuroimaging studies on divergent thinking. Based on the ALE results, the functional magnetic resonance imaging (fMRI) studies showed that distributed brain regions were more active under divergent thinking tasks (DTTs) than under control tasks, but a large portion of the brain regions were deactivated. The ALE results indicated that the brain networks of creative idea generation in DTTs may be composed of the lateral prefrontal cortex, posterior parietal cortex [such as the inferior parietal lobule (BA 40) and precuneus (BA 7)], anterior cingulate cortex (ACC) (BA 32), and several regions in the temporal cortex [such as the left middle temporal gyrus (BA 39), and left fusiform gyrus (BA 37)]. The left dorsolateral prefrontal cortex (BA 46) was related to selecting the loosely and remotely associated concepts and organizing them into creative ideas, whereas the ACC (BA 32) was related to observing and forming distant semantic associations in performing DTTs. The posterior parietal cortex may be involved in the retrieval and buffering of the semantic information related to the formed creative ideas, and several regions in the temporal cortex may be related to the stored long-term memory. In addition, the ALE results of the structural studies showed that divergent thinking was related to the dopaminergic system (e.g., left caudate and claustrum). Based on the ALE results, both fMRI and structural MRI studies could uncover the neural basis of divergent thinking from different aspects (e.g., specific cognitive processing and stable individual differences in cognitive capability). PMID:25891081

  1. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Science.gov (United States)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  2. Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data

    Science.gov (United States)

    Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.

    2010-01-01

    Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about at what levels of depression severity the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…

  3. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    Science.gov (United States)

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  4. A Monte Carlo template-based analysis for very high definition imaging atmospheric Cherenkov telescopes as applied to the VERITAS telescope array

    CERN Document Server


    2015-01-01

    We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by maximising a likelihood function with respect to the shower fit parameters, using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower compared to analyses based on the second moments of the camera image. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
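
    The core of any template-based reconstruction is numerical maximisation of a pixel likelihood over shower parameters. The toy sketch below fits the amplitude, centre and width of a 1-D Gaussian "image template" to noisy pixel amplitudes with a Gaussian pixel likelihood and a Nelder-Mead optimiser; the real analysis uses Monte Carlo image templates and a more elaborate pixel likelihood, so this only illustrates the fitting step.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        pix = np.linspace(-1.0, 1.0, 40)                         # toy 1-D "camera" pixel positions

        def template(params, x):
            amp, x0, width = params
            return amp * np.exp(-0.5 * ((x - x0) / width) ** 2)  # predicted pixel amplitudes

        true = (50.0, 0.2, 0.15)
        data = template(true, pix) + rng.normal(0.0, 2.0, size=pix.size)   # noisy image
        sigma = 2.0                                              # assumed pixel noise

        def neg_log_likelihood(params):
            resid = data - template(params, pix)
            return 0.5 * np.sum((resid / sigma) ** 2)            # Gaussian pixel likelihood

        fit = minimize(neg_log_likelihood, x0=np.array([30.0, 0.0, 0.2]), method="Nelder-Mead")
        print("fitted (amplitude, centre, width):", np.round(fit.x, 3))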

  5. Likelihood analysis of cosmic shear on simulated and VIRMOS-DESCART data

    Science.gov (United States)

    Van Waerbeke, L.; Mellier, Y.; Pelló, R.; Pen, U.-L.; McCracken, H. J.; Jain, B.

    2002-10-01

    We present a maximum likelihood analysis of cosmological parameters from measurements of the aperture mass up to 35 arcmin using simulated and real cosmic shear data. A four-dimensional parameter space is explored which examines the mean density OmegaM, the mass power spectrum normalisation sigma8, the shape parameter Gamma and the redshift of the sources zs. Constraints on OmegaM and sigma8 (resp. Gamma and zs) are provided by marginalising over Gamma and zs (resp. OmegaM and sigma8). For a flat Lambda CDM cosmology, using a photometric redshift prior for the sources and Gamma in [0.1,0.4], we find sigma8 = (0.57+/-0.04) OmegaM^[(0.24-/+0.18) OmegaM - 0.49] at the 68% confidence level (the error budget includes statistical noise, full cosmic variance and residual systematics). The estimate of Gamma, marginalised over OmegaM in [0.1,0.4], sigma8 in [0.7,1.3] and zs constrained by photometric redshifts, gives Gamma = 0.25+/-0.13 at 68% confidence. Adopting h=0.7, a flat universe, Gamma = 0.2 and OmegaM = 0.3, we find sigma8 = 0.98+/-0.06. Combined with CMB measurements, our results suggest a non-zero cosmological constant and provide tight constraints on OmegaM and sigma8. Finally, we compare our results to the cluster abundance ones, and discuss the possible discrepancy with the latest determinations of the cluster method. In particular we point out the actual limitations of the mass power spectrum prediction in the non-linear regime, and the importance of improving this. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada (NRCC), the Institut des Sciences de l'Univers (INSU) of the Centre National de la Recherche Scientifique (CNRS) and the University of Hawaii (UH), and at the European Southern Observatory telescopes Very Large Telescope (VLT) and the New Technology Telescope (NTT).

  6. Recursive Pathways to Marginal Likelihood Estimation with Prior-Sensitivity Analysis

    CERN Document Server

    Cameron, Ewan

    2013-01-01

    We investigate the utility to contemporary Bayesian studies of recursive, Gauss-Seidel-type pathways to marginal likelihood estimation characterized by reverse logistic regression and the density of states. Through a pair of illustrative, numerical examples (including mixture modeling of the well-known 'galaxy dataset') we highlight both the remarkable diversity of bridging schemes amenable to recursive normalization and the notable efficiency of the resulting pseudo-mixture densities for gauging prior-sensitivity in the model selection context. Our key theoretical contributions show the connection between the nested sampling identity and the density of states. Further, we introduce a novel heuristic ('thermodynamic integration via importance sampling') for qualifying the role of the bridging sequence in marginal likelihood estimation. An efficient pseudo-mixture density scheme for harnessing the information content of otherwise discarded draws in ellipse-based nested sampling is also introduced.
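
    For orientation, the simplest relative of these estimators is plain importance sampling of the marginal likelihood, Z = ∫ L(θ)π(θ)dθ ≈ (1/N) Σ L(θ_i)π(θ_i)/q(θ_i) with θ_i drawn from a proposal (bridging) density q. The toy conjugate-Gaussian example below, where Z is known in closed form, is an illustrative assumption and not the galaxy-dataset mixture analysis of the paper.

        import numpy as np
        from scipy.stats import norm, multivariate_normal

        rng = np.random.default_rng(5)
        y = rng.normal(1.0, 1.0, size=20)            # toy data: y_i ~ N(theta, 1), prior theta ~ N(0, 2^2)

        def log_lik(theta):
            return np.sum(norm.logpdf(y[:, None], loc=theta, scale=1.0), axis=0)

        # Importance sampling with a Gaussian proposal q centred on the data mean.
        q_mean, q_sd = y.mean(), 0.5
        theta = rng.normal(q_mean, q_sd, size=50_000)
        logw = log_lik(theta) + norm.logpdf(theta, 0.0, 2.0) - norm.logpdf(theta, q_mean, q_sd)
        logZ = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()   # stabilised log-mean-exp

        # Exact evidence for this conjugate Gaussian model, for comparison.
        cov = np.eye(y.size) + 4.0 * np.ones((y.size, y.size))    # sigma^2 I + tau^2 11^T
        logZ_exact = multivariate_normal(mean=np.zeros(y.size), cov=cov).logpdf(y)
        print(f"IS estimate of log Z: {logZ:.4f}   exact: {logZ_exact:.4f}")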

  7. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    OpenAIRE

    Aslıhan Kıymalıoğlu

    2014-01-01

    Elaboration Likelihood Model (ELM), which supports the existence of two routes to persuasion: central and peripheral routes, has been one of the major models on persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered a contribution towards filling this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept...

  8. Monte carlo analysis of multicolour LED light engine

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen;

    2015-01-01

    A new Monte Carlo simulation as a tool for analysing colour feedback systems is presented here to analyse the colour uncertainties and achievable stability in a multicolour dynamic LED system. The Monte Carlo analysis presented here is based on an experimental investigation of a multicolour LED light engine designed for white tuneable studio lighting. The measured sensitivities to the various factors influencing the colour uncertainty for a similar system are incorporated. The method aims to provide uncertainties in the achievable chromaticity coordinates as output over the tuneable range, e.g. expressed in correlated colour temperature (CCT), chromaticity distance from the Planckian locus (Duv), and colour rendering indices (CRIs) for that dynamic system. Data for the uncertainty in chromaticity are analysed in the u', v' (Uniform Chromaticity Scale Diagram) for the light output by comparing the …

  9. Asymptotic analysis of spatial discretizations in implicit Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffery D [Los Alamos National Laboratory

    2009-01-01

    We perform an asymptotic analysis of spatial discretizations in Implicit Monte Carlo (IMC). We consider two asymptotic scalings: one that represents a time step that resolves the mean-free time, and one that corresponds to a fixed, optically large time step. We show that only the latter scaling results in a valid spatial discretization of the proper diffusion equation, and thus we conclude that IMC only yields accurate solutions when using optically large spatial cells if time steps are also optically large. We demonstrate the validity of our analysis with a set of numerical examples.

  10. Asymptotic analysis of spatial discretizations in implicit Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffery D [Los Alamos National Laboratory

    2008-01-01

    We perform an asymptotic analysis of spatial discretizations in Implicit Monte Carlo (IMC). We consider two asymptotic scalings: one that represents a time step that resolves the mean-free time, and one that corresponds to a fixed, optically large time step. We show that only the latter scaling results in a valid spatial discretization of the proper diffusion equation, and thus we conclude that IMC only yields accurate solutions when using optically large spatial cells if time steps are also optically large. We demonstrate the validity of our analysis with a set of numerical examples.

  11. The impact of Monte Carlo simulation. A scientometric analysis of scholarly literature

    International Nuclear Information System (INIS)

    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications. (author)

  12. Using Maximum Likelihood analysis in HBT interferometry: bin-free treatment of correlated errors

    International Nuclear Information System (INIS)

    We present a new procedure, based on the Maximum Likelihood Method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy ion collisions. This procedure offers two significant advantages: 1) it does not require sorting of the correlation data into arbitrary bins in the multidimensional momentum space and 2) it applies all available information on the experimental resolution error matrix separately to each correlated particle multiplet analyzed. These features permit extraction of maximum information from the data. The technique may be particularly important in ultra-relativistic heavy ion collisions, because in this energy domain large source radii and long source lifetimes are expected, and high-multiplicity HBT interferometry with a single collision event is a possibility. ((orig.))

  13. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    Directory of Open Access Journals (Sweden)

    Aslıhan Kıymalıoğlu

    2014-12-01

    Full Text Available Elaboration Likelihood Model (ELM), which supports the existence of two routes to persuasion: central and peripheral routes, has been one of the major models on persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered a contribution towards filling this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept most frequently used in the elaboration process was involvement, and that argument quality and endorser credibility were the factors most often employed in measuring their effect on the dependent variables. The review provides valuable insights as it presents a holistic view of the model and the variables used in the model.

  14. Iterative Monte Carlo analysis of spin-dependent parton distributions

    Science.gov (United States)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  15. Iterative Monte Carlo analysis of spin-dependent parton distributions

    CERN Document Server

    Sato, Nobuo; Kuhn, S E; Ethier, J J; Accardi, A

    2016-01-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at $x \\gtrsim 0.1$. The study also provides the first determination of the flavor-separated twist-3 PDFs and the $d_2$ moment of the nucleon within a global PDF analysis.

  16. COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.; Næss, S. K.; Seljebotn, D. S. [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo (Norway); Górski, K. M.; Huey, G.; Jewell, J. B.; Rocha, G.; Wehus, I. K., E-mail: eirik.gjerlow@astro.uio.no [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)

    2013-11-10

    We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region on cosmological parameters is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
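
    In equation form, the banded (first-order Markov) structure described above lets the joint distribution be written entirely in terms of uni- and bivariate marginals; schematically (notation simplified relative to the paper),

        p(x_1,\ldots,x_n) \;=\; p(x_1)\prod_{i=2}^{n} p(x_i \mid x_{i-1})
                          \;=\; \frac{\prod_{i=2}^{n} p(x_{i-1},x_i)}{\prod_{i=2}^{n-1} p(x_i)} .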

  17. Monte Carlo uncertainty analysis for an iron shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U.; Tsige-Tamirat, H. [Association Euratom-FZK Forschungszentrum Karlsruhe (Germany); Perel, R.L. [Hebrew Univ., Jerusalem (Israel); Wu, Y. [Institute of Plasma Physics, Heifi (China)

    1998-07-01

    This work is devoted to the computational uncertainty analysis of an iron benchmark experiment performed previously at the Technical University of Dresden (TUD). The analysis is based on the use of a novel Monte Carlo approach for calculating sensitivities of point detectors and focuses on the new ⁵⁶Fe evaluation of the European Fusion File EFF-3. The calculated uncertainties of the neutron leakage fluxes are shown to be significantly smaller than with previous data. Above 5 MeV the calculated uncertainties are larger than the experimental ones. As the measured neutron leakage fluxes are underestimated by about 10-20% in that energy range, it is concluded that the ⁵⁶Fe cross-section data have to be further improved. (authors)

  18. Extended Maximum Likelihood Halo-independent Analysis of Dark Matter Direct Detection Data

    CERN Document Server

    Gelmini, Graciela B; Gondolo, Paolo; Huh, Ji-Haeng

    2015-01-01

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio $f_n/f_p=-0.7$, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with $f_n/f_p=-0.8$.

  19. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions of interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome.
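
    The workhorse behind such MLE reconstructions is the ML-EM update for Poisson emission data; the minimal dense-matrix sketch below replaces the clinical system (transition) matrix, the robust stopping rule and the Gaussian post-filter of the paper with toy illustrative choices.

        import numpy as np

        rng = np.random.default_rng(6)
        n_pix, n_bins = 32, 64
        A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))            # toy system (transition) matrix
        A /= A.sum(axis=0, keepdims=True)
        lam_true = rng.uniform(1.0, 10.0, size=n_pix)              # "true" activity image
        y = rng.poisson(A @ lam_true)                              # measured counts

        lam = np.ones(n_pix)                                       # uniform initial image
        sens = A.sum(axis=0)                                       # sensitivity image, sum_i a_ij
        for it in range(50):                                       # fixed iteration count as a crude stopping rule
            expected = A @ lam
            lam *= (A.T @ (y / np.maximum(expected, 1e-12))) / sens   # ML-EM multiplicative update

        rel_err = np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true)
        print("relative error after 50 iterations:", round(float(rel_err), 3))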

  20. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    Energy Technology Data Exchange (ETDEWEB)

    Gelmini, Graciela B.; Georgescu, Andreea [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA, 90095 (United States); Gondolo, Paolo [Department of Physics and Astronomy, University of Utah,115 South 1400 East #201, Salt Lake City, UT, 84112 (United States); Huh, Ji-Haeng [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA, 90095 (United States)

    2015-11-24

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = −0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = −0.8.

  1. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    International Nuclear Information System (INIS)

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio fn/fp=−0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with fn/fp=−0.8

  2. Review of neutron noise analysis theory by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Some debates on the theory of neutron noise analysis for reactor kinetic parameter measurement took place before 1970, but no report firmly settling these debates has been found, and the question was raised again when neutron noise experiments for the TRIGA and HANARO reactors in Korea were performed. In order to clarify this question, the neutron noise experiment is simulated by the Monte Carlo method. This simulation confirms that the widely used equation is approximately valid and that the confusion was caused by the explanation of the derivation of the equation. The Rossi-α technique is one of the representative noise analysis methods for reactor kinetic parameter measurement, but different opinions were raised about the chain-reaction-related term in the equation. The equation originally derived at the Los Alamos National Laboratory (LANL) has been widely accepted. However, the other opinions were supported by strict mathematics and experiments as well, and the reason for the discrepancy has not been clarified. Since it is a problem of basic concept arising before the effects of neutron energy or geometry are included, a Monte Carlo simulation of the simplest reactor model can clarify it. For this purpose, the experiment measuring the neutron noise is simulated, and the result is that the original equation is approximately valid. However, it is judged that the explanation of the equation by the authors who derived it for the first time is not quite correct, whereas Orndoff, who made the first experiment with the Rossi-α technique, explained it rather correctly.
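
    For reference, the quantity fitted in a Rossi-α measurement is the conditional count-rate distribution following a detected neutron; in standard point-kinetics notation (the textbook form, not necessarily the exact expression debated in the paper),

        p(\tau)\,d\tau \;=\; C\,d\tau \;+\; A\,e^{-\alpha\tau}\,d\tau ,
        \qquad
        \alpha \;=\; \frac{\beta_{\mathrm{eff}} - \rho}{\Lambda},

    where C is the uncorrelated (accidental) count rate, the exponential term carries the chain-correlated counts, and Λ is the neutron generation time.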

  3. Monte Carlo analysis of Musashi TRIGA mark II reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Tetsuo [Atomic Energy Research Laboratory, Musashi Institute of Technology, Kawasaki, Kanagawa (Japan)

    1999-08-01

    The analysis of the TRIGA-II core at the Musashi Institute of Technology Research Reactor (Musashi reactor, 100 kW) was performed by the three-dimensional continuous-energy Monte Carlo code (MCNP4A). Effective multiplication factors (k_eff) for the several fuel-loading patterns including the initial core criticality experiment, the fuel element and control rod reactivity worth as well as the neutron flux measurements were used in the validation process of the physical model and neutron cross section data from the ENDF/B-V evaluation. The calculated k_eff overestimated the experimental data by about 1.0% Δk/k for both the initial core and the several fuel-loading arrangements. The calculated reactivity worths of control rod and fuel element agree well with the measured ones within the uncertainties. The calculated neutron flux distributions were consistent with the experimental ones, which were measured by activation methods at the sample irradiation tubes. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate to simulate the Musashi TRIGA-II reactor core. (author)

  4. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result compared with conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
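
    The mechanism of the variance reduction is easy to demonstrate outside the eigenvalue context; the sketch below compares conventional and stratified sampling of a source coordinate for a toy tally. It only illustrates stratification of a source distribution and is not the fission-source treatment of the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        def tally(x):
            return np.exp(-3.0 * x)          # toy response of a "detector" to source position x in [0, 1]

        def conventional(n):
            return tally(rng.uniform(0.0, 1.0, n)).mean()

        def stratified(n, n_strata=50):
            m = n // n_strata
            edges = np.linspace(0.0, 1.0, n_strata + 1)
            # Draw the same number of source particles from every stratum of the source distribution.
            x = np.concatenate([rng.uniform(a, b, m) for a, b in zip(edges[:-1], edges[1:])])
            return tally(x).mean()

        reps = 500
        conv = [conventional(1000) for _ in range(reps)]
        strat = [stratified(1000) for _ in range(reps)]
        print("std of conventional estimate:", round(float(np.std(conv)), 5))
        print("std of stratified estimate:  ", round(float(np.std(strat)), 5))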

  5. Functional magnetic resonance imaging during emotion recognition in social anxiety disorder: an activation likelihood meta-analysis

    Directory of Open Access Journals (Sweden)

    Coenraad J Hattingh

    2013-01-01

    Full Text Available Background: Social anxiety disorder (SAD) is characterised by abnormal fear and anxiety in social situations. Functional magnetic resonance imaging (fMRI) is a brain imaging technique that can be used to illustrate neural activation to emotionally salient stimuli. However, no attempt has yet been made to statistically collate fMRI studies of brain activation, using the activation likelihood-estimate technique, in response to emotion recognition tasks in individuals with social anxiety disorder. Methods: A systematic search of fMRI studies of neural responses to socially emotive cues in SAD and GSP was undertaken. Activation likelihood-estimate (ALE) meta-analysis, a voxel based meta-analytic technique, was used to estimate the most significant activations during emotional recognition. Results: 7 studies were eligible for inclusion in the meta-analysis, constituting a total of 91 subjects with SAD or GSP, and 93 healthy controls. The most significant areas of activation during emotional recognition versus neutral stimuli in individuals with social anxiety disorder compared to controls were: bilateral amygdala, left medial temporal lobe encompassing the entorhinal cortex, left medial aspect of the inferior temporal lobe encompassing perirhinal cortex and parahippocampus, right anterior cingulate, right globus pallidus, and distal tip of right postcentral gyrus. Conclusion: The results are consistent with neuroanatomic models of the role of the amygdala in fear conditioning, and the importance of the limbic circuitry in mediating anxiety symptoms.

  6. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-10-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
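
    Schematically, the model-averaged prediction and the MLBMA weights take the standard form below (generic BMA notation with the usual KIC-based approximation of MLBMA; these are textbook expressions, not equations quoted from the abstract):

        p(\Delta \mid D) \;=\; \sum_{k} p(\Delta \mid M_k, D)\, p(M_k \mid D),
        \qquad
        p(M_k \mid D) \;\approx\;
        \frac{\exp\!\big(-\tfrac{1}{2}\,\mathrm{KIC}_k\big)\, p(M_k)}
             {\sum_{l} \exp\!\big(-\tfrac{1}{2}\,\mathrm{KIC}_l\big)\, p(M_l)},

    where Δ is the predicted quantity, D the data, and KIC_k the Kashyap information criterion of model M_k evaluated at its maximum-likelihood parameter estimates.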

  7. Neutronic analysis of the PULSTAR reactor using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Neutronic analysis of the PULSTAR nuclear reactor was performed in support of its utilization and power upgrade from 1-MWth to 2-MWth. The PULSTAR is an open pool research reactor that is currently fueled with UO2 enriched to 4% in U-235. Detailed models were constructed of its core using the MCNP6 Monte Carlo code and its standard nuclear data libraries. The models covered all eight variations of the core starting with the first critical core in 1972 to the current core that was configured in 2011. Three dimensional heterogeneous models were constructed that faithfully reflected the geometry of the core and its surroundings using the original as-built engineering drawings. The Monte Carlo simulations benefited extensively from measurements that were performed upon the loading of each core and its subsequent operation. This includes power distribution and peaking measurements, depletion measurements (reflecting a core's excess reactivity), and measurements of reactivity feedback coefficients. Furthermore, to support the PULSTAR's fuel needs, the simulations explored the utilization of locally existing inventory of fresh UO2 fuel that is enriched to 6% in U-235. The analysis shows reasonable agreement between the results of the MCNP6 simulations and the available measured data. In general, most discrepancies between simulations and measurements may be attributed to the limited knowledge of the exact conditions of the historical measurements and the procedures used to analyze the measured data. Nonetheless, the results indicate the ability of the constructed models to support safety analysis and licensing action in relation to the on-going upgrades of the PULSTAR reactor. (author)

  8. Simulations with the Hybrid Monte Carlo algorithm: implementation and data analysis

    CERN Document Server

    Schaefer, Stefan

    2011-01-01

    This tutorial gives a practical introduction to the Hybrid Monte Carlo algorithm and the analysis of Monte Carlo data. The method is exemplified for the φ⁴ theory, for which all steps from the derivation of the relevant formulae to the actual implementation in a computer program are discussed in detail. It concludes with the analysis of Monte Carlo data, in particular their auto-correlations.
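
    A compact sketch of one Hybrid (Hamiltonian) Monte Carlo step of the kind the tutorial walks through, applied to a single-site quartic potential rather than the full lattice φ⁴ theory; the couplings, step size and trajectory length are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(8)

        def action(phi, kappa=1.0, lam=0.1):
            return kappa * phi**2 + lam * phi**4             # toy single-site "phi^4" action S(phi)

        def grad_action(phi, kappa=1.0, lam=0.1):
            return 2.0 * kappa * phi + 4.0 * lam * phi**3

        def hmc_step(phi, eps=0.2, n_leap=10):
            p = rng.normal()                                 # refresh conjugate momentum
            h_old = 0.5 * p**2 + action(phi)
            phi_new, p_new = phi, p
            p_new -= 0.5 * eps * grad_action(phi_new)        # leapfrog integration of Hamilton's equations
            for _ in range(n_leap - 1):
                phi_new += eps * p_new
                p_new -= eps * grad_action(phi_new)
            phi_new += eps * p_new
            p_new -= 0.5 * eps * grad_action(phi_new)
            h_new = 0.5 * p_new**2 + action(phi_new)
            if np.log(rng.uniform()) < h_old - h_new:        # Metropolis accept/reject on Delta H
                return phi_new, True
            return phi, False

        phi, chain, acc = 0.0, [], 0
        for _ in range(20000):
            phi, accepted = hmc_step(phi)
            chain.append(phi)
            acc += accepted
        print("acceptance rate:", acc / len(chain))
        print("<phi^2> =", round(float(np.mean(np.square(chain[2000:]))), 4))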

  9. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
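
    A minimal uniform-time-step MLMC sketch for a geometric Brownian motion, estimating E[X_T] with Euler-Maruyama and the standard telescoping sum over levels; the number of levels and the per-level sample sizes are fixed by hand here rather than chosen adaptively as in the paper.

        import numpy as np

        rng = np.random.default_rng(9)
        x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0        # GBM: dX = mu X dt + sigma X dW

        def euler_pair(n_samples, n_steps):
            """Coupled fine/coarse Euler-Maruyama paths sharing the same Brownian increments."""
            dt_f = T / n_steps
            dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, n_steps))
            xf = np.full(n_samples, x0)
            xc = np.full(n_samples, x0)
            for k in range(n_steps):
                xf += mu * xf * dt_f + sigma * xf * dW[:, k]
            if n_steps > 1:                           # coarse level uses pairs of fine increments
                dWc = dW[:, 0::2] + dW[:, 1::2]
                dt_c = 2.0 * dt_f
                for k in range(n_steps // 2):
                    xc += mu * xc * dt_c + sigma * xc * dWc[:, k]
                return xf - xc                        # level correction Y_l = P_l - P_{l-1}
            return xf                                 # level 0: plain coarse estimate

        # (level, time steps, samples): telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
        levels = [(0, 1, 200_000), (1, 2, 50_000), (2, 4, 12_500), (3, 8, 3_000)]
        estimate = sum(euler_pair(n, steps).mean() for _, steps, n in levels)
        print("MLMC estimate of E[X_T]:", round(float(estimate), 5),
              "  exact:", round(x0 * np.exp(mu * T), 5))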

  10. Likelihood analysis of the chalcone synthase genes suggests the role of positive selection in morning glories (Ipomoea).

    Science.gov (United States)

    Yang, Ji; Gu, Hongya; Yang, Ziheng

    2004-01-01

    Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (ω = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of them detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication. PMID:14743314
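
    The model-comparison step referred to above is an ordinary likelihood ratio test between nested codon models; a schematic example follows, in which the log-likelihood values and the number of extra parameters are placeholders rather than values from the CHS analysis.

        from scipy.stats import chi2

        # Placeholder log-likelihoods for a nested pair of codon models (e.g., a null model
        # without positive selection versus an alternative allowing omega > 1).
        lnL_null, lnL_alt, extra_params = -4321.7, -4315.2, 2

        lrt = 2.0 * (lnL_alt - lnL_null)          # 2 * Delta(lnL)
        p_value = chi2.sf(lrt, df=extra_params)   # compared to a chi-square with df = extra parameters
        print(f"LRT = {lrt:.2f}, p = {p_value:.4f}")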

  11. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background: Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results: The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
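
    In symbols, the composite-likelihood device described in this record replaces the full likelihood by a product of per-region marginal likelihoods, which is then treated as if it were the true likelihood:

        L_C(\theta \mid y) \;=\; \prod_{r=1}^{R} L(\theta \mid y_r),
        \qquad
        \hat{\theta}_{CL} \;=\; \arg\max_{\theta}\, \sum_{r=1}^{R} \log L(\theta \mid y_r),

    where y_r denotes the data from the r-th genomic region.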

  12. Reactor physics analysis method based on Monte Carlo homogenization

    International Nuclear Information System (INIS)

    Background: Many new concepts of nuclear energy systems with complicated geometric structures and diverse energy spectra have been put forward to meet the future demand of the nuclear energy market. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the ability to process generic geometries; the other is the multi-spectrum applicability of the multi-group cross section libraries. The Monte Carlo (MC) method is well suited to arbitrary geometries and spectra, but faces the problems of long computation time and slow convergence. Purpose: This work aims to find a novel scheme that takes advantage of both the deterministic core analysis method and the MC method. Methods: A new two-step core analysis scheme is proposed to combine the geometry modeling capability and continuous energy cross section libraries of the MC method with the higher computational efficiency of the deterministic method. First, the MC simulations are performed at the assembly level, and the assembly homogenized multi-group cross sections are tallied at the same time. Then, the core diffusion calculations can be done with these multi-group cross sections. Results: The new scheme can achieve high efficiency while maintaining acceptable precision. Conclusion: The new scheme can be used as an effective tool for the design and analysis of innovative nuclear energy systems, which has been verified by numerical tests. (authors)

  13. Variations on KamLAND: likelihood analysis and frequentist confidence regions

    OpenAIRE

    Schwetz, Thomas

    2003-01-01

    In this letter, the robustness of the first results from the KamLAND reactor neutrino experiment with respect to variations in the statistical analysis is considered. It is shown that an event-by-event likelihood analysis provides a more powerful tool to extract information from the currently available data sample than a least-squares method based on energy-binned data. Furthermore, a frequentist analysis of KamLAND data is performed. Confidence regions with correct coverage in the plane ...

  14. On Monte Carlo Simulation and Analysis of Electricity Markets

    International Nuclear Information System (INIS)

    This dissertation is about how Monte Carlo simulation can be used to analyse electricity markets. There is a wide range of applications for simulation; for example, players in the electricity market can use simulation to decide whether or not an investment can be expected to be profitable, and authorities can by means of simulation find out which consequences a certain market design can be expected to have on electricity prices, environmental impact, etc. In the first part of the dissertation, the focus is on which electricity market models are suitable for Monte Carlo simulation. The starting point is a definition of an ideal electricity market. Such an electricity market is partly practical from a mathematical point of view (it is simple to formulate and does not require too complex calculations) and partly it is a representation of the best possible resource utilisation. The definition of the ideal electricity market is followed by an analysis of how reality differs from the ideal model, what consequences the differences have for the rules of the electricity market and the strategies of the players, as well as how non-ideal properties can be included in a mathematical model. Particularly, questions about environmental impact, forecast uncertainty and grid costs are studied. The second part of the dissertation treats the Monte Carlo technique itself. To reduce the number of samples necessary to obtain accurate results, variance reduction techniques can be used. Here, six different variance reduction techniques are studied and possible applications are pointed out. The conclusions of these studies are turned into a method for efficient simulation of basic electricity markets. The method is applied to some test systems and the results show that the chosen variance reduction techniques can produce equal or better results using 99% fewer samples compared to when the same system is simulated without any variance reduction technique. More complex electricity market models

  15. Stimulus Complexity and Categorical Effects in Human Auditory Cortex: An Activation Likelihood Estimation Meta-Analysis

    OpenAIRE

    Samson, Fabienne; Zeffiro, Thomas A.; Toussaint, Alain; Belin, Pascal

    2011-01-01

    Investigations of the functional organization of human auditory cortex typically examine responses to different sound categories. An alternative approach is to characterize sounds with respect to their amount of variation in the time and frequency domains (i.e., spectral and temporal complexity). Although the vast majority of published studies examine contrasts between discrete sound categories, an alternative complexity-based taxonomy can be evaluated through meta-analysis. In a quantitative...

  16. Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.

    Science.gov (United States)

    Kurkela, Kyle A; Dennis, Nancy A

    2016-01-29

    Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe. PMID:26683385

  17. Status of vectorized Monte Carlo for particle transport analysis

    International Nuclear Information System (INIS)

    The conventional particle transport Monte Carlo algorithm is ill-suited for modern vector supercomputers because the random nature of the particle transport process in the history-based algorithm inhibits construction of vectors. An alternative, event-based algorithm is suitable for vectorization and has been used recently to achieve impressive gains in performance on vector supercomputers. This review describes the event-based algorithm and several variations of it. Implementations of this algorithm for applications in particle transport are described, and their relative merits are discussed. The implementation of Monte Carlo methods on multiple vector parallel processors is considered, as is the potential of massively parallel processors for Monte Carlo particle transport simulations.
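
    The contrast between the history-based and event-based organisations can be sketched with a toy one-dimensional slab problem (absorption versus leakage only); the physics and constants below are illustrative assumptions, not a model of any code discussed in the review. The point is purely structural: the event-based form processes all live particles of a batch through the same event type at once, which is what exposes vector operations.

    ```python
    # Toy illustration of history-based vs. event-based organisation of a particle
    # transport Monte Carlo loop. The 1-D slab physics is a hypothetical minimum.
    import numpy as np

    rng = np.random.default_rng(0)
    SIGMA_T, ABSORB_PROB, SLAB = 1.0, 0.3, 5.0    # assumed total cross section, absorption prob., slab width

    def history_based(n):
        """One particle at a time: hard to vectorise because each history branches differently."""
        leaked = 0
        for _ in range(n):
            x = 0.0
            while True:
                x += rng.exponential(1.0 / SIGMA_T)
                if x > SLAB:
                    leaked += 1
                    break
                if rng.random() < ABSORB_PROB:
                    break
        return leaked / n

    def event_based(n):
        """All live particles advance through the same event type together as arrays."""
        x = np.zeros(n)
        alive = np.ones(n, dtype=bool)
        leaked = 0
        while alive.any():
            idx = np.flatnonzero(alive)
            x[idx] += rng.exponential(1.0 / SIGMA_T, idx.size)     # free-flight event for the whole batch
            escaped = idx[x[idx] > SLAB]
            leaked += escaped.size
            alive[escaped] = False
            survivors = idx[x[idx] <= SLAB]
            absorbed = survivors[rng.random(survivors.size) < ABSORB_PROB]   # collision event for the batch
            alive[absorbed] = False
        return leaked / n

    print(history_based(20_000), event_based(20_000))   # both estimate the leakage probability
    ```

    On a vector machine (or with NumPy, as here) the event-based loop replaces per-history branching with whole-array operations.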

  18. Simulations of Baryon Acoustic Oscillations III: Likelihood analysis of the matter power spectrum

    CERN Document Server

    Takahashi, Ryuichi; Takada, Masahiro; Matsubara, Takahiko; Sugiyama, Naoshi; Kayo, Issha; Nishimichi, Takahiro; Saito, Shun; Taruya, Atsushi

    2009-01-01

    We study the sample variance of the matter power spectrum for the standard Lambda Cold Dark Matter universe. We use a total of 5000 cosmological N-body simulations to study in detail the distribution of the best-fit cosmological parameters and the baryon acoustic peak positions. The obtained distribution is compared with the results from the Fisher matrix analysis with and without including non-Gaussian errors. For the Fisher matrix analysis, we compute the derivatives of the matter power spectrum with respect to cosmological parameters directly from full nonlinear simulations. We show that the non-Gaussian errors increase the unmarginalized errors by up to a factor of 5 for k_{max}=0.4h/Mpc if there is only one free parameter, provided the other parameters are well determined by external information. On the other hand, for multi-parameter fitting, the impact of the non-Gaussian errors is significantly mitigated due to severe parameter degeneracies in the power spectrum. The distribution of the acoustic...

  19. Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis

    Directory of Open Access Journals (Sweden)

    Hyung Jin Shim

    2015-01-01

    Full Text Available The α-k iteration method, which searches for the fundamental-mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for the Monte Carlo (MC) alpha-static calculations of supercritical systems. However, the α-k iteration method for deep subcritical system analysis suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to an abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method named the α iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculation stability is achieved by controlling the number of time source neutrons which are generated in proportion to α divided by neutron speed in MC neutron transport simulations. The effectiveness of the α iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparisons with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of the thorium-loaded accelerator-driven system.
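
    The deterministic backbone of the proposed scheme, the power method applied to an eigenvalue equation, can be sketched as follows. The 2×2 matrix stands in for the α-mode operator and the matrix-vector product stands in for a Monte Carlo transport sweep; both are placeholder assumptions, not the two-group benchmark of the paper.

    ```python
    # Minimal deterministic sketch of the power iteration underlying the alpha-iteration
    # idea: repeatedly apply the operator of the eigenvalue equation and renormalise until
    # the dominant eigenpair converges. The matrix below is an arbitrary placeholder.
    import numpy as np

    def power_iteration(M, tol=1e-10, max_iter=10_000):
        x = np.ones(M.shape[0])
        eig = 0.0
        for _ in range(max_iter):
            y = M @ x                       # "transport sweep" stand-in
            new_eig = np.linalg.norm(y) / np.linalg.norm(x)
            x = y / np.linalg.norm(y)       # renormalise the source between iterations
            if abs(new_eig - eig) < tol:
                break
            eig = new_eig
        return eig, x

    M = np.array([[1.2, 0.4],
                  [0.3, 0.9]])
    eig, vec = power_iteration(M)
    print("dominant eigenvalue:", eig, "reference:", max(abs(np.linalg.eigvals(M))))
    ```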

  20. Monte Carlo simulation for moment-independent sensitivity analysis

    International Nuclear Information System (INIS)

    The moment-independent sensitivity analysis (SA) is one of the most popular SA techniques. It aims at measuring the contribution of input variable(s) to the probability density function (PDF) of the model output. However, compared with the variance-based one, robust and efficient methods are less available for computing the moment-independent SA indices (also called delta indices). In this paper, Monte Carlo simulation (MCS) methods for moment-independent SA are investigated. A double-loop MCS method, which has the advantages of high accuracy and easy programming, is firstly developed. Then, to reduce the computational cost, a single-loop MCS method is proposed. The latter method has several advantages. First, only one set of samples is needed for computing all the indices, thus it can overcome the problem of the “curse of dimensionality”. Second, it is suitable for problems with dependent inputs. Third, it is purely based on model output evaluation and density estimation, thus it can be used for models with high-order (>2) interactions. Finally, several numerical examples are introduced to demonstrate the advantages of the proposed methods.
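
    A rough double-loop sketch of the moment-independent (delta) index in the sense described above is given below; the toy model, the input distributions and the kernel density estimates are illustrative assumptions, not the paper's estimators or examples.

    ```python
    # Rough double-loop Monte Carlo sketch of a moment-independent (delta) sensitivity
    # index: half the expected area between the conditional and unconditional output PDFs.
    # The model Y = X1 + 2*X2 and all sample sizes are illustrative choices.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    model = lambda x1, x2: x1 + 2.0 * x2          # toy model with independent standard-normal inputs

    # Unconditional output sample and density estimate f_Y
    y_all = model(rng.standard_normal(20_000), rng.standard_normal(20_000))
    f_y = gaussian_kde(y_all)
    grid = np.linspace(y_all.min() - 1.0, y_all.max() + 1.0, 400)
    dy = grid[1] - grid[0]

    def delta_x1(n_outer=50, n_inner=2_000):
        """Double-loop estimate of the delta index of X1 for the toy model."""
        shifts = []
        for x1 in rng.standard_normal(n_outer):                  # outer loop: fix X1
            y_cond = model(x1, rng.standard_normal(n_inner))     # inner loop: conditional sample
            f_cond = gaussian_kde(y_cond)
            # area between conditional and unconditional output densities
            shifts.append(np.sum(np.abs(f_y(grid) - f_cond(grid))) * dy)
        return 0.5 * float(np.mean(shifts))

    print("estimated delta index for X1:", delta_x1())           # a value between 0 and 1
    ```

    The single-loop method of the paper avoids re-sampling in the inner loop; this sketch keeps the double loop because it is the easiest form to read.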

  1. Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y

    2015-06-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments-prediction error-is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667

  2. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    Science.gov (United States)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to check its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperations, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the phase of research and development, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time-consuming and cost-intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum

  3. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for VHTR Analysis

    International Nuclear Information System (INIS)

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  4. Monte Carlo analysis of radiative transport in oceanographic lidar measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale

    2001-07-01

    The analysis of oceanographic lidar systems measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules with dimensions small with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver records a contribution from photons coming back after an interaction within the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is

  5. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...
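
    For reference, the standard Gaussian result that this kind of analytic marginalization rests on can be stated as follows (generic notation, not necessarily the parameterization used in the paper): split the parameters into those of interest, θ, and nuisance parameters, ψ, with curvature (Fisher) matrix blocks F_θθ, F_θψ, F_ψψ. Marginalizing a Gaussian likelihood over ψ with a flat prior leaves a Gaussian in θ whose curvature is the Schur complement of the nuisance block:

    ```latex
    % Marginal curvature of the parameters of interest after integrating out the
    % Gaussian nuisance parameters with a flat prior (generic notation).
    \begin{equation}
      F^{\mathrm{marg}}_{\theta\theta} \;=\; F_{\theta\theta}
        - F_{\theta\psi}\, F_{\psi\psi}^{-1}\, F_{\psi\theta} .
    \end{equation}
    ```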

  6. A Markov chain Monte Carlo analysis of the CMSSM

    International Nuclear Information System (INIS)

    We perform a comprehensive exploration of the Constrained MSSM parameter space employing a Markov Chain Monte Carlo technique and a Bayesian analysis. We compute superpartner masses and other collider observables, as well as a cold dark matter abundance, and compare them with experimental data. We include uncertainties arising from theoretical approximations as well as from residual experimental errors of relevant Standard Model parameters. We delineate probability distributions of the CMSSM parameters, the collider and cosmological observables as well as a dark matter direct detection cross section. The 68% probability intervals are given for the CMSSM parameters m1/2 and m0, the gluino, squark and chargino masses, BR(Bs→μ+μ-), the SUSY contribution to (g-2)μ and the spin-independent cross section σpSI for direct WIMP detection. We highlight a complementarity between LHC and WIMP dark matter searches in exploring the CMSSM parameter space. We further expose a number of correlations among the observables, in particular between BR(Bs→μ+μ-) and BR(B̄→Xsγ) or σpSI. Once SUSY is discovered, this and other correlations may prove helpful in distinguishing the CMSSM from other supersymmetric models. We investigate the robustness of our results in terms of the assumed ranges of CMSSM parameters and the effect of the (g-2)μ anomaly which shows some tension with the other observables. We find that the results for m0, and the observables which strongly depend on it, are sensitive to our assumptions, while our conclusions for the other variables are robust.

  7. A Multivariate Time Series Method for Monte Carlo Reactor Analysis

    International Nuclear Information System (INIS)

    A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for three-dimensional modeling of the initial core of a pressurized water reactor.

  8. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    The Monte Carlo method for neutron transport calculations suffers, in part, from the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian Roulette are incorporated. Equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for calculation of each history. 1 figure, 1 table

  9. Further experience in Bayesian analysis using Monte Carlo Integration

    OpenAIRE

    Dijk, Herman; Kloek, Teun

    1980-01-01

    An earlier paper [Kloek and Van Dijk (1978)] is extended in three ways. First, Monte Carlo integration is performed in a nine-dimensional parameter space of Klein's model I [Klein (1950)]. Second, Monte Carlo is used as a tool for the elicitation of a uniform prior on a finite region by making use of several types of prior information. Third, special attention is given to procedures for the construction of importance functions which make use of nonlinear optimization methods.
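
    The core numerical idea, Monte Carlo integration of posterior moments with an importance function, can be sketched as follows. The one-dimensional posterior kernel and the Student-t importance function are toy assumptions of this sketch, not Klein's model I or the importance functions constructed in the paper.

    ```python
    # Minimal sketch of Monte Carlo integration with an importance function for Bayesian
    # posterior moments. The "posterior kernel" below is a toy choice of ours.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def posterior_kernel(theta):
        """Unnormalised posterior: flat prior times a toy non-Gaussian likelihood."""
        return np.exp(-0.5 * (theta - 1.0) ** 2) * (1.0 + 0.3 * np.sin(3.0 * theta)) ** 2

    # Importance function: a Student-t roughly centred on the posterior mode
    # (fat tails guard against an importance function that is too narrow).
    importance = stats.t(df=5, loc=1.0, scale=1.5)

    theta = importance.rvs(size=50_000, random_state=rng)
    w = posterior_kernel(theta) / importance.pdf(theta)       # importance weights

    post_mean = np.sum(w * theta) / np.sum(w)                 # self-normalised (ratio) estimator
    post_var = np.sum(w * (theta - post_mean) ** 2) / np.sum(w)
    ess = np.sum(w) ** 2 / np.sum(w ** 2)                     # effective sample size diagnostic
    print(f"posterior mean ~ {post_mean:.3f}, sd ~ {np.sqrt(post_var):.3f}, ESS ~ {ess:.0f}")
    ```

    The effective sample size printed at the end is the usual diagnostic of how well the importance function matches the posterior; a poor match is exactly what the paper's nonlinear-optimization construction of importance functions tries to avoid.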

  10. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  11. Is there a critical lesion site for unilateral spatial neglect? A meta-analysis using activation likelihood estimation.

    Directory of Open Access Journals (Sweden)

    Pascal Molenberghs

    2012-04-01

    Full Text Available The critical lesion site responsible for the syndrome of unilateral spatial neglect has been debated for more than a decade. Here we performed an activation likelihood estimation (ALE) analysis to provide for the first time an objective quantitative index of the consistency of lesion sites across anatomical group studies of spatial neglect. The analysis revealed several distinct regions in which damage has consistently been associated with spatial neglect symptoms. Lesioned clusters were located in several cortical and subcortical regions of the right hemisphere, including the middle and superior temporal gyrus, inferior parietal lobule, intraparietal sulcus, precuneus, middle occipital gyrus, caudate nucleus and posterior insula, as well as in the white matter pathway corresponding to the posterior part of the superior longitudinal fasciculus. Further analyses suggested that separate lesion sites are associated with impairments in different behavioural tests, such as line bisection and target cancellation. Similarly, specific subcomponents of the heterogeneous neglect syndrome, such as extinction and allocentric and personal neglect, are associated with distinct lesion sites. Future progress in delineating the neuropathological correlates of spatial neglect will depend upon the development of more refined measures of perceptual and cognitive functions than those currently available in the clinical setting.

  12. MONTE CARLO SIMULATION APPLIED TO ECONOMIC AND FINANCIAL ANALYSIS OF AN AGRIBUSINESS PROJECT

    OpenAIRE

    Danilo Simões; Lucas Raul Scherrer

    2014-01-01

    In practice, all management decisions involving an organization, regardless of size, have uncertainties which lead to different levels of risk. Monte Carlo simulation allows risk analysis by designing probabilistic models. Starting from a deterministic model of economic viability indicators, commonly used for investment project decisions, a probabilistic model was developed with Monte Carlo method simulations in order to carry out an economic and financial analysis of an agroindustrial ...
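
    The kind of probabilistic extension described in the abstract can be sketched as a Monte Carlo net-present-value (NPV) analysis; the distributions, cash-flow structure and figures below are invented for illustration and are not taken from the paper.

    ```python
    # Sketch of a probabilistic (Monte Carlo) NPV analysis: a deterministic viability
    # indicator is re-computed with uncertain inputs drawn from assumed distributions.
    import numpy as np

    rng = np.random.default_rng(3)
    n, years, rate, investment = 100_000, 10, 0.10, 500_000.0   # assumed project figures

    # Uncertain inputs (assumed distributions)
    price = rng.triangular(1.8, 2.2, 2.8, size=n)       # output price per unit
    volume = rng.normal(120_000, 15_000, size=n)        # annual sales volume
    unit_cost = rng.normal(1.1, 0.12, size=n)           # variable cost per unit

    annual_cash_flow = (price - unit_cost) * volume
    discount = np.sum(1.0 / (1.0 + rate) ** np.arange(1, years + 1))
    npv = annual_cash_flow * discount - investment

    print(f"mean NPV: {npv.mean():,.0f}")
    print(f"P(NPV < 0): {np.mean(npv < 0):.1%}")
    print(f"5th / 95th percentiles: {np.percentile(npv, 5):,.0f} / {np.percentile(npv, 95):,.0f}")
    ```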

  13. Analytical band Monte Carlo analysis of electron transport in silicene

    Science.gov (United States)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) with linear energy band dispersion has been developed to study the electron transport in suspended silicene and silicene on aluminium oxide (Al2O3) substrate. We have calibrated our model against the full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we discover that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1 and thereafter it is less sensitive to the changes of charge impurity in the substrate and surface optical phonon. We also found that further reduction of mobility to ∼100 cm2 V‑1 s‑1 as experimentally demonstrated by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of Fermi velocity due to interaction with Al2O3 substrate.

  14. Monte-Carlo application for nondestructive nuclear waste analysis

    International Nuclear Information System (INIS)

    The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Juelich develops, in the framework of cooperations, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the phase of research and development, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter in order to obtain the response of detection systems.

  15. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...

  16. A maximum likelihood QTL analysis reveals common genome regions controlling resistance to Salmonella colonization and carrier-state

    Directory of Open Access Journals (Sweden)

    Thanh-Son Tran

    2012-05-01

    Full Text Available Background: The serovars Enteritidis and Typhimurium of the Gram-negative bacterium Salmonella enterica are significant causes of human food poisoning. Fowl carrying these bacteria often show no clinical disease, with detection only established post-mortem. Increased resistance to the carrier state in commercial poultry could be a way to improve food safety by reducing the spread of these bacteria in poultry flocks. Previous studies identified QTLs for both resistance to carrier state and resistance to Salmonella colonization in the same White Leghorn inbred lines. Until now, none of the QTLs identified was common to the two types of resistance. All these analyses were performed using the F2 inbred or backcross option of the QTLExpress software based on linear regression. In the present study, QTL analysis was achieved using Maximum Likelihood with QTLMap software, in order to test the effect of the QTL analysis method on QTL detection. We analyzed the same phenotypic and genotypic data as those used in previous studies, which were collected on 378 animals genotyped with 480 genome-wide SNP markers. To enrich these data, we added eleven SNP markers located within QTLs controlling resistance to colonization and we looked for potential candidate genes co-localizing with QTLs. Results: In our case the QTL analysis method had an important impact on QTL detection. We were able to identify new genomic regions controlling resistance to carrier-state, in particular by testing the existence of two segregating QTLs. But some of the previously identified QTLs were not confirmed. Interestingly, two QTLs were detected on chromosomes 2 and 3, close to the locations of the major QTLs controlling resistance to colonization and to candidate genes involved in the immune response identified in other, independent studies. Conclusions: Due to the lack of stability of the QTLs detected, we suggest that interesting regions for further studies are those that were

  17. Development of an analysis software for comparison between proton treatment planning system and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Currently, many proton therapy facilities are used for radiotherapy for treating cancer. The main advantage of proton therapy is the absence of exit dose, which offers a highly conformal dose to the treatment target as well as better normal organ sparing. Most treatment planning systems (TPS) in proton therapy calculate dose distributions using a pencil beam algorithm (PBA). PBA is suitable for clinical proton therapy because of its fast computation time. However, PBA shows accuracy limitations, mainly because of the one-dimensional density scaling of proton pencil beams in water. Recently, we developed Monte Carlo simulation tools for the design of the proton therapy facility at the National Cancer Center (NCC) using the GEANT4 toolkit (version GEANT4.9.2p02). Monte Carlo simulation is expected to reproduce precisely the influences of complex geometry and material varieties, which are difficult to introduce into the PBA. The data format of the Monte Carlo simulation results differs from DICOM-RT; consequently, we need analysis software for comparing TPS and Monte Carlo simulation results. The main objective of this research is to develop an analysis toolkit for verifying the precision and accuracy of the proton treatment planning system and to analyze the dose calculation algorithm of proton therapy using Monte Carlo simulation. In this work, we developed analysis software for GEANT4-based medical applications. This toolkit is capable of evaluating the accuracy of the dose calculated by the TPS against Monte Carlo simulation.

  18. Development of an analysis software for comparison between proton treatment planning system and Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dae Hyun; Suh, Tae Suk [Dept. of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Park, Sey Joon; Yoo, Seung Hoon; Lee, Se Byeong [Proton Therapy Center, National Cancer Center, Goyang (Korea, Republic of); Shin, Jung Wook [Dept. of Radiation Oncology, University of California, SanFrancisco (United States)

    2011-11-15

    Currently, many proton therapy facilities are used for radiotherapy for treating cancer. The main advantage of proton therapy is the absence of exit dose, which offers a highly conformal dose to the treatment target as well as better normal organ sparing. Most treatment planning systems (TPS) in proton therapy calculate dose distributions using a pencil beam algorithm (PBA). PBA is suitable for clinical proton therapy because of its fast computation time. However, PBA shows accuracy limitations, mainly because of the one-dimensional density scaling of proton pencil beams in water. Recently, we developed Monte Carlo simulation tools for the design of the proton therapy facility at the National Cancer Center (NCC) using the GEANT4 toolkit (version GEANT4.9.2p02). Monte Carlo simulation is expected to reproduce precisely the influences of complex geometry and material varieties, which are difficult to introduce into the PBA. The data format of the Monte Carlo simulation results differs from DICOM-RT; consequently, we need analysis software for comparing TPS and Monte Carlo simulation results. The main objective of this research is to develop an analysis toolkit for verifying the precision and accuracy of the proton treatment planning system and to analyze the dose calculation algorithm of proton therapy using Monte Carlo simulation. In this work, we developed analysis software for GEANT4-based medical applications. This toolkit is capable of evaluating the accuracy of the dose calculated by the TPS against Monte Carlo simulation.

  19. The present status of shielding analysis with nuclear data for the continuous energy Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    The following three problems are analyzed with the continuous energy Monte Carlo code MCNP using JENDL-3.2, JENDL-3.3 and ENDF/B-VI: 1. Shielding analysis of the Winfrith ASPIS iron deep penetration experiment. 2. Shielding analysis of the TN-12A spent fuel transport cask experiment. 3. Shielding analysis of a modular shielding house keeping spent fuel transportable casks. (author)

  20. Finite-Time Analysis of Stratified Sampling for Monte Carlo

    OpenAIRE

    Carpentier, Alexandra; Munos, Rémi

    2011-01-01

    We consider the problem of stratified sampling for Monte-Carlo integration. We model this problem in a multi-armed bandit setting, where the arms represent the strata, and the goal is to estimate a weighted average of the mean values of the arms. We propose a strategy that samples the arms according to an upper bound on their standard deviations and compare its estimation quality to an ideal allocation that would know the standard deviations of the strata. We provide...
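
    A plain two-stage version of allocation proportional to per-stratum standard deviations (here estimated from a pilot sample rather than from the upper bounds analysed in the paper) looks as follows; the integrand, stratification and budgets are illustrative assumptions.

    ```python
    # Sketch of stratified Monte Carlo with sample allocation proportional to the
    # (pilot-estimated) per-stratum standard deviations.
    import numpy as np

    rng = np.random.default_rng(5)
    f = lambda x: np.exp(-3 * x) * np.sin(8 * x)           # toy integrand on [0, 1]
    edges = np.linspace(0.0, 1.0, 11)                      # 10 equal-width strata
    widths = np.diff(edges)

    # Stage 1: small pilot sample per stratum to estimate the per-stratum std deviations
    pilot = [f(rng.uniform(a, b, 50)) for a, b in zip(edges[:-1], edges[1:])]
    sigma = np.array([p.std(ddof=1) for p in pilot])

    # Stage 2: allocate the main budget proportionally to width * sigma
    budget = 10_000
    alloc = np.maximum(2, np.round(budget * widths * sigma / np.sum(widths * sigma)).astype(int))

    estimate = 0.0
    for a, b, w, m in zip(edges[:-1], edges[1:], widths, alloc):
        estimate += w * f(rng.uniform(a, b, m)).mean()     # stratum width times stratum mean

    print("stratified estimate of the integral:", estimate)
    ```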

  1. A Maximum Likelihood Approach to Correlational Outlier Identification.

    Science.gov (United States)

    Bacon, Donald R.

    1995-01-01

    A maximum likelihood approach to correlational outlier identification is introduced and compared to the Mahalanobis D squared and Comrey D statistics through Monte Carlo simulation. Identification performance depends on the nature of correlational outliers and the measure used, but the maximum likelihood approach is the most robust performance…

  2. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    Science.gov (United States)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  3. Shape analysis of blocking dips: Monte Carlo vs. analytical results

    International Nuclear Information System (INIS)

    Angular blocking dips around the axis in an Al single crystal of α-particles of about 2 MeV produced at a depth of 0.2 μm are calculated for several values of the mean transverse displacement v⊥τ of the decaying nucleus within the range 0 ≤ v⊥τ ≤ 260 pm. Calculations have been made both by an extensive multistring Monte Carlo simulation and by a continuum model with diffusion. As far as the Monte Carlo method is concerned, the influence of the (small) solid angle of particle emission and of the 'single interaction' approximation has been investigated. The analytical calculations performed on the basis of a Molière (thermally averaged) multistring potential show, for large v⊥τ, a clear dependence of the blocking dips on the recoil direction and a sharp peak at very small angles. The shapes of the dips obtained by the two methods are in overall good agreement, while a very satisfactory comparison has been found for the dip widths and the relative parameters used in many lifetime measurements. (author)

  4. Optimization of scintillation-detector timing systems using Monte Carlo analysis

    International Nuclear Information System (INIS)

    Monte Carlo analysis is used to model statistical noise associated with scintillation-detector photoelectron emissions and photomultiplier tube operation. Additionally, the impulse response of a photomultiplier tube, front-end amplifier, and constant-fraction discriminator (CFD) is modeled so the effects of front-end bandwidth and constant-fraction delay and fraction can be evaluated for timing-system optimizations. Such timing-system analysis is useful for detectors having low photo-electron-emission rates, including Bismuth Germanate (BGO) scintillation detectors used in Positron Emission Tomography (PET) systems. Monte Carlo timing resolution for a BGO / photomultiplier scintillation detector, CFD timing system is presented as a function of constant-fraction delay for 511-keV coincident gamma rays in the presence of Compton scatter. Monte Carlo results are in good agreement with measured results when a tri-exponential BGO scintillation model is used. Monte Carlo simulation is extended to include CFD energy-discrimination performance. Monte Carlo energy-discrimination performance is experimentally verified along with timing performance (Monte Carlo timing resolution of 3.22 ns FWHM versus measured resolution of 3.30 ns FWHM) for a front-end rise time of 10 ns (10--90%), CFD delay of 8 ns, and CFD fraction of 20%
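
    The ingredients described above can be sketched as follows: photoelectron emission times drawn from a scintillation decay model, a summed single-photoelectron pulse, and a constant-fraction discriminator (CFD) zero crossing. A single 300 ns decay constant and a Gaussian pulse shape are simplifying assumptions of this sketch; the paper itself uses a tri-exponential BGO model and the measured front-end response.

    ```python
    # Rough sketch of a scintillation-detector CFD timing Monte Carlo: emission times from
    # an exponential decay, summed Gaussian single-photoelectron pulses, and the zero
    # crossing of a delayed-minus-attenuated (bipolar) CFD signal.
    import numpy as np

    rng = np.random.default_rng(11)

    def cfd_time(n_pe=100, tau_decay=300.0, pulse_sigma=5.0, delay=8.0, fraction=0.2, dt=0.2):
        """Return the CFD zero-crossing time (ns) of one simulated scintillation pulse."""
        t_emit = rng.exponential(tau_decay, n_pe)        # photoelectron emission times
        t = np.arange(0.0, 1000.0, dt)
        # summed single-photoelectron pulses (Gaussian stand-in for the front-end response)
        pulse = np.exp(-0.5 * ((t[None, :] - t_emit[:, None]) / pulse_sigma) ** 2).sum(axis=0)
        shift = int(round(delay / dt))
        delayed = np.concatenate([np.zeros(shift), pulse[:-shift]])
        cfd = delayed - fraction * pulse                 # bipolar constant-fraction signal
        dip = np.flatnonzero(cfd < -0.02 * pulse.max())  # leading-edge region before the crossing
        if dip.size == 0:
            return np.nan
        tail = cfd[dip[0]:]
        cross = np.flatnonzero((tail[:-1] < 0.0) & (tail[1:] >= 0.0))
        return t[dip[0] + cross[0] + 1] if cross.size else np.nan

    times = np.array([cfd_time() for _ in range(200)])
    print(f"timing spread for one detector (FWHM): {2.355 * np.nanstd(times):.1f} ns")
    ```

    Scanning the `delay` and `fraction` arguments of this toy function mimics the kind of parameter optimisation described in the abstract, although the realistic tri-exponential scintillation and measured impulse response would change the numbers.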

  5. Monte-Carlo based uncertainty analysis: Sampling efficiency and sampling convergence

    International Nuclear Information System (INIS)

    Monte Carlo analysis has become nearly ubiquitous since its introduction, now over 65 years ago. It is an important tool in many assessments of the reliability and robustness of systems, structures or solutions. As the deterministic core simulation can be lengthy, the computational costs of Monte Carlo can be a limiting factor. To reduce that computational expense as much as possible, sampling efficiency and convergence for Monte Carlo are investigated in this paper. The first section shows that non-collapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, highly enhance the sampling efficiency, and render a desired level of accuracy of the outcomes attainable with far fewer runs. In the second section it is demonstrated that standard sampling statistics are inapplicable for Latin hypercube strategies. A sample-splitting approach is put forward, which in combination with a replicated Latin hypercube sampling allows assessing the accuracy of Monte Carlo outcomes. The assessment in turn permits halting the Monte Carlo simulation when the desired levels of accuracy are reached. Both measures form fairly noncomplex upgrades of the current state of the art in Monte-Carlo based uncertainty analysis but provide substantial further progress with respect to its applicability.
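
    A uniform Latin hypercube design and the replicated-LHS accuracy assessment mentioned above can be sketched as follows; the test function, design size and number of replicates are illustrative assumptions.

    ```python
    # Sketch of uniform Latin hypercube sampling plus replicated-LHS accuracy assessment:
    # several independent Latin hypercubes give independent estimates whose spread
    # indicates the attained accuracy.
    import numpy as np

    rng = np.random.default_rng(9)

    def latin_hypercube(n, dim):
        """One uniform Latin hypercube design on [0, 1)^dim (non-collapsing in each axis)."""
        return (np.argsort(rng.random((n, dim)), axis=0) + rng.random((n, dim))) / n

    def model(x):
        return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2   # toy response

    replicates = [model(latin_hypercube(200, 2)).mean() for _ in range(10)]
    print("estimate of the mean response:", np.mean(replicates))
    print("accuracy indication (std over replicated LHS designs):", np.std(replicates, ddof=1))
    ```

    The spread across the independent replicates plays the role of the standard sampling statistics that, as the paper notes, cannot be applied directly to a single Latin hypercube.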

  6. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at lo...... algorithms can immediately be applied also to other subspace problems such as frequency estimation of sinusoids in noise or direction of arrival estimation with uniform linear arrays....

  7. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    Science.gov (United States)

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  8. Determining the Number of Principal Components to Retain via Parallel Analysis: Alternatives to Monte Carlo Analyses.

    Science.gov (United States)

    Lautenschlager, Gary J.

    The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest. However, researchers and practitioners desiring to use this criterion have been hampered by the required Monte Carlo analyses needed to develop the criteria. Two recent…

  9. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    Science.gov (United States)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
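
    The following is a deliberately simplified, hypothetical illustration of the null-space Monte Carlo idea itself and not the pyNSMC API: random parameter perturbations are projected onto the (approximate) null space of the model Jacobian, so that each realization still honours the calibration data to first order. The Jacobian, truncation level and dimensions are invented for the example.

    ```python
    # Hypothetical, heavily simplified illustration of the null-space Monte Carlo idea
    # (NOT the pyNSMC API). Realizations of the parameters are built by adding only the
    # null-space component of a random perturbation to the calibrated parameter vector.
    import numpy as np

    rng = np.random.default_rng(13)
    n_obs, n_par, n_real = 20, 60, 100

    J = rng.standard_normal((n_obs, n_par))          # stand-in sensitivity (Jacobian) matrix
    p_cal = rng.standard_normal(n_par)               # "calibrated" parameter vector (stand-in)

    # SVD of the Jacobian; right singular vectors beyond the retained solution space
    # span the (approximate) null space that the calibration data cannot constrain.
    U, s, Vt = np.linalg.svd(J, full_matrices=True)
    n_keep = int(np.sum(s > 1e-6 * s[0]))            # truncation level (assumption)
    V_null = Vt[n_keep:].T                           # shape (n_par, n_par - n_keep)

    realisations = np.empty((n_real, n_par))
    for i in range(n_real):
        dp = rng.standard_normal(n_par)
        realisations[i] = p_cal + V_null @ (V_null.T @ dp)   # keep only the null-space part

    # First-order check: simulated observation changes should be ~zero for every realization
    print("max |J @ (p_i - p_cal)|:", np.abs(J @ (realisations - p_cal).T).max())
    ```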

  10. Simplifying Likelihood Ratios

    OpenAIRE

    McGee, Steven

    2002-01-01

    Likelihood ratios are one of the best measures of diagnostic accuracy, although they are seldom used, because interpreting them requires a calculator to convert back and forth between “probability” and “odds” of disease. This article describes a simpler method of interpreting likelihood ratios, one that avoids calculators, nomograms, and conversions to “odds” of disease. Several examples illustrate how the clinician can use this method to refine diagnostic decisions at the bedside.
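
    For reference, the conventional odds-based calculation that the article's shortcut replaces looks like this; the pre-test probability and likelihood ratio are invented example numbers.

    ```python
    # Conventional likelihood-ratio calculation via odds (the arithmetic the article's
    # simpler bedside method is designed to avoid). Example numbers are invented.
    def post_test_probability(pretest_prob, likelihood_ratio):
        pretest_odds = pretest_prob / (1.0 - pretest_prob)    # probability -> odds
        posttest_odds = pretest_odds * likelihood_ratio       # Bayes' theorem in odds form
        return posttest_odds / (1.0 + posttest_odds)          # odds -> probability

    # e.g. 25% pre-test probability and a positive test with LR+ = 4.0
    print(f"post-test probability: {post_test_probability(0.25, 4.0):.0%}")   # about 57%
    ```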

  11. General purpose dynamic Monte Carlo with continuous energy for transient analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)

    2012-07-01

    For safety assessments transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the fact that this kind of analysis is very important, the state of the art still uses rather crude methods, like diffusion theory and point-kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4. Also, the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to do an exact transient calculation in arbitrary geometry. (authors)

  12. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  13. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    CERN Document Server

    Chiappo, A; Conrad, J; Strigari, L E; Anderson, B; Sanchez-Conde, M A

    2016-01-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter annihilation signals. The J-factors of dark matter-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kine...
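
    For reference, the J-factor referred to above is conventionally defined as the integral of the squared dark matter density over the line of sight and over the solid angle subtended by the target (generic notation):

    ```latex
    % Standard definition of the J-factor: squared dark-matter density \rho integrated
    % over the line of sight s and the solid angle \Delta\Omega of the target.
    \begin{equation}
      J \;=\; \int_{\Delta\Omega}\!\mathrm{d}\Omega \int_{\mathrm{l.o.s.}}\!\mathrm{d}s\;
              \rho^{2}\!\left(r(s,\Omega)\right)
    \end{equation}
    ```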

  14. Construction of the quantitative analysis environment using Monte Carlo simulation

    International Nuclear Information System (INIS)

    A thoracic phantom image of the axial section was acquired to construct the source and density maps for Monte Carlo (MC) simulation. The phantom was the Heart/Liver Type HL (Kyoto Kagaku Co., Ltd.), and the single photon emission CT (SPECT)/CT machine was a Symbia T6 (Siemens) with the LMEGP (low-medium energy general purpose) collimator. Maps were constructed from CT images with in-house software written in Visual Studio C# (Microsoft). The SIMIND code (simulation of imaging nuclear detectors) was used for MC simulation, the Prominence processor (Nihon Medi-Physics) for filter processing and image reconstruction, and a DELL Precision T7400 for all image processing. For the actual experiment, the phantom was given 15 MBq of 99mTc in its myocardial portion, assuming a 2% uptake of a 740 MBq dose, and the SPECT image was acquired and reconstructed with a Butterworth filter and filtered back-projection. CT images were similarly obtained in 0.3 mm thick slices, filed in the digital imaging and communication in medicine (DICOM) format, and then processed for application to SIMIND for mapping the source and density. Physical and measurement factors such as attenuation, scattering, spatial resolution deterioration and statistical fluctuation were examined in ideal images by sequential exclusion and simulation of those factors. The gamma energy spectrum, SPECT projections and reconstructed images given by the simulation were found to agree well with the actual data, and the precision of the MC simulation was confirmed. Physical and measurement factors were found to be evaluable individually, suggesting the usefulness of the simulation for assessing the precision of their correction. (T.T.)

  15. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    Science.gov (United States)

    Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, and also of derivative quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
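
    A generic random-walk Metropolis sketch of the MCMC machinery described above is given below, applied to a toy two-parameter Gaussian likelihood rather than the actual joint Chandra/OVRO cluster model; step sizes, starting point and chain length are arbitrary choices.

    ```python
    # Generic random-walk Metropolis sampler applied to a toy 2-parameter Gaussian
    # log-likelihood standing in for the joint X-ray + Sunyaev-Zel'dovich model.
    import numpy as np

    rng = np.random.default_rng(17)
    COV_INV = np.linalg.inv(np.array([[0.04, 0.015], [0.015, 0.01]]))   # assumed toy covariance

    def log_like(theta):
        d = theta - np.array([2.0, 0.5])      # assumed "true" parameter values
        return -0.5 * d @ COV_INV @ d

    def metropolis(n_steps=20_000, step=np.array([0.05, 0.03])):
        chain = np.empty((n_steps, 2))
        theta = np.array([1.0, 1.0])
        ll = log_like(theta)
        for i in range(n_steps):
            prop = theta + step * rng.standard_normal(2)       # symmetric proposal
            ll_prop = log_like(prop)
            if np.log(rng.random()) < ll_prop - ll:            # Metropolis acceptance rule
                theta, ll = prop, ll_prop
            chain[i] = theta
        return chain[n_steps // 4:]                            # discard burn-in

    chain = metropolis()
    print("posterior means:", chain.mean(axis=0))
    print("posterior std devs:", chain.std(axis=0))
    ```

    Derivative quantities (such as a distance) can be computed for every chain sample, which is how the joint approach delivers their full probability distributions at no extra cost.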

  16. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    SUN ShuGuang; ZHANG XinSheng

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for the parameters of a discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under certain conditions. When the intensity parameter can be exactly recovered, we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.

  17. The seasonal KPSS test when neglecting seasonal dummies: a Monte Carlo analysis

    OpenAIRE

    El Montasser, Ghassen; Boufateh, Talel; Issaoui, Fakhri

    2013-01-01

    This paper shows, through a Monte Carlo analysis, the effect of neglecting seasonal deterministics on the seasonal KPSS test. We found that, in this case, the test is most of the time heavily oversized and not convergent. In addition, a Bartlett-type non-parametric correction of the error variances did not noticeably change the test's rejection frequencies.

  18. Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis

    Science.gov (United States)

    Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.

    2010-01-01

    The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…

  19. Generalization of Markov Monte Carlo reliability analysis to include non-Markovian maintenance strategies

    International Nuclear Information System (INIS)

    The Lagrangian approach to Markov Monte Carlo methods for systems reliability analysis is generalized to include non-Markovian phenomena in which system components are replaced. The method is then employed to analyze the unreliability and unavailability of a number of redundant systems in which maintenance is carried out by batch or time replacement of aging components. (orig.)

  20. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    Institute of Scientific and Technical Information of China (English)

    XI Jia-mi; YANG Geng-she

    2008-01-01

    The advantages of an improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability are discussed. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and therefore accounts for the relationships among parameters. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
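
    A compact illustration of the kind of reliability computation described above: correlated random parameters are sampled (here through a Cholesky factor of an assumed correlation matrix), a limit-state function g is evaluated, and the reliability is estimated as P(g > 0). The limit-state function, distributions and statistics are invented placeholders, not the paper's tunnel model.

    ```python
    # Monte Carlo reliability sketch with correlated inputs: rock "resistance" R and load
    # effect S are correlated lognormal variables, and failure is defined as g = R - S <= 0.
    import numpy as np

    rng = np.random.default_rng(21)
    n = 200_000

    mean = np.array([np.log(12.0), np.log(8.0)])    # log-scale medians (assumed)
    sd = np.array([0.15, 0.20])                     # log-scale standard deviations (assumed)
    corr = np.array([[1.0, -0.3],
                     [-0.3, 1.0]])                  # assumed correlation between R and S
    L = np.linalg.cholesky(corr)

    z = rng.standard_normal((n, 2)) @ L.T           # correlated standard normals
    R, S = np.exp(mean + sd * z).T                  # transform to correlated lognormals

    g = R - S                                       # limit-state function: failure when g <= 0
    p_f = np.mean(g <= 0)
    print(f"failure probability: {p_f:.2e}, reliability: {1 - p_f:.4f}")
    ```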

  1. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives. PMID:25487423

  2. Scientific opinion on a quantitative pathway analysis of the likelihood of Tilletia indica M. introduction into the EU with importation of US wheat

    DEFF Research Database (Denmark)

    Baker, R.; Candresse, T.; Dormannsné Simon, E.; Gilioli, G.; Grégoire, J.-C.; Jeger, M. J.; Karadjova, O. E.; Lövei, G.; Makowski, D.; Manceau, C.; Navajas, M.; Porta Puglia, A.; Rafoss, T.; Rossi, V.; Schans, J.; Schrader, G.; Urek, G.; van Lenteren, J. C.; Vloutoglou, I; Winter, S.; Zlotina, M.

    2010-01-01

    The European Commission requested EFSA to provide a scientific opinion on the USDA APHIS quantitative pathway analysis on likelihood of Karnal bunt introduction with importation of US wheat for grain into EU and desert durum wheat for grain into Italy. EFSA was also requested to indicate whether ...... Panel concluded that the US bunted kernel standard does not provide a level of protection equivalent to EU requirements and that such level of protection could only be warranted by measures which include testing at harvest and before shipment to detect T. indica teliospores....

  3. The Radial Extent and Warp of the Ionized Galactic Disk. II. A Likelihood Analysis of Radio-Wave Scattering Toward the Anticenter

    OpenAIRE

    Lazio, T. Joseph W.; Cordes, James M.

    1997-01-01

    We use radio-wave scattering data to constrain the distribution of ionized gas in the outer Galaxy. Like previous models, our model for the H II disk includes parameters for the radial scale length and scale height of the H II, but we allow the H II disk to warp and flare. Our model also includes the Perseus arm. We use a likelihood analysis on 11 extragalactic sources and 7 pulsars. Scattering in the Perseus arm is no more than 60% of the level contributed by spiral arms in the inner Galaxy,...

  4. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    International Nuclear Information System (INIS)

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained the expression of the mean square displacement and effective exponent as functions of time and percolation probability by statistical analysis and made a comparison with simulations. We got a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices

  5. The energy analysis for the monte carlo simulations of a diffusive shock

    OpenAIRE

    Wang, Xin; Yan, Yihua

    2011-01-01

    According to the shock jump conditions, the total fluid's mass, momentum, and energy should be conserved in the entire simulation box. We perform the dynamical Monte Carlo simulations with the multiple scattering law for energy analysis. The various energy functions of time are obtained by monitoring the total particles' mass, momentum, and energy in the simulation box. In conclusion, the energy analysis indicates that the smaller energy losses in the prescribed scattering law are, the harder...

  6. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    Science.gov (United States)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence the investment capacity of the power grid, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing, and the GDP of the secondary industry as the model variables. After carrying out a Kolmogorov-Smirnov test, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for the grid investment capacity are obtained by Monte Carlo simulation.

  7. Risk Analysis of Tilapia Recirculating Aquaculture Systems: A Monte Carlo Simulation Approach

    OpenAIRE

    Kodra, Bledar

    2007-01-01

    The purpose of this study is to modify an existing static analytical model developed for re-circulating aquaculture systems by incorporating risk considerations to evaluate the economic viability of the system. In addition, the objective of this analysis is to provide a well documented, risk-based analytical system so that individuals (investors/lenders) c...

  8. Perturbation analysis for Monte Carlo continuous cross section models

    International Nuclear Information System (INIS)

    Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool to complete Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic, as opposed to stochastic, models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in the model's responses, e.g. detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out PA by reducing problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results which propagate to the subsequent UQ and DA analyses. The introduced errors are problem specific and are therefore expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method to complete PA employing a continuous cross section stochastic model in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing for wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)

  9. Photopeak shape function: Formulation based on stochastic event analysis and parameter estimation by the maximum-likelihood estimation method

    International Nuclear Information System (INIS)

    A theoretical model to describe the photopeak shape function has been developed by introducing an instrument function, which is a convolution of the statistical fluctuation of the charge carriers and the stochastic process of escape of the charge-carrier collection by capture at trapping centers. The photopeak shape function is a convolution of the instrument function and a Poisson probability-density functional representation of a reduced random summing event. The functions have been tested by using three coaxial, high-purity Ge detectors of a conventional type. The parameters were estimated by the maximum-likelihood estimation method. The position indicating the incident photon energy appeared at the centroid of the intrinsic normal distribution. The most probable peak-height position is no more than a "conventional" one, though it is commonly used in spectroscopy. The theory predicts the photopeak shape of many photons by folding an input function of the subject. The theory provides standards for the detector and the detection system. (orig.)

  10. A Nuclear Ribosomal DNA Phylogeny of Acer Inferred with Maximum Likelihood, Splits Graphs, and Motif Analysis of 606 Sequences

    Directory of Open Access Journals (Sweden)

    Guido W. Grimm

    2006-01-01

    The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong’s (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly.

  11. Analysis of communication costs for domain decomposed Monte Carlo methods in nuclear reactor analysis

    International Nuclear Information System (INIS)

    A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted in the context of simplified performance models which elucidate key scaling regimes of the parallel algorithm.

  12. Performance analysis for neutronics benchmark experiments with partial adjoint contribution estimated by forward Monte Carlo calculation

    International Nuclear Information System (INIS)

    Highlights: • Performance estimation of nuclear-data benchmarks was investigated. • The point detector contribution played a benchmark role not only for the neutron producing the detector contribution but equally for all the upstream transport neutrons. • New functions were defined to quantify how well the contribution could be interpreted for benchmarking. • Benchmark performance could be evaluated by a forward Monte Carlo calculation alone. -- Abstract: The author's group has been investigating how the performance estimation of nuclear-data benchmarks, using experiments and their analysis by a Monte Carlo code, should be carried out, especially at 14 MeV. We have recently found that a detector contribution played a benchmark role not only for the neutron producing the detector contribution but equally for all the upstream neutrons during the neutron history. This result suggests that the benchmark performance can be evaluated by a forward Monte Carlo calculation alone. In this study, we thus defined new functions that quantify how well the contribution can be utilized for benchmarking with the point detector, and showed that they are closely related to the newly introduced “partial adjoint contribution”. By preparing these functions before benchmark experiments, one can know beforehand how well, and for which nuclear data, the experimental results can serve as benchmarks in forward Monte Carlo calculations

  13. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    Science.gov (United States)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
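
    One of the questions listed above, how many runs are needed to verify a requirement, is often answered with a simple binomial zero-failure argument; the sketch below illustrates that generic reasoning and is not the derivation given in the TP's appendices.

```python
from math import ceil, log

def runs_for_success_run(reliability: float, confidence: float) -> int:
    """Smallest n such that n consecutive successful Monte Carlo runs
    demonstrate `reliability` with `confidence` (binomial zero-failure test)."""
    return ceil(log(1.0 - confidence) / log(reliability))

# e.g. demonstrating 99% reliability with 90% confidence
print(runs_for_success_run(0.99, 0.90))   # -> 230 runs with no failures
```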

  14. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    Science.gov (United States)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R²), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The shape ...
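
    A minimal GLUE-style sketch of the procedure described above follows, assuming a toy model in place of the coupled CMF-PMF simulation, uniform parameter sampling, NSE as the informal likelihood measure and an arbitrary behavioral threshold of 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(theta, t):
    # stand-in for the coupled CMF-PMF simulation
    a, b = theta
    return a * np.exp(-b * t)

t = np.linspace(0.0, 10.0, 50)
obs = toy_model((2.0, 0.3), t) + rng.normal(0.0, 0.05, t.size)

def nse(sim, obs):
    # Nash-Sutcliffe efficiency, used here as the informal likelihood measure
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

n_runs = 20_000
thetas = np.column_stack([rng.uniform(0.5, 4.0, n_runs),    # parameter a
                          rng.uniform(0.05, 1.0, n_runs)])  # parameter b
scores = np.array([nse(toy_model(th, t), obs) for th in thetas])

behavioral = scores > 0.5                  # assumed behavioral threshold
sims = np.array([toy_model(th, t) for th in thetas[behavioral]])
# 5-95% prediction bounds (GLUE usually weights these by the likelihood measure)
lower, upper = np.percentile(sims, [5, 95], axis=0)
print("behavioral sets:", int(behavioral.sum()),
      "best NSE:", round(float(scores.max()), 3))
```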

  15. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    Energy Technology Data Exchange (ETDEWEB)

    Fensin, Michael L [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Sandoval, Nathan P [Los Alamos National Laboratory

    2009-01-01

    ... assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time-dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield adequate knowledge of spent fuel analysis strategies to help the down-select process for other reactor types.

  16. Model Fit after Pairwise Maximum Likelihood.

    Science.gov (United States)

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  17. Modeling Elicitation effects in contingent valuation studies: a Monte Carlo Analysis of the bivariate approach

    OpenAIRE

    Genius, Margarita; Strazzera, Elisabetta

    2005-01-01

    A Monte Carlo analysis is conducted to assess the validity of the bivariate modeling approach for detection and correction of different forms of elicitation effects in Double Bound Contingent Valuation data. Alternative univariate and bivariate models are applied to several simulated data sets, each one characterized by a specific elicitation effect, and their performance is assessed using standard selection criteria. The bivariate models include the standard Bivariate Probit model, and an al...

  18. Risk analysis and Monte Carlo simulation applied to the generation of drilling AFE estimates

    International Nuclear Information System (INIS)

    This paper presents a method for developing an authorization-for-expenditure (AFE)-generating model and illustrates the technique with a specific offshore field development case study. The model combines Monte Carlo simulation and statistical analysis of historical drilling data to generate more accurate, risked, AFE estimates. In addition to the general method, two examples of making AFE time estimates for North Sea wells with the presented techniques are given
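
    The following sketch illustrates the general idea of a risked AFE time estimate, assuming hypothetical drilling phases with lognormal durations whose parameters would, in the method described above, come from statistical analysis of historical drilling data.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000  # Monte Carlo trials

# Hypothetical drilling phases with lognormal durations (days); the
# (mu, sigma) pairs are placeholders for offset-well statistics.
phases = {"mobilise": (1.6, 0.25), "drill_surface": (2.0, 0.30),
          "drill_intermediate": (2.5, 0.35), "drill_reservoir": (2.2, 0.40),
          "complete": (1.9, 0.30)}

total = np.zeros(N)
for mu, sigma in phases.values():
    total += rng.lognormal(mu, sigma, N)   # sample each phase independently

p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"risked AFE time: P10={p10:.1f} d, P50={p50:.1f} d, P90={p90:.1f} d")
```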

  19. Timing resolution of scintillation-detector systems: a Monte Carlo analysis

    OpenAIRE

    Choong, Woon-Seng

    2009-01-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use a Monte Carlo analysis to model the physi...

  20. A Monte Carlo computer program for analysis of backscattering and sputtering in practical vacuum systems

    International Nuclear Information System (INIS)

    A Monte Carlo computer program originally developed for analysis of molecular gas flow in axi-symmetric vacuum systems has been extended to include modelling of high energy backscattering and sputtering processes. This report describes the input data required by the computer program together with the results produced. A general description is given of the program operation and the backscattering and sputtering modelling used. An example calculation is included to illustrate practical application of the program. (author)

  1. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  2. Comparing statistical data to Monte Carlo simulation - parameter fitting and unfolding

    International Nuclear Information System (INIS)

    The author presents an introduction to the statistical analysis of experimental data by means of Monte Carlo simulations. After a description of the χ² test of a hypothesis, least-squares and maximum-likelihood fits to Monte Carlo distributions are described. Then unfolding is discussed. Finally, confidence intervals are studied, and the computation of upper and lower limits is discussed from a Bayesian point of view. (HSI)
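
    As a minimal illustration of the first step described above, the sketch below computes a Pearson χ² between a synthetic data histogram and a Monte Carlo expectation; the bin contents are invented for the example.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

# "Data" histogram and a Monte Carlo prediction scaled to the data exposure
data_counts = rng.poisson(lam=[20, 35, 50, 42, 28, 15])
mc_expected = np.array([22.0, 33.0, 48.0, 44.0, 26.0, 14.0])

# Pearson chi-square with Poisson errors taken from the expectation
chi2_val = np.sum((data_counts - mc_expected) ** 2 / mc_expected)
ndf = len(data_counts)          # no fitted parameters in this simple case
p_value = chi2.sf(chi2_val, ndf)
print(f"chi2/ndf = {chi2_val:.2f}/{ndf}, p = {p_value:.3f}")
```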

  3. A vectorized Monte Carlo method with pseudo-scattering for neutron transport analysis

    International Nuclear Information System (INIS)

    A vectorized Monte Carlo method has been developed for the neutron transport analysis on the vector supercomputer HITAC S810. In this method, a multi-particle tracking algorithm is adopted and fundamental processing such as pseudo-random number generation is modified to use the vector processor effectively. The flight analysis of this method is characterized by the new algorithm with pseudo-scattering. This algorithm was verified by comparing its results with those of the conventional one. The method realized a speed-up of factor 10; about 7 times by vectorization and 1.5 times by the new algorithm for flight analysis

  4. Number of iterations needed in Monte Carlo Simulation using reliability analysis for tunnel supports

    Directory of Open Access Journals (Sweden)

    E. Bukaçi

    2016-06-01

    There are many methods in geotechnical engineering which could take advantage of Monte Carlo simulation to establish the probability of failure, since closed form solutions are almost impossible to use in most cases. The problem that arises with using Monte Carlo simulation is the number of iterations needed for a particular simulation. This article will show why it is important to calculate the number of iterations needed for Monte Carlo simulation used in reliability analysis of tunnel supports with the convergence-confinement method. The number of iterations needed will be calculated with two methods. In the first method, the analyst has to accept a distribution function for the performance function. The other method suggested by this article is to calculate the number of iterations based on the convergence of the quantity the analyst is interested in. Reliability analysis will be performed for the diversion tunnel in Rrëshen, Albania, using both methods, and the results will be compared
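
    The second method mentioned above, stopping when the quantity of interest has converged, can be sketched as follows; the performance function, sampling distribution and 1% tolerance are placeholders rather than the tunnel-support model used in the article.

```python
import numpy as np

rng = np.random.default_rng(4)

def limit_state(x):
    # placeholder performance function g(X); failure when g < 0
    return 3.0 - x

batch, tol = 10_000, 0.01
n, failures = 0, 0
prev = None
while True:
    x = rng.normal(0.0, 1.0, batch)   # stand-in for sampled ground/support inputs
    failures += int(np.count_nonzero(limit_state(x) < 0.0))
    n += batch
    pf = failures / n
    # stop when the running failure-probability estimate changes by < tol
    if prev is not None and prev > 0 and abs(pf - prev) / prev < tol:
        break
    prev = pf

print(f"P_f ≈ {pf:.5f} after {n} samples")
```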

  5. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    International Nuclear Information System (INIS)

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binary Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically
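
    A hedged sketch of the ARMA-fitting idea follows: a synthetic AR(2) series stands in for the binned fission-source tally, an ARMA(2,1) model is fitted with statsmodels, and the magnitude of the dominant AR root is read off as the dominance-ratio estimate. The paper's estimator and its statistical treatment are more elaborate; this only illustrates the fitting step.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)

# Synthetic stand-in for a binned Monte Carlo fission-source series over cycles:
# an AR(2) process whose AR roots are 0.85 and 0.30, so the "dominance ratio"
# of this toy series is 0.85.
phi1, phi2, n_cycles = 1.15, -0.255, 4000
y = np.zeros(n_cycles)
for i in range(2, n_cycles):
    y[i] = phi1 * y[i - 1] + phi2 * y[i - 2] + rng.normal()

# Fit ARMA(2,1), the order suggested by the linear-Markov argument, and take
# the dominant root of the AR characteristic polynomial as the estimate.
res = ARIMA(y, order=(2, 0, 1), trend="n").fit()
ar_roots = np.roots([1.0, -res.arparams[0], -res.arparams[1]])
print("estimated dominance ratio ≈", float(np.max(np.abs(ar_roots))))
```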

  6. Continuous energy Monte Carlo analysis of neutron shielding benchmark experiments with cross sections in JENDL-3

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, Kohtaro; Ohashi, Atsuto (Ship Research Inst., Mitaka, Tokyo (Japan)); Kawai, Masayoshi

    1993-04-01

    The iron, carbon and beryllium cross sections in JENDL-3 have been tested by continuous energy Monte Carlo analysis of neutron shielding benchmark experiments. The iron cross sections have been tested against the ORNL and Winfrith experiments using fission neutron sources, and also against the LLNL iron experiment using a D-T neutron source. The carbon and beryllium cross sections have been tested against the JAERI-FNS TOF experiments using a D-T neutron source. Revision of the subroutine TALLYD and an appropriate weight-window parameter assignment have been accomplished in the MCNP code. In consequence, the FSD for each energy bin is reduced to a level small enough that the Monte Carlo results for the neutron energy spectra can be regarded as reliable. The Monte Carlo calculations with JENDL-3 show good agreement with the benchmark experiments over a wide energy range, as a whole. In particular, for the Winfrith iron experiment, the results with JENDL-3 give better agreement, just below the iron 24 keV window, than those with ENDF/B-IV. For the JAERI-FNS TOF graphite experiment, the calculated angular fluxes with JENDL-3 give closer agreement than those with ENDF/B-IV at several peaks and dips caused by inelastic scattering. However, a distinct underestimation is observed in the calculated energy spectrum with JENDL-3 between 0.8 and 3.0 MeV for the two iron experiments using fission neutron sources. (author).

  7. Present status of Monte Carlo seminar for sub-criticality safety analysis in Japan

    International Nuclear Information System (INIS)

    This paper provides an overview of the methods and results of a series of sub-criticality safety analysis seminars for nuclear fuel cycle facilities using the Monte Carlo method, held in Japan from July 2000 to July 2003. In these seminars, the MCNP-4C2 system (MS-DOS version) was installed on notebook personal computers for the participants. The fundamental theory of reactor physics and Monte Carlo simulation, as well as the contents of the MCNP manual, were covered in lectures. Effective neutron multiplication factors and neutron spectra were calculated for examples such as the JCO deposit tank, the JNC uranium solution storage tank, the JNC plutonium solution storage tank and the JAERI TCA core. Safety management of nuclear fuel cycle facilities to prevent criticality accidents was discussed in some of the seminars. (author)

  8. A study on the radioactivity analysis of decommissioning concrete using Monte Carlo simulation

    International Nuclear Information System (INIS)

    In order to decommission the shielding concrete of KRR (Korea Research Reactor)-1 and 2, the level and range of activation caused by neutron irradiation during operation must be determined exactly. To determine the activation level and range, core samples must be taken and analyzed. However, there are difficulties in sample preparation and in determining the measurement efficiency affected by self-absorption. In this study, the full energy efficiency of the HPGe detector measured with a standard source was compared with the value calculated by Monte Carlo simulation. Self-absorption effects due to changes in the density and composition of the concrete were also calculated using the Monte Carlo method. The results will be used for the radioactivity analysis of real concrete core samples in the future

  9. A study on the radioactivity analysis of decommissioning concrete using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Bum Kyoung; Kim, Gye Hong; Chung, Un Soo; Lee, Keun Woo; Oh, Won Zin; Park, Jin Ho [KAERI, Taejon (Korea, Republic of)

    2004-07-01

    In order to decommission the shielding concrete of KRR (Korea Research Reactor)-1 and 2, the level and range of activation caused by neutron irradiation during operation must be determined exactly. To determine the activation level and range, core samples must be taken and analyzed. However, there are difficulties in sample preparation and in determining the measurement efficiency affected by self-absorption. In this study, the full energy efficiency of the HPGe detector measured with a standard source was compared with the value calculated by Monte Carlo simulation. Self-absorption effects due to changes in the density and composition of the concrete were also calculated using the Monte Carlo method. The results will be used for the radioactivity analysis of real concrete core samples in the future.

  10. Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)

    2014-05-15

    Identification and demining of landmines is a very important issue for the safety of people and for economic development. To address this issue, several methods have been proposed in the past. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as part of a complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. The Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To consider the soil effect, the average soil composition was analyzed and applied to the calculation. These results have been used to determine the specification of the landmine detector.

  11. Maximum likelihood method analysis - A procedure to estimate the energy of cosmic rays muons from the observed muon interactions with an electromagnetic calorimeter

    International Nuclear Information System (INIS)

    An electromagnetic sampling calorimeter is under construction at IPNE Bucharest for the determination of the energy of cosmic ray muons in the TeV range, consisting of lead absorber layers (1 cm thick) alternating with scintillator layers (3 cm thick). The possibility of estimating the energy of high-energy cosmic muons is scrutinized using simulations of the response of the detector (30 layers) with the GEANT code for incident energies in the range 1-30 TeV. A maximum likelihood analysis is presented as a procedure to determine the muon energy; it is applied to the detector response both to muons of discrete energies and to muons distributed according to the cosmic ray spectrum. (author) 17 Figs., 2 Tabs., 15 Refs

  12. Monte-Carlo Analysis of the Flavour Changing Neutral Current B \\to Gamma at Babar

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D. [Imperial College, London (United Kingdom)

    2001-09-01

    The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b→sγ. The analysis develops techniques that could be applied to real data, to discriminate between signal and background events in order to make a measurement of the branching ratio of this rare decay using the BaBar detector. Also included in this thesis is a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the Electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.

  13. First Monte Carlo analysis of fragmentation functions from single-inclusive $e^+ e^-$ annihilation

    CERN Document Server

    Sato, N; Melnitchouk, W; Hirai, M; Kumano, S; Accardi, A

    2016-01-01

    We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits, introduced by fixing parameters not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of the fragmentation functions obtained with the new IMC methodology compared with those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  14. Restricted maximum likelihood analysis of linkage between genetic markers and quantitative trait loci for a granddaughter design.

    NARCIS (Netherlands)

    Arendonk, van J.A.M.; Tier, B.; Bink, M.C.A.M.; Bovenhuis, H.

    1998-01-01

    REML for the estimation of location and variance of a single quantitative trait locus, together with polygenic and residual variance, is described for the analysis of a granddaughter design. The method is based on a mixed linear model that includes the allelic effects of the quantitative trait locus

  15. Image properties of list mode likelihood reconstruction for a rectangular positron emission mammography with DOI measurements

    International Nuclear Information System (INIS)

    A positron emission mammography scanner is under development at our Laboratory. The tomograph has a rectangular geometry consisting of four banks of detector modules. For each detector, the system can measure the depth of interaction information inside the crystal. The rectangular geometry leads to irregular radial and angular sampling and spatially variant sensitivity that are different from conventional PET systems. Therefore, it is of importance to study the image properties of the reconstructions. We adapted the theoretical analysis that we had developed for conventional PET systems to the list mode likelihood reconstruction for this tomograph. The local impulse response and covariance of the reconstruction can be easily computed using FFT. These theoretical results are also used with computer observer models to compute the signal-to-noise ratio for lesion detection. The analysis reveals the spatially variant resolution and noise properties of the list mode likelihood reconstruction. The theoretical predictions are in good agreement with Monte Carlo results

  16. The statistical analysis of dilution series by maximum likelihood: an application to in vitro bioassays estimating the potency of the diphteria component in vaccines by serology

    NARCIS (Netherlands)

    Slob W; Hendriksen CFM

    1989-01-01

    This report discusses the analysis of dilution series by maximum likelihood, with application to the in vitro serological testing of the potency of bacterial vaccines for human use. Computer simulations show that the maximum likelihood method is adequate for the sample sizes customary in potency ...

  17. MKENO-DAR: a direct angular representation Monte Carlo code for criticality safety analysis

    International Nuclear Information System (INIS)

    Improving upon the Monte Carlo code MULTI-KENO, the MKENO-DAR (Direct Angular Representation) code has been developed for detailed criticality safety analysis. A function was added to MULTI-KENO to represent anisotropic scattering strictly. With this function, the scattering angle of a neutron is determined not by the average scattering angle μ-bar of the P1 Legendre polynomial but by a random walk operation using a probability distribution function produced with the higher order Legendre polynomials. This code is available for the FACOM-M380 computer. This report is a computer code manual for MKENO-DAR. (author)

  18. FTREE. Single-history Monte Carlo analysis for radiation detection and measurement

    International Nuclear Information System (INIS)

    This work introduces FTREE, which describes the radiation cascades following the impingement of a source particle on matter. The ensuing radiation field is characterised interaction by interaction, accounting for each generation of secondaries recursively. Each progeny is uniquely differentiated and catalogued into a family tree; the kinship is identified without ambiguity. This mode of observation, analysis and presentation goes beyond present-day detector technologies, beyond conventional Monte Carlo simulations and beyond standard pedagogy. It is able to observe rare events far out in the Gaussian tail which would have been lost in averaging: events that are less probable, but no less correct in physics. (author)

  19. Microlens assembly error analysis for light field camera based on Monte Carlo method

    Science.gov (United States)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error and rotation error that could appear during microlens installation. By examining these images, the sub-aperture images and the refocused images, we found that the images present different degrees of blurring and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured images and other distortions that result in unclear refocused images.

  20. Markov chain Monte Carlo linkage analysis of a complex qualitative phenotype.

    Science.gov (United States)

    Hinrichs, A; Lin, J H; Reich, T; Bierut, L; Suarez, B K

    1999-01-01

    We tested a new computer program, LOKI, that implements a reversible jump Markov chain Monte Carlo (MCMC) technique for segregation and linkage analysis. Our objective was to determine whether this software, designed for use with continuously distributed phenotypes, has any efficacy when applied to the discrete disease states of the simulated Mordor data from GAW Problem 1. Although we were able to identify the genomic location for two of the three quantitative trait loci by repeated application of the software, the MCMC sampler experienced significant mixing problems, indicating that the method, as currently formulated in LOKI, was not suitable for the discrete phenotypes in this data set. PMID:10597502

  1. Monte Carlo depletion analysis of a PWR integral fuel burnable absorber by MCNAP

    International Nuclear Information System (INIS)

    The MCNAP is a personal-computer-based continuous energy Monte Carlo (MC) neutronics analysis program written in the C++ language. For the purpose of examining its qualification, a comparison of the depletion analyses of three integral burnable fuel assemblies of a pressurized water reactor (PWR) by MCNAP and by deterministic fuel assembly (FA) design vendor codes is presented. It is demonstrated that the continuous energy MC calculation by MCNAP can provide a very accurate neutronics analysis method for burnable absorber FAs. It is also demonstrated that parallel MC computation with multiple PCs enables one to complete the lifetime depletion analysis of the FAs in a matter of hours instead of days. (orig.)

  2. Data uncertainty analysis for safety assessment of HLW disposal by the Monte Carlo simulation

    International Nuclear Information System (INIS)

    Based on the conceptual model of the Reference Case, which is defined as the baseline for the various cases in the safety assessment of the H12 report, a new probabilistic simulation code that allows rapid evaluation of the effect of data uncertainty has been developed. Using this code, probabilistic simulation was performed by the Monte Carlo method, and the conservativeness and sufficiency of the safety assessment in the H12 report, which was performed deterministically, were confirmed. In order to identify the important parameters, this study includes an analysis of the sensitivity structure between the inputs and the outputs. Cluster analysis, followed by multiple regression analysis for each cluster, was applied in this analysis. As a result, the transmissivity had a strong influence on the uncertainty of the system performance. Furthermore, this approach was confirmed to be able to evaluate both the globally sensitive parameters and the locally sensitive parameters that strongly influence parts of the space of simulation results. (author)
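
    The cluster-then-regress sensitivity analysis described above might look roughly like the following sketch, with synthetic inputs and output in place of the repository simulation and with k-means clustering and standardized regression coefficients as assumed implementation choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 5000

# Synthetic stand-ins for sampled input parameters (e.g. log-transmissivity,
# sorption coefficients, ...) and a simulated dose-like output.
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2] + rng.normal(0.0, 0.2, n)

# cluster the joint input-output sample, then regress within each cluster
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    np.column_stack([X, y]))

for k in range(3):
    m = labels == k
    Xs = (X[m] - X[m].mean(0)) / X[m].std(0)     # standardize inputs
    ys = (y[m] - y[m].mean()) / y[m].std()
    coef = LinearRegression().fit(Xs, ys).coef_  # standardized coefficients
    print(f"cluster {k}: standardized regression coefficients = {np.round(coef, 2)}")
```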

  3. Uncertainty Assessment of the Core Thermal-Hydraulic Analysis Using the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sun Rock; Yoo, Jae Woon; Hwang, Dae Hyun; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    In the core thermal-hydraulic design of a sodium cooled fast reactor, the uncertainty factor analysis is a critical issue in order to assure safe and reliable operation. The deviations from the nominal values need to be quantitatively considered by statistical thermal design methods. Hot channel factors (HCF) were employed to evaluate the uncertainty in early designs such as the CRBRP. The improved thermal design procedure (ISTP) calculates the overall uncertainty based on the root-sum-square technique and sensitivity analyses of each design parameter. Another way to consider the uncertainties is to use the Monte Carlo method (MCM). In this method, all the input uncertainties are randomly sampled according to their probability density functions and the resulting distribution for the output quantity is analyzed. It is able to directly estimate the uncertainty effects and propagation characteristics for the present thermal-hydraulic model. However, it requires a huge computation time to get a reliable result because the accuracy is dependent on the sampling size. In this paper, the analysis of uncertainty factors using the Monte Carlo method is described. As a benchmark model, the ORNL 19-pin test is employed to validate the current uncertainty analysis method. The thermal-hydraulic calculation is conducted using the MATRA-LMR program, which was developed at KAERI based on the subchannel approach. The results are compared with those of the hot channel factors and the improved thermal design procedure

  4. Monte Carlo analysis of Very High Temperature gas-cooled Reactor for hydrogen production

    International Nuclear Information System (INIS)

    This work was pursued over two years. In the first year, the focus was the development of a Monte Carlo analysis method for a pebble-type VHTR core using a zero-power reactor. The pebble-bed cores of the HTR-PROTEUS critical facility in Switzerland were selected as the benchmark model and detailed full-scope MCNP modeling was carried out. In particular, accurate and effective modeling of the UO2 particles and their distribution within the fuel pebbles was pursued, as well as of the pebble distribution within the core region. After the detailed MCNP modeling of the whole facility, analyses of the nuclear characteristics were carried out, and the results were compared with experiments and with those of other research groups. The effective multiplication factors (keff) were calculated for the two HTR-PROTEUS cores, and the homogenization effect of the TRISO fuel on criticality was investigated. Control rod and shutdown rod worths were also calculated, and criticality calculations with a different cross-section library and various reflector thicknesses were carried out. In the second year of the research period, the Monte Carlo analysis method developed in the first year was applied to a core with thermal power. The pebble-bed cores of the HTR-10 test reactor in China were selected as the benchmark model. After detailed full-scope MCNP modeling, the Monte Carlo analysis results calculated in this work were verified against the benchmark results for the first criticality state and the initial core

  5. Application of Factor Analysis Using the Principal Component Analysis and Maximum Likelihood Methods to Factors Influencing the Provision of Supplementary Food to Infants Aged 0-6 Months in Pematang Panjang Village, Air Putih Subdistrict, Batubara Regency, 2013

    OpenAIRE

    Simarmata, Iska

    2014-01-01

    Factor analysis is one of the multivariate statistical analysis techniques. This analysis belongs to the interdependence techniques, with the aim of grouping data or forming a new set of variables, called factors. The parameter estimation methods commonly used in this analysis are the principal component analysis method and the maximum likelihood method. This research aims to compare the suitability of the models obtained by the principal component method and the ma...

  6. Correlation Between Brain Activation Changes and Cognitive Improvement Following Cognitive Remediation Therapy in Schizophrenia: An Activation Likelihood Estimation Meta-analysis

    Institute of Scientific and Technical Information of China (English)

    Yan-Yan Wei; Ji-Jun Wang; Chao Yan; Zi-Qiang Li; Xiao Pan; Yi Cui; Tong Su

    2016-01-01

    Background: Several studies using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have indicated that cognitive remediation therapy (CRT) might improve cognitive function by changing brain activations in patients with schizophrenia. However, the results were not consistent regarding which brain areas changed across studies. The present activation likelihood estimation (ALE) meta-analysis was conducted to investigate whether cognitive function change was accompanied by brain activation changes, and where the main areas most related to these changes were in schizophrenia patients after CRT. Analyses of whole-brain studies and whole-brain + region of interest (ROI) studies were compared to explore the effect of the different methodologies on the results. Methods: A computerized systematic search was conducted to collect fMRI and PET studies on brain activation changes in schizophrenia patients from pre- to post-CRT. Nine studies using fMRI techniques were included in the meta-analysis. GingerALE 2.3.1 was used to perform the meta-analysis across these imaging studies. Results: The main areas with increased brain activation were in the frontal and parietal lobes, including the left medial frontal gyrus, left inferior frontal gyrus, right middle frontal gyrus, right postcentral gyrus, and inferior parietal lobule in patients after CRT, yet no decreased brain activation was found. Although similar increased activation brain areas were identified in ALE with or without ROI studies, the analysis including ROI studies had a higher ALE value. Conclusions: The current findings suggest that CRT might improve the cognition of schizophrenia patients by increasing activations of the frontal and parietal lobes. In addition, including ROI studies in ALE meta-analyses might provide more evidence to confirm these results.

  7. Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory

    CERN Document Server

    Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O

    2014-01-01

    We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.

  8. Current status of safety analysis code MARS and uncertainty quantification by Monte-Carlo method

    International Nuclear Information System (INIS)

    MARS (Multi-dimensional Analysis of Reactor Safety) code has been developed since 1997 for realistic multi-dimensional thermal-hydraulic system analysis of light water reactor transients. The backbones of MARS are the RELAP5/MOD3.2.1.2 and COBRA-TF codes of the USNRC. These two codes were consolidated into a single code by integrating the hydrodynamic solution schemes. A new multidimensional TH model has been developed and extended to enable integrated coupled TH analysis through a code coupling technique, DLL. The motivation for uncertainty quantification of MARS is twofold: 1) to provide “best estimate plus uncertainty” analysis for licensing of commercial power reactors with realistic margins, and 2) to provide support to design and/or validation related analysis for research and production reactors. An assessment of the current LBLOCA uncertainty analysis methodology has been done using data from the integral thermal-hydraulic experiment LOFT L2-5. A Monte Carlo calculation has been performed and compared with the tolerance level determined by the Wilks formula. The calculation was completed within reasonable CPU time on a PC cluster system. The Monte Carlo exercise shows that the 95% upper limit value can be obtained with a 95% confidence level by the Wilks formula, although a 5% risk of PCT under-prediction has to be accepted. The results also show that the statistical fluctuation of the limit value using the first-order Wilks formula is as large as the PCT uncertainty itself. The main conclusion is that it is desirable to increase the order of the Wilks formula above the second order to obtain a reliable safety margin for the current design features. (author)
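
    The Wilks sample sizes referred to above can be reproduced with a short binomial calculation; the sketch below returns the familiar 59/93/124 runs for one-sided 95%/95% tolerance limits of orders 1 to 3.

```python
from scipy.stats import binom

def wilks_sample_size(order: int, coverage: float = 0.95,
                      confidence: float = 0.95) -> int:
    """Smallest n so that the `order`-th largest of n runs bounds the
    `coverage` quantile of the output with probability `confidence` (one-sided)."""
    n = order
    # need P(at least `order` samples exceed the coverage quantile) >= confidence
    while 1.0 - binom.cdf(order - 1, n, 1.0 - coverage) < confidence:
        n += 1
    return n

for m in (1, 2, 3):
    print(m, wilks_sample_size(m))   # -> 59, 93, 124
```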

  9. Maximum Likelihood Mosaics

    CERN Document Server

    Pires, Bernardo Esteves

    2010-01-01

    The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.

  10. Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Brunetti, Antonio, E-mail: brunetti@uniss.it [Polcoming Department, University of Sassari (Italy); Golosio, Bruno [Polcoming Department, University of Sassari (Italy); Schoonjans, Tom; Oliva, Piernicola [Chemical and Pharmaceutical Department, University of Sassari (Italy)

    2015-06-01

    The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayers, such as bronze with patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples.

  11. Speciation model selection by Monte Carlo analysis of optical absorption spectra: Plutonium(IV) nitrate complexes

    International Nuclear Information System (INIS)

    Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are β1=3.2±0.5 and β2=11.2±1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy
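
    A generic sketch of propagating raw-data uncertainties into parameter confidence limits by Monte Carlo is shown below; the two-parameter linear response model and the noise level are placeholders, not the actual speciation model or titration data.

```python
import numpy as np

rng = np.random.default_rng(7)

x = np.linspace(0.0, 2.0, 40)                  # e.g. nitrate molality grid
true = 3.2 + 1.5 * x                           # placeholder response model
data = true + rng.normal(0.0, 0.1, x.size)     # "measured" spectral feature

def fit(y):
    # least-squares fit of the placeholder two-parameter model
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Monte Carlo replicates: perturb the data within its noise and refit
reps = np.array([fit(data + rng.normal(0.0, 0.1, x.size)) for _ in range(2000)])
lo, hi = np.percentile(reps[:, 0], [2.5, 97.5])
print(f"parameter 1 = {fit(data)[0]:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```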

  12. Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayers, such as bronze with patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples

  13. Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

    International Nuclear Information System (INIS)

    This study deals with the neutronic analysis of the current core configuration of a 3-MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous energy cross-section data from ENDF/B-VI and ENDF/B-V and S(α,β) scattering functions from the ENDF/B-VI library were used. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking the TRIGA experiments. The effective multiplication factor, power distribution and peaking factors, neutron flux distribution, and reactivity experiments comprising control rod worths, critical rod height, excess reactivity and shutdown margin were used in the validation process. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the TRIGA reactor is simulated adequately

  14. Neutronic Analysis of the 3 MW TRIGA MARK II Research Reactor, Part I: Monte Carlo Simulation

    International Nuclear Information System (INIS)

    This study deals with the neutronic analysis of the current core configuration of a 3 MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The three-dimensional continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous energy cross-section data from ENDF/B-VI and S(α, β) scattering functions from the ENDF/B-V library were used. The validation of the model against benchmark experimental results is presented. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the Monte Carlo model is correctly simulating the TRIGA reactor. (author)

  15. The statistical analysis of dilution series by maximum likelihood: an application to in vitro bioassays estimating the potency of the diphteria component in vaccines by serology

    OpenAIRE

    Slob W; Hendriksen CFM

    1989-01-01

    This report discusses the analysis of dilution series by maximum likelihood, with application to the in vitro serological testing of the potency of bacterial vaccines for human use. Computer simulations show that the maximum likelihood method is adequate for the sample sizes customary in potency studies. The relationship between the antitoxin response and the vaccine dilution is well described by a straight line on a double-log scale within the usual experimental ...

  16. The timing resolution of scintillation-detector systems: Monte Carlo analysis

    International Nuclear Information System (INIS)

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and

  17. The timing resolution of scintillation-detector systems: Monte Carlo analysis.

    Science.gov (United States)

    Choong, Woon-Seng

    2009-11-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and
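
    As a rough illustration of the kind of simulation described above, the following Python sketch models photoelectron emission with a bi-exponential rate function, smears each photoelectron with the PMT transit-time spread, and applies first-photoelectron time pick-off to estimate a coincidence timing resolution. All parameter values (photoelectron yield, rise and decay times, transit-time spread) are hypothetical, and the amplifier response, crystal geometry and light-transport effects of the full Geant4-based model are omitted.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical parameters (not taken from the paper)
      MEAN_PE_YIELD = 300          # mean photoelectrons per 511 keV event
      DECAY_TIME_NS = 40.0         # scintillator decay time constant
      RISE_TIME_NS = 0.5           # scintillator rise time constant
      TTS_SIGMA_NS = 0.25 / 2.355  # PMT transit-time spread (0.25 ns FWHM -> sigma)
      N_EVENTS = 20000

      def first_pe_time():
          """First-photoelectron detection time for one scintillation event."""
          n_pe = rng.poisson(MEAN_PE_YIELD)
          if n_pe == 0:
              return np.nan
          # Bi-exponential emission profile sampled as the sum of a rise-time and a
          # decay-time exponential draw (hypoexponential distribution).
          emission = rng.exponential(RISE_TIME_NS, n_pe) + rng.exponential(DECAY_TIME_NS, n_pe)
          # Smear each photoelectron with the PMT transit-time spread.
          arrival = emission + rng.normal(0.0, TTS_SIGMA_NS, n_pe)
          return arrival.min()     # first-photoelectron time pick-off

      # Coincidence timing between two identical detectors
      dt = np.array([first_pe_time() - first_pe_time() for _ in range(N_EVENTS)])
      dt = dt[~np.isnan(dt)]
      print(f"coincidence timing resolution (Gaussian-equivalent FWHM): {2.355 * dt.std():.3f} ns")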

  18. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some mild conditions. When the background driving Lévy process is of type A or B, we show that the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of proposed estimators.

  19. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    Science.gov (United States)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on the skin of the human hand during upper limb occlusion and on the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.

  20. A bottom collider vertex detector design, Monte-Carlo simulation and analysis package

    International Nuclear Information System (INIS)

    A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the 'golden' CP-violating mode Bd → π+π- is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques and related software rather than physics potential. 20 refs., 46 figs

  1. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  2. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-“coupled”- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  3. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
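
    The sketch below is not the authors' goal-oriented coupling; it only illustrates the Common Random Number baseline mentioned above, against independent sampling, for a finite-difference sensitivity of the stationary coverage of a toy adsorption-desorption kinetic Monte Carlo model. The lattice size, rates and perturbation are hypothetical; the point is the variance reduction obtained when the perturbed and unperturbed chains share random numbers.

      import numpy as np

      def mean_coverage(k_ads, k_des, n_sites=50, t_end=20.0, seed=0):
          """Gillespie KMC for independent-site adsorption/desorption; returns the
          time-averaged coverage over [0, t_end]."""
          rng = np.random.default_rng(seed)
          occupied = np.zeros(n_sites, dtype=bool)
          t, acc = 0.0, 0.0
          while t < t_end:
              n_occ = occupied.sum()
              r_ads = k_ads * (n_sites - n_occ)       # total adsorption rate
              r_des = k_des * n_occ                   # total desorption rate
              r_tot = r_ads + r_des
              dt = rng.exponential(1.0 / r_tot)
              acc += (min(t + dt, t_end) - t) * n_occ / n_sites
              t += dt
              if rng.random() < r_ads / r_tot:        # adsorb on a random empty site
                  occupied[rng.choice(np.flatnonzero(~occupied))] = True
              else:                                   # desorb from a random occupied site
                  occupied[rng.choice(np.flatnonzero(occupied))] = False
          return acc / t_end

      k_ads, k_des, eps, n_rep = 1.0, 2.0, 0.05, 40

      # Finite-difference sensitivity d<coverage>/dk_ads from independent samples ...
      indep = [mean_coverage(k_ads + eps, k_des, seed=2 * i) -
               mean_coverage(k_ads, k_des, seed=2 * i + 1) for i in range(n_rep)]
      # ... and with Common Random Numbers: both runs of a pair share the same seed
      crn = [mean_coverage(k_ads + eps, k_des, seed=i) -
             mean_coverage(k_ads, k_des, seed=i) for i in range(n_rep)]

      for name, d in (("independent", np.array(indep)), ("CRN-coupled", np.array(crn))):
          print(f"{name:12s} sensitivity ~ {d.mean() / eps:+.3f}, "
                f"std. error {d.std(ddof=1) / (eps * np.sqrt(n_rep)):.3f}")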

  4. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e. a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.

  5. Development of a component Monte Carlo program for accident sequence analysis to apply for reprocessing facility

    International Nuclear Information System (INIS)

    In view of application to a reprocessing facility, where a variety of causal events such as equipment failure and human error might occur and the event progression would take place with a relatively substantial time delay before reaching the accident stage, a component Monte Carlo program for accident sequence analysis has been developed to follow chronologically, and in an exact manner, the probabilistic behavior of each component failure and repair. In comparison with an analytical formulation and its calculated results, this Monte Carlo technique is shown to predict reasonable results. Then, taking as a sample problem a German reprocessing facility model, an accident sequence of red-oil explosion in a plutonium evaporator is analyzed to give a comprehensive interpretation of the statistical variation range and the computer time elapsed for random-walk history calculations. Furthermore, to discuss its applicability to the practical case of a plant system with a complex component constitution, a possibility of drastic speed-up of the computation is shown by parallelization of the computer program. (author)
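
    A minimal sketch of the component-level random-walk idea: each component alternates between exponentially distributed operating and repair periods, and each Monte Carlo history is scanned chronologically for instants at which all redundant components are simultaneously failed. The failure and repair rates, mission time and redundancy level below are hypothetical and unrelated to the reprocessing facility model.

      import numpy as np

      rng = np.random.default_rng(1)

      MISSION_T = 8760.0            # mission time in hours (hypothetical)
      LAMBDA = [1e-3, 2e-3, 1e-3]   # failure rates of three redundant components (1/h)
      MU = [0.1, 0.1, 0.05]         # repair rates (1/h)
      N_HIST = 20000                # number of random-walk histories

      def down_intervals(lam, mu):
          """Chronological (failure, repair) intervals of one component over the mission."""
          t, intervals = 0.0, []
          while True:
              t_fail = t + rng.exponential(1.0 / lam)
              if t_fail >= MISSION_T:
                  return intervals
              t_repair = min(t_fail + rng.exponential(1.0 / mu), MISSION_T)
              intervals.append((t_fail, t_repair))
              t = t_repair

      def intersect(a, b):
          """Intersection of two chronologically ordered interval lists."""
          out, i, j = [], 0, 0
          while i < len(a) and j < len(b):
              s, e = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
              if s < e:
                  out.append((s, e))
              if a[i][1] < b[j][1]:
                  i += 1
              else:
                  j += 1
          return out

      n_top = 0
      for _ in range(N_HIST):
          down = [down_intervals(l, m) for l, m in zip(LAMBDA, MU)]
          common = down[0]
          for other in down[1:]:
              common = intersect(common, other)
              if not common:
                  break
          n_top += bool(common)

      p = n_top / N_HIST
      print(f"P(all components down at some instant) ~ {p:.4f} "
            f"+/- {np.sqrt(p * (1 - p) / N_HIST):.4f}")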

  6. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    International Nuclear Information System (INIS)

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e. a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.

  7. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients.

    Directory of Open Access Journals (Sweden)

    Cosimo Urgesi

    2014-05-01

    Full Text Available Several neurophysiologic and neuroimaging studies suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others’ actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others’ actions. The specific anatomical substrates of such neuropsychological deficits however are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding.

  8. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric models

  9. Refined Monte Carlo analysis of the H.B. Robinson-2 reactor pressure vessel dosimetry benchmark

    International Nuclear Information System (INIS)

    Highlights:
    → Activation of in- and ex-vessel radiometric dosimeters is studied with MCNPX.
    → Influences of neutron source definition and cross-section libraries are examined.
    → The 237Np(n,f) energy cut-off is set at 10 eV to cover the reaction completely.
    → Different methods for deriving activities from reaction rates are compared.
    → Uncertainties are evaluated and are below 10%, final C/E ratios being within 15%.

    Abstract: Refined analysis, based on use of the Monte Carlo code MCNPX-2.4.0, is presented for the 'H.B. Robinson-2 pressure vessel dosimetry benchmark', which is a part of the Radiation Shielding and Dosimetry Experiments Database (SINBAD). First, the performance of the Monte Carlo methodology is reassessed relative to the reported deterministic results obtained with DORT; the analysis is accompanied by a quantitative evaluation of the optimal energy cut-off value for each of the in- and ex-vessel dosimeters that were employed. Second, a more realistic definition of the neutron source is implemented than proposed in the benchmark. Thus, the current procedure for power-to-neutron-source-strength conversion, as well as for explicitly considering the burnup-dependent, fuel-assembly-wise average fission neutron spectrum, is found to affect the calculated values significantly. In addition to the modelling refinements made, different approaches are tested for deriving the dosimeter activities, such that the neutron source time-evolution and the activity decay can be taken into account more accurately. Finally, in order to achieve a certain assessment of uncertainties, several sensitivity studies are carried out, e.g. with respect to the nuclear data used for the dosimeters, as well as to the assumed physical location of the dosimeters. In spite of some apparent degradation in the prediction of experimental results when refining the Monte Carlo modelling, the final calculation/experiment (C/E) ratios for the measured dosimeter activities remain

  10. Quantum Monte Carlo for Noncovalent Interactions: Analysis of Protocols and Simplified Scheme Attaining Benchmark Accuracy

    CERN Document Server

    Dubecký, Matúš; Jurečka, Petr; Mitas, Lubos; Hobza, Pavel; Otyepka, Michal

    2014-01-01

    Reliable theoretical predictions of noncovalent interaction energies, which are important e.g. in drug-design and hydrogen-storage applications, belong to longstanding challenges of contemporary quantum chemistry. In this respect, the fixed-node diffusion Monte Carlo (FN-DMC) is a promising alternative to the commonly used "gold standard" coupled-cluster CCSD(T)/CBS method for its benchmark accuracy and favourable scaling, in contrast to other correlated wave function approaches. This work is focused on the analysis of protocols and possible tradeoffs for FN-DMC estimations of noncovalent interaction energies and proposes a significantly more efficient yet accurate computational protocol using simplified explicit correlation terms. Its performance is illustrated on a number of weakly bound complexes, including water dimer, benzene/hydrogen, T-shape benzene dimer and stacked adenine-thymine DNA base pair complex. The proposed protocol achieves excellent agreement (~0.2 kcal/mol) with respect to the reli...

  11. 2D Monte Carlo analysis of radiological risk assessment for the food intake in Korea

    International Nuclear Information System (INIS)

    Most public health risk assessments assume and combine a series of average, conservative and worst-case values to derive an acceptable point estimate of risk. To improve quality of risk information, insight of uncertainty in the assessments is needed and more emphasis is put on the probabilistic risk assessment. Probabilistic risk assessment studies use probability distributions for one or more variables of the risk equation in order to quantitatively characterize variability and uncertainty. In this study, an advanced technique called the two-dimensional Monte Carlo analysis (2D MCA) is applied to estimation of internal doses from intake of radionuclides in foodstuffs and drinking water in Korea. The variables of the risk model along with the parameters of these variables are described in terms of probability density functions (PDFs). In addition, sensitivity analyses were performed to identify important factors to the radiation doses. (author)
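
    A minimal sketch of the two-dimensional Monte Carlo structure: an outer loop samples epistemically uncertain quantities (here a dose coefficient and a contamination level) described by PDFs, while an inner loop samples inter-individual variability (here intake rates), so that uncertainty about a variability percentile can be reported. All distributions and numbers are hypothetical and are not the Korean food-intake data.

      import numpy as np

      rng = np.random.default_rng(7)

      N_OUTER = 500    # uncertainty loop (dose coefficient, contamination level)
      N_INNER = 2000   # variability loop (individual intake rates)

      p95_doses = np.empty(N_OUTER)
      for i in range(N_OUTER):
          # Outer loop: sample uncertain parameters (hypothetical lognormal PDFs)
          dose_coeff = rng.lognormal(mean=np.log(1.3e-8), sigma=0.3)   # Sv/Bq
          concentration = rng.lognormal(mean=np.log(50.0), sigma=0.5)  # Bq/kg

          # Inner loop: sample variability across individuals
          intake = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=N_INNER)  # kg/day
          annual_dose = intake * 365.0 * concentration * dose_coeff          # Sv/year

          # Keep a summary of the variability distribution for this uncertainty draw
          p95_doses[i] = np.percentile(annual_dose, 95)

      # Uncertainty about the 95th-percentile individual dose
      lo, med, hi = np.percentile(p95_doses, [5, 50, 95])
      print(f"95th-percentile annual dose: median {med:.2e} Sv "
            f"(90% uncertainty interval {lo:.2e} - {hi:.2e} Sv)")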

  12. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was applied. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability...

  13. ANALYSIS OF NEIGHBORHOOD IMPACTS ARISING FROM IMPLEMENTATION OF SUPERMARKETS IN CITY OF SÃO CARLOS

    Directory of Open Access Journals (Sweden)

    Pedro Silveira Gonçalves Neto

    2010-12-01

    The study included supermarkets of different sizes (small, medium and large, defined on the basis of the area occupied by the project and the volume of activity) located in São Carlos (São Paulo state, Brazil), to evaluate the influence of project size on the neighborhood impacts generated by these supermarkets. Since the analysis was carried out post-deployment, the influence of factors such as the location of the enterprises, the size of the building and the areas of influence on increased population density and changes in building use was considered. Relating the variables describing the spatial impacts was made possible by the use of a geographic information system. It was noted that the legislation does not provide suitable conditions to guide studies of urban impacts, owing to the complex integration between the urban and impacting components.

  14. Ligand-receptor binding kinetics in surface plasmon resonance cells: A Monte Carlo analysis

    CERN Document Server

    Carroll, Jacob; Forsten-Williams, Kimberly; Täuber, Uwe C

    2016-01-01

    Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of s...

  15. Outlier detection in near-infrared spectroscopic analysis by using Monte Carlo cross-validation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An outlier detection method is proposed for near-infrared spectral analysis. The underlying philosophy of the method is that, in random test (Monte Carlo) cross-validation, the probability of outliers appearing in good models with a smaller prediction residual error sum of squares (PRESS), or in bad models with a larger PRESS, should be clearly different from that of normal samples. The method first builds a large number of PLS models by using random test cross-validation, then the models are sorted by their PRESS, and finally the outliers are recognized according to the accumulative probability of each sample in the sorted models. For validation of the proposed method, four data sets, including three published data sets and a large data set of tobacco lamina, were investigated. The proposed method proved to be highly efficient and accurate compared with the conventional leave-one-out (LOO) cross-validation method.
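
    A simplified variant of the idea, sketched on synthetic data with scikit-learn's PLS regression: many random calibration/validation splits are generated, each model is scored by its PRESS, and samples that are systematically over-represented in the calibration sets of the best models (because their presence in a validation set inflates PRESS) are flagged. The data, number of models, split fraction and scoring rule are illustrative choices, not the published procedure in detail.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)

      # Synthetic "spectra": 60 normal samples plus 3 gross outliers (hypothetical data)
      n, p = 63, 100
      X = rng.normal(size=(n, p))
      y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n)
      outliers = [10, 25, 40]
      y[outliers] += 5.0                  # corrupt the reference values of three samples

      N_MODELS, CAL_FRAC = 2000, 0.7
      n_cal = int(CAL_FRAC * n)
      press = np.empty(N_MODELS)
      in_cal = np.zeros((N_MODELS, n), dtype=bool)

      for m in range(N_MODELS):
          cal = rng.choice(n, size=n_cal, replace=False)
          val = np.setdiff1d(np.arange(n), cal)
          in_cal[m, cal] = True
          pls = PLSRegression(n_components=5).fit(X[cal], y[cal])
          press[m] = np.sum((y[val] - pls.predict(X[val]).ravel()) ** 2)

      # Outliers inflate PRESS whenever they land in the validation set, so they are
      # over-represented in the calibration sets of the best models and
      # under-represented in those of the worst models.
      order = np.argsort(press)
      k = N_MODELS // 4
      freq_good = in_cal[order[:k]].mean(axis=0)    # best 25% of models (smallest PRESS)
      freq_bad = in_cal[order[-k:]].mean(axis=0)    # worst 25% of models (largest PRESS)
      score = freq_good - freq_bad

      print("most suspicious samples:", np.argsort(score)[::-1][:5], "true outliers:", outliers)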

  16. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    Science.gov (United States)

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442

  17. Monte Carlo analysis of doppler reactivity coefficient for UO2 pin cell geometry

    International Nuclear Information System (INIS)

    Monte Carlo analysis has been performed to investigate the impact of the exact resonance elastic scattering model on the Doppler reactivity coefficient for the UO2 pin cell geometry with the parabolic temperature profile. As a result, the exact scattering model affects the coefficient similarly for both the flat and parabolic temperature profiles; it increases the contribution of uranium-238 resonance capture in the energy region from ∼16 eV to ∼150 eV and does so uniformly in the radial direction. The following conclusions then hold for both the exact and asymptotic resonance scattering models. The Doppler reactivity coefficient is well reproduced with the definition of the effective fuel temperature (equivalent flat temperature) proposed by Grandi et al. In addition, the effective fuel temperature volume-averaged over the entire fuel region negatively overestimates the reference Doppler reactivity coefficient, but the calculated value can be significantly improved by dividing the fuel region into a few equal-volume subregions. (author)

  18. New approach to spectrum analysis. Iterative Monte Carlo simulations and fitting

    International Nuclear Information System (INIS)

    A novel spectrum analysis code which combines the Monte Carlo simulations with spectrum fitting is introduced. The shapes used in the fitting are obtained from the simulations. The code is developed especially to analyze complex alpha particle energy spectra - such as those obtained from non-processed air filters, swipe samples or isolated particles emitting alpha radiation. In addition to activities of the nuclides present in the sample, the code can provide source characterization. In particular, the code can be used to characterize samples of nuclear material, i.e. those containing fissionable isotopes such as 235U or 239Pu. In the present paper we illustrate the use of the code to identify and quantify alpha-particle emitting isotopes in a depleted U projectile found in Kosovo. (author)

  19. Contrast to Noise Ratio and Contrast Detail Analysis in Mammography: A Monte Carlo Study

    Science.gov (United States)

    Metaxas, V.; Delis, H.; Kalogeropoulou, C.; Zampakis, P.; Panayiotakis, G.

    2015-09-01

    The mammographic spectrum is one of the major factors affecting image quality in mammography. In this study, a Monte Carlo (MC) simulation model was used to evaluate the image quality characteristics of various mammographic spectra. The anode/filter combinations evaluated were those traditionally used in mammography, for tube voltages between 26 and 30 kVp. The imaging performance was investigated in terms of Contrast-to-Noise Ratio (CNR) and Contrast Detail (CD) analysis involving human observers and a mathematical CD phantom. Soft spectra provided the best characteristics in terms of both CNR and CD scores, while tube voltage had a limited effect. W-anode spectra filtered with k-edge filters demonstrated an improved performance that was sometimes better than that of softer x-ray spectra produced by Mo or Rh anodes. Regarding the filter material, k-edge filters showed superior performance compared to Al filters.

  20. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    Science.gov (United States)

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442

  1. Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method

    International Nuclear Information System (INIS)

    An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method with excellent flexibility and expandability; this method considers both solar limb darkening and the surface slope error of the reflectors in analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace of the Korea Institute of Energy Research (KIER) shows good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 suns. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators

  2. Monte Carlo analysis of direct measurements of the fission neutron yield per absorption by 233U and 235U of monochromatic neutrons

    International Nuclear Information System (INIS)

    Monte Carlo analysis of the measurements of Smith et al. of the number of fission neutrons produced per neutron absorbed, η, for 2200 m/sec neutrons absorbed by 233U and 235U yields: η2200(233U) = 2.2993 ± 0.0082 and η2200(235U) = 2.0777 ± 0.0064. The standard deviations include Monte Carlo, cross section, and experimental uncertainties. The Monte Carlo analysis was confirmed by calculating measured quantities used by the experimentalists in determining η2200

  3. Enrichment effects on CANDU-SEU spent fuel Monte Carlo shielding analysis

    International Nuclear Information System (INIS)

    Shielding analyses are an essential component of nuclear safety, the main task being the estimation of radiation doses in order to keep them below specified limits. According to IAEA data, more than 10 million packages containing radioactive materials are transported worldwide every year. All the problems arising from ensuring the safe transport of radioactive materials must be carefully settled. In the last decade, both for operating reactors and future reactor projects, a general trend towards raising the discharge fuel burnup has been recorded worldwide. For CANDU type reactors, the most attractive solution seems to be SEU and RU fuel utilization. The basic tasks accomplished by the shielding calculations in a nuclear safety analysis consist in dose rate calculations to prevent any risk, both for personnel protection and for the impact on the environment, during spent fuel handling, transport and storage. The paper aims to study the effects induced by fuel enrichment variation on CANDU-SEU spent fuel photon dose rates in a Monte Carlo shielding analysis applied to spent fuel transport after a defined cooling period in the NPP pools. The fuel bundle designs considered here have 43 Zircaloy rods filled with SEU fuel pellets, the fuel having different enrichments in U-235. All the geometrical and material data related to the cask were considered according to the shipping cask type B model. After a photon source profile calculation using the ORIGEN-S code, the shielding calculations were performed with the Monte Carlo MORSE-SGC code, both codes being included in ORNL's SCALE 5 system. The photon dose rates at the shipping cask wall and in air, at different distances from the cask, have been estimated. Finally, a comparison of photon dose rates for different fuel enrichments has been performed. (author)

  4. Modified maximum likelihood registration based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Yongqing Qi; Zhongliang Jing; Shiqiang Hu

    2007-01-01

    The bias estimation of passive sensors is considered based on information fusion in a multi-platform multisensor tracking system. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.

  5. Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis

    International Nuclear Information System (INIS)

    The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on-board LPF will consist of an exhaustive suite of experiments and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to have an understanding of every instrument on-board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is the Markov Chain Monte Carlo. A series of experiments are going to take place during flight operations and each experiment is going to provide us with essential information for the next in the sequence. Therefore, it is a priority to optimize and improve our tools available for data analysis during the mission. Using a Bayesian framework analysis allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal, which is no other than creating a robust and reliable tool for parameter estimation during the LPF mission.
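
    A minimal sketch of the Bayesian MCMC ingredient described above: a random-walk Metropolis sampler whose posterior combines a Gaussian likelihood for a toy linear response with a Gaussian prior standing in for knowledge carried over from a previous experiment. The model, noise level and prior are hypothetical and unrelated to the actual LTP noise models.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "measurement": a noisy linear response y = a * x + noise
      true_a, noise_sigma = 2.5, 0.5
      x = np.linspace(0.0, 1.0, 50)
      y = true_a * x + rng.normal(0.0, noise_sigma, x.size)

      # Prior knowledge from an earlier experiment (hypothetical)
      prior_mean, prior_sigma = 2.0, 1.0

      def log_posterior(a):
          log_lik = -0.5 * np.sum((y - a * x) ** 2) / noise_sigma ** 2
          log_prior = -0.5 * (a - prior_mean) ** 2 / prior_sigma ** 2
          return log_lik + log_prior

      # Random-walk Metropolis sampler
      n_steps, step = 20000, 0.1
      chain = np.empty(n_steps)
      a_cur, lp_cur = prior_mean, log_posterior(prior_mean)
      accepted = 0
      for i in range(n_steps):
          a_prop = a_cur + step * rng.normal()
          lp_prop = log_posterior(a_prop)
          if np.log(rng.random()) < lp_prop - lp_cur:   # Metropolis acceptance rule
              a_cur, lp_cur = a_prop, lp_prop
              accepted += 1
          chain[i] = a_cur

      burn = chain[n_steps // 4:]                        # discard burn-in
      print(f"acceptance rate {accepted / n_steps:.2f}, "
            f"posterior a = {burn.mean():.3f} +/- {burn.std():.3f} (true {true_a})")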

  6. Subtracting and Fitting Histograms using Profile Likelihood

    CERN Document Server

    D'Almeida, F M L

    2008-01-01

    It is known that many interesting signals expected at the LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated by Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% Confidence Level, for a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show a good performance and avoid the problem of negative values when subtracting histograms.
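
    A minimal numerical sketch of the per-bin construction, assuming SciPy is available: the observed count is Poisson(s + b), the Monte Carlo background sample gives an independent Poisson constraint on b through a scale factor tau, the nuisance parameter b is profiled out, and a 68.3% confidence interval for the signal s is read off the profile likelihood. The counts and tau below are hypothetical, and only a single bin is treated.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import poisson

      # Hypothetical single bin: observed count, MC background count, MC/data scale factor
      n_obs, m_mc, tau = 25, 60, 4.0

      def profile_nll(s):
          """Negative log-likelihood at signal s with the background yield b profiled out."""
          nll_b = lambda b: -(poisson.logpmf(n_obs, s + b) + poisson.logpmf(m_mc, tau * b))
          return minimize_scalar(nll_b, bounds=(1e-9, 10.0 * (n_obs + m_mc)),
                                 method="bounded").fun

      # Scan the signal yield and build the profile-likelihood curve
      s_grid = np.linspace(0.0, 40.0, 401)
      nll = np.array([profile_nll(s) for s in s_grid])
      s_hat = s_grid[nll.argmin()]

      # 68.3% CL interval: 2 * (nll(s) - nll_min) <= 1
      inside = s_grid[2.0 * (nll - nll.min()) <= 1.0]
      print(f"signal estimate s_hat ~ {s_hat:.1f}, "
            f"68.3% CL interval ~ [{inside.min():.1f}, {inside.max():.1f}]")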

  7. Statistical Modification Analysis of Helical Planetary Gears based on Response Surface Method and Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun; GUO Fan

    2015-01-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gear systems. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in tooth modification amounts on the dynamic behavior of a helical planetary gear, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.

  8. Maximum likelihood topographic map formation.

    Science.gov (United States)

    Van Hulle, Marc M

    2005-03-01

    We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004

  9. Invariants and Likelihood Ratio Statistics

    OpenAIRE

    McCullagh, P.; Cox, D. R.

    1986-01-01

    Because the likelihood ratio statistic is invariant under reparameterization, it is possible to make a large-sample expansion of the statistic itself and of its expectation in terms of invariants. In particular, the Bartlett adjustment factor can be expressed in terms of invariant combinations of cumulants of the first two log-likelihood derivatives. Such expansions are given, first for a scalar parameter and then for vector parameters. Geometrical interpretation is given where possible and s...

  10. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  11. New strategies of sensitivity analysis capabilities in continuous-energy Monte Carlo code RMC

    International Nuclear Information System (INIS)

    Highlights:
    • Data decomposition techniques are proposed for memory reduction.
    • New strategies are put forward and implemented in the RMC code to improve efficiency and accuracy of sensitivity calculations.
    • A capability to compute region-specific sensitivity coefficients is developed in the RMC code.

    Abstract: The iterated fission probability (IFP) method has been demonstrated to be an accurate alternative for estimating the adjoint-weighted parameters in continuous-energy Monte Carlo forward calculations. However, the memory requirements of this method are huge, especially when a large number of sensitivity coefficients are desired. Therefore, data decomposition techniques are proposed in this work. Two parallel strategies based on the neutron production rate (NPR) estimator and the fission neutron population (FNP) estimator for adjoint fluxes, as well as a more efficient algorithm which has multiple overlapping blocks (MOB) in a cycle, are investigated and implemented in the continuous-energy Reactor Monte Carlo code RMC for sensitivity analysis. Furthermore, a region-specific sensitivity analysis capability is developed in RMC. These new strategies, algorithms and capabilities are verified against analytic solutions of a multi-group infinite-medium problem and against results from other software packages including MCNP6, TSUNAMI-1D and multi-group TSUNAMI-3D. While the results generated by the NPR and FNP strategies agree within 0.1% of the analytic sensitivity coefficients, the MOB strategy surprisingly produces sensitivity coefficients exactly equal to the analytic ones. Meanwhile, the results generated by the three strategies in RMC are in agreement with those produced by other codes within a few percent. Moreover, the MOB strategy performs the most efficient sensitivity coefficient calculations (offering as much as an order of magnitude gain in FoMs over MCNP6), followed by the NPR and FNP strategies, and then MCNP6. The results also reveal that these

  12. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of the keff and fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of the keff and fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods, instead of the apparent variances, provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of the individual pins of the system. (authors)

  13. Use of Monte Carlo Bootstrap Method in the Analysis of Sample Sufficiency for Radioecological Data

    International Nuclear Information System (INIS)

    There are operational difficulties in obtaining samples for radioecological studies. Population data may no longer be available during the study and obtaining new samples may not be possible. These problems sometimes force the researcher to work with a small number of data. It is therefore difficult to know whether the number of samples will be sufficient to estimate the desired parameter, and hence the analysis of sample sufficiency is critical. Classical statistical methods are not well suited to analyzing sample sufficiency in radioecology, because naturally occurring radionuclides have a random distribution in soil and outliers and gaps with missing values usually arise. The present work was developed with the aim of applying the Monte Carlo bootstrap method to the analysis of sample sufficiency, with quantitative estimation of a single variable such as the specific activity of a natural radioisotope present in plants. The pseudo-population was a small sample with 14 values of specific activity of 226Ra in forage palm (Opuntia spp.). A computational procedure to calculate the number of sample values was implemented in the R software. The resampling process with replacement took the 14 values of the original sample and produced 10,000 bootstrap samples for each round. The estimated average θ was then calculated for samples with 2, 5, 8, 11 and 14 values randomly selected. The results showed that if the researcher works with only 11 sample values, the average parameter will be within a confidence interval with 90% probability. (Author)
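
    A minimal sketch of the resampling scheme, written in Python rather than R: a 14-value pseudo-population is resampled with replacement 10,000 times for sub-sample sizes of 2, 5, 8, 11 and 14, and the spread of the bootstrap mean is inspected for each size. The specific-activity values are invented for illustration and are not the 226Ra data of the study.

      import numpy as np

      rng = np.random.default_rng(11)

      # Hypothetical pseudo-population: 14 specific-activity values (Bq/kg), not the real data
      activity = np.array([12.1, 9.8, 15.3, 11.0, 8.7, 14.2, 10.5,
                           13.8, 9.1, 16.0, 12.9, 10.2, 11.7, 35.4])  # last value: an outlier

      N_BOOT = 10000
      print("subsample size | mean of bootstrap means | 90% interval")
      for k in (2, 5, 8, 11, 14):
          # Draw k values with replacement from the pseudo-population, N_BOOT times
          boot_means = rng.choice(activity, size=(N_BOOT, k), replace=True).mean(axis=1)
          lo, hi = np.percentile(boot_means, [5, 95])
          print(f"{k:14d} | {boot_means.mean():23.2f} | [{lo:.2f}, {hi:.2f}]")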

  14. Criticality qualification of a new Monte Carlo code for reactor core analysis

    International Nuclear Information System (INIS)

    In order to accurately simulate Accelerator Driven Systems (ADS), the utilization of at least two computational tools is necessary (the thermal-hydraulic problem is not considered in the frame of this work), namely: (a) A High Energy Physics (HEP) code system dealing with the 'Accelerator part' of the installation, i.e. the computation of the spectrum, intensity and spatial distribution of the neutrons source created by (p, n) reactions of a proton beam on a target and (b) a neutronics code system, handling the 'Reactor part' of the installation, i.e. criticality calculations, neutron transport, fuel burn-up and fission products evolution. In the present work, a single computational tool, aiming to analyze an ADS in its integrity and also able to perform core analysis for a conventional fission reactor, is proposed. The code is based on the well qualified HEP code GEANT (version 3), transformed to perform criticality calculations. The performance of the code is tested against two qualified neutronics code systems, the diffusion/transport SCALE-CITATION code system and the Monte Carlo TRIPOLI code, in the case of a research reactor core analysis. A satisfactory agreement was exhibited by the three codes.

  15. Monte Carlo shielding comparative analysis applied to TRIGA HEU and LEU spent fuel transport

    International Nuclear Information System (INIS)

    The paper is a comparative study of the effects of LEU (low enriched uranium) and HEU (highly enriched uranium) fuel utilization on the shielding analysis during spent fuel transport. A comparison against the measured data for HEU spent fuel, available from the last stage of spent fuel repatriation completed in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were considered according to the approved NAC-LWT cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface and in air at 1 m and 2 m from the cask by means of the 3-dimensional Monte Carlo MORSE-SGC code. Before loading into the shipping cask, TRIGA spent fuel source terms and spent fuel parameters were obtained by means of the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 program package. 60Co radioactivity is important for HEU spent fuel, while the actinide contribution to total fuel radioactivity is low. For LEU spent fuel, 60Co radioactivity is insignificant and the actinide contribution to total fuel radioactivity is high. Dose rates for both HEU and LEU fuel contents are below regulatory limits, the LEU spent fuel photon dose rates being greater than the HEU ones. The comparison between theoretical and measured dose rates for HEU spent fuel at selected measuring points shows good agreement, the calculated values being greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m from the cask surface (about 15% relative difference). (authors)

  16. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    International Nuclear Information System (INIS)

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

  17. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    International Nuclear Information System (INIS)

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation. (author)

  18. Criticality qualification of a new Monte Carlo code for reactor core analysis

    Energy Technology Data Exchange (ETDEWEB)

    Catsaros, N. [Institute of Nuclear Technology - Radiation Protection, NCSR ' DEMOKRITOS' , P.O. Box 60228, 15310 Aghia Paraskevi (Greece); Gaveau, B. [MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); Jaekel, M. [Laboratoire de Physique Theorique, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Maillard, J. [MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); CNRS-IDRIS, Bt 506, BP167, 91403 Orsay (France); CNRS-IN2P3, 3 rue Michel Ange, 75794 Paris (France); Maurel, G. [Faculte de Medecine, Universite Paris VI, 27 rue de Chaligny, 75012 Paris (France); MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); Savva, P., E-mail: savvapan@ipta.demokritos.g [Institute of Nuclear Technology - Radiation Protection, NCSR ' DEMOKRITOS' , P.O. Box 60228, 15310 Aghia Paraskevi (Greece); Silva, J. [MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); Varvayanni, M.; Zisis, Th. [Institute of Nuclear Technology - Radiation Protection, NCSR ' DEMOKRITOS' , P.O. Box 60228, 15310 Aghia Paraskevi (Greece)

    2009-11-15

    In order to accurately simulate Accelerator Driven Systems (ADS), the utilization of at least two computational tools is necessary (the thermal-hydraulic problem is not considered in the frame of this work), namely: (a) A High Energy Physics (HEP) code system dealing with the 'Accelerator part' of the installation, i.e. the computation of the spectrum, intensity and spatial distribution of the neutrons source created by (p, n) reactions of a proton beam on a target and (b) a neutronics code system, handling the 'Reactor part' of the installation, i.e. criticality calculations, neutron transport, fuel burn-up and fission products evolution. In the present work, a single computational tool, aiming to analyze an ADS in its integrity and also able to perform core analysis for a conventional fission reactor, is proposed. The code is based on the well qualified HEP code GEANT (version 3), transformed to perform criticality calculations. The performance of the code is tested against two qualified neutronics code systems, the diffusion/transport SCALE-CITATION code system and the Monte Carlo TRIPOLI code, in the case of a research reactor core analysis. A satisfactory agreement was exhibited by the three codes.

  19. Predicting Porosity and Permeability for the Canyon Formation, SACROC Unit (Kelly-Snyder Field), Using the Geologic Analysis via Maximum Likelihood System

    International Nuclear Information System (INIS)

    , with high vertical resolution, could be generated for many wells. This procedure makes it possible to populate any well location with core-scale estimates of porosity and permeability (P and P) and rock types, facilitating the application of geostatistical characterization methods. The first step of the procedure was to discriminate rock types of similar depositional environment and/or reservoir quality (RQ) using a specific clustering technique. The approach implemented utilized a model-based, probabilistic clustering analysis procedure called GAMLS (Geologic Analysis via Maximum Likelihood System), which is based on maximum likelihood principles. During clustering, samples (data at each digitized depth from each well) are probabilistically assigned to a previously specified number of clusters with a fractional probability that varies between zero and one.
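
    As a rough illustration of the clustering step described above (not taken from the cited work), the sketch below fits a Gaussian mixture by maximum likelihood and reports fractional cluster memberships for each depth sample. The log variables, numerical values and cluster count are invented placeholders, and scikit-learn's GaussianMixture is used only as a generic stand-in for the GAMLS procedure.

        # Minimal sketch of model-based probabilistic clustering of well-log samples,
        # in the spirit of maximum-likelihood clustering (GAMLS itself is not public).
        # Feature set, values and cluster count are illustrative assumptions.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Fake log responses (e.g. gamma ray, density, neutron porosity) for 3 rock types
        logs = np.vstack([
            rng.normal([30.0, 2.65, 0.05], 0.02 * np.array([30, 2.65, 0.05]), (200, 3)),
            rng.normal([75.0, 2.45, 0.18], 0.05 * np.array([75, 2.45, 0.18]), (200, 3)),
            rng.normal([120.0, 2.30, 0.30], 0.05 * np.array([120, 2.30, 0.30]), (200, 3)),
        ])

        gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(logs)
        membership = gmm.predict_proba(logs)   # fractional probabilities in [0, 1]; rows sum to 1
        print(membership[:5].round(3))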

  20. Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis

    Science.gov (United States)

    Peres, David Johnny; Cancelliere, Antonino

    2015-04-01

    Rainfall thresholds for landslide triggering are useful in early warning systems implemented in landslide-prone areas. Direct statistical analysis of historical records of rainfall and landslide data suffers from several shortcomings, typically due to incomplete landslide historical archives, imprecise knowledge of the triggering instants, the unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented. Such an approach helps overcome some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework combines a stochastic rainfall model with a hydrological model and a slope-stability model. Specifically, 1000-year-long hourly synthetic rainfall and related slope-stability factor-of-safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input, physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used. An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to the variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that the variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
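
    As a rough illustration of the ROC-based threshold-selection step described above (not the authors' implementation), the sketch below scans candidate intensity-duration thresholds of the form I = a*D^b against synthetic triggering/non-triggering events and keeps the one maximizing the true skill statistic. The event data and the "true" triggering rule are fabricated stand-ins for the TRIGRS-based factor of safety.

        # Sketch: pick an intensity-duration threshold I = a * D**b from synthetic events
        # by maximizing the true skill statistic (TSS = TPR - FPR). Data are fabricated.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        duration = rng.uniform(1.0, 72.0, n)                    # event duration [h]
        intensity = rng.lognormal(mean=0.0, sigma=0.8, size=n)  # mean intensity [mm/h]
        # Hypothetical "true" triggering rule standing in for a slope-stability model
        triggered = intensity * np.sqrt(duration) > 8.0

        best = None
        for a in np.linspace(0.5, 5.0, 46):
            for b in np.linspace(-0.9, -0.1, 41):
                predicted = intensity > a * duration**b
                tpr = np.mean(predicted[triggered])       # true positive rate
                fpr = np.mean(predicted[~triggered])      # false positive rate
                tss = tpr - fpr
                if best is None or tss > best[0]:
                    best = (tss, a, b)

        print("best TSS=%.3f for I = %.2f * D^%.2f" % best)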

  1. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. The application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insight into the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
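
    As a rough illustration of the second stage of this strategy (not the soybean model itself), the sketch below draws Monte Carlo samples of three parameters, runs a placeholder growth model, and fits a quadratic response surface by least squares. The parameter names, ranges and toy model are assumptions made only for the example.

        # Sketch: Monte Carlo sampling of parameters for a toy growth model, then a
        # quadratic response surface fitted by least squares. The model is a placeholder.
        import numpy as np

        rng = np.random.default_rng(42)

        def toy_model(p):
            # stand-in for a dynamic growth model: "final biomass" from 3 parameters
            k_photo, k_resp, leaf_frac = p.T
            return k_photo * leaf_frac / (0.1 + k_resp)

        n = 500
        params = np.column_stack([
            rng.uniform(0.8, 1.2, n),    # k_photo
            rng.uniform(0.05, 0.15, n),  # k_resp
            rng.uniform(0.3, 0.6, n),    # leaf_frac
        ])
        y = toy_model(params)

        # Quadratic response surface: constant, linear, cross and squared terms
        X = np.column_stack([np.ones(n), params,
                             params[:, 0] * params[:, 1],
                             params[:, 0] * params[:, 2],
                             params[:, 1] * params[:, 2],
                             params**2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("R^2 = %.4f" % (1 - np.var(y - X @ coef) / np.var(y)))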

  2. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method

    International Nuclear Information System (INIS)

    The Albedo method, applied to criticality calculations for nuclear reactors, is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena of neutron interaction with the core-reflector system through the determination of the probabilities of reflection, absorption, and transmission, and hence detailed assessments of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations on thermal reactors and shielding, the Albedo methodology is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. The Monte Carlo KENO IV code was used to analyze the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core. The one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used for comparison and analysis of the results obtained by the Albedo method. The keff results determined by the Albedo method for the analyzed reactor type showed excellent agreement: relative errors in keff smaller than 0.78% with respect to ANISN and smaller than 0.35% with respect to the diffusion method, showing the effectiveness of the Albedo method applied to criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations in both nonmultiplying and multiplying media. (author)
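
    As a rough illustration of the reflection/absorption/transmission probabilities on which an albedo-type treatment relies (not the method of the cited work), the sketch below runs an analog, one-speed Monte Carlo random walk with isotropic scattering through a 1-D slab. The cross sections and slab thickness are arbitrary illustrative values.

        # Sketch: analog one-speed Monte Carlo in a 1-D slab, tallying the reflection,
        # absorption and transmission probabilities used by an albedo-type treatment.
        # Cross sections and thickness are arbitrary illustrative values.
        import numpy as np

        rng = np.random.default_rng(7)
        sigma_t, sigma_s, thickness = 1.0, 0.7, 3.0   # total, scattering [1/cm]; slab [cm]
        n = 100_000
        counts = {"reflected": 0, "absorbed": 0, "transmitted": 0}

        for _ in range(n):
            x, mu = 0.0, 1.0                          # enters at left face, moving right
            while True:
                x += mu * rng.exponential(1.0 / sigma_t)   # sample free flight
                if x < 0.0:
                    counts["reflected"] += 1
                    break
                if x > thickness:
                    counts["transmitted"] += 1
                    break
                if rng.random() > sigma_s / sigma_t:       # collision is an absorption
                    counts["absorbed"] += 1
                    break
                mu = 2.0 * rng.random() - 1.0              # isotropic re-emission in mu

        print({k: v / n for k, v in counts.items()})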

  3. On the feasibility of a homogenised multi-group Monte Carlo method in reactor analysis

    International Nuclear Information System (INIS)

    The use of homogenised multi-group cross sections to speed up Monte Carlo calculation has been studied to some extent, but the method is not widely implemented in modern calculation codes. This paper presents a calculation scheme in which homogenised material parameters are generated using the PSG continuous-energy Monte Carlo reactor physics code and used by MORA, a new full-core Monte Carlo code entirely based on homogenisation. The theory of homogenisation and its implementation in the Monte Carlo method are briefly introduced. The PSG-MORA calculation scheme is put to practice in two fundamentally different test cases: a small sodium-cooled fast reactor (JOYO) and a large PWR core. It is shown that the homogenisation results in a dramatic increase in efficiency. The results are in a reasonably good agreement with reference PSG and MCNP5 calculations, although fission source convergence becomes a problem in the PWR test case. (authors)

  4. Experience with Monte Carlo variance reduction using adjoint solutions in HYPER neutronics analysis

    International Nuclear Information System (INIS)

    The variance reduction techniques using adjoint solutions are applied to the Monte Carlo calculation of the HYPER(HYbrid Power Extraction Reactor) core neutronics. The applied variance reduction techniques are the geometry splitting and the weight windows. The weight bounds and the cell importance needed for these techniques are generated from an adjoint discrete ordinate calculation by the two-dimensional TWODANT code. The flux distribution variances of the Monte Carlo calculations by these variance reduction techniques are compared with the results of the standard Monte Carlo calculations. It is shown that the variance reduction techniques using adjoint solutions to the HYPER core neutronics result in a decrease in the efficiency of the Monte Carlo calculation
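
    As a rough illustration of the weight-window game mentioned above (splitting above the upper bound, Russian roulette below the lower bound), the sketch below applies cell-wise bounds to a single particle. In practice the bounds would come from an adjoint-based importance map such as the TWODANT calculation described in the record; here the bounds, survival weight and Particle structure are illustrative assumptions.

        # Sketch of the weight-window game applied to one particle entering a cell.
        # Bounds would normally come from an adjoint importance map; values here are
        # illustrative, and Particle is a stand-in data structure.
        import random
        from dataclasses import dataclass, replace
        from typing import List

        @dataclass
        class Particle:
            weight: float
            # position, direction, energy ... omitted in this sketch

        def apply_weight_window(p: Particle, w_low: float, w_up: float,
                                survival_weight: float) -> List[Particle]:
            if p.weight > w_up:                       # split into lower-weight copies
                n_split = min(int(p.weight / w_up) + 1, 10)
                return [replace(p, weight=p.weight / n_split) for _ in range(n_split)]
            if p.weight < w_low:                      # Russian roulette
                if random.random() < p.weight / survival_weight:
                    return [replace(p, weight=survival_weight)]
                return []                             # particle killed
            return [p]                                # inside the window: unchanged

        random.seed(0)
        bank = apply_weight_window(Particle(weight=5.0), w_low=0.5, w_up=2.0, survival_weight=1.0)
        print([round(q.weight, 3) for q in bank])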

  5. Analysis of some splitting and roulette algorithms in shield calculations by the Monte Carlo method

    International Nuclear Information System (INIS)

    Different schemes of using the splitting and roulette methods in calculations of radiation transport through nuclear facility shields by the Monte Carlo method are considered. The efficiency of the considered schemes is estimated using test calculations as an example

  6. MULTI-KENO: a Monte Carlo code for criticality safety analysis

    International Nuclear Information System (INIS)

    Modifying the Monte Carlo code KENO-IV, the MULTI-KENO code was developed for criticality safety analysis. The following functions were added to the code: (1) to divide a system into many sub-systems named super boxes, where the size of the box types in each super box can be selected independently, (2) to output a graphical view of a system for examining geometrical input data, (3) to solve fixed source problems, (4) to permit intersection of core boundaries and inner geometries, and (5) to output an ANISN-type neutron balance table. With function (1), many cases that previously had to use the general geometry option of KENO-IV can be treated as box-type geometry. In such cases the input data are simpler and the required computer time is shorter than with KENO-IV. The code is now available for the FACOM-M200 computer and the CDC 6600 computer. This report is a computer code manual for MULTI-KENO. (author)

  7. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    International Nuclear Information System (INIS)

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility of constraining the unknown WIMP parameters, both from particle physics (mass and cross section) and from the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.
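
    As a rough illustration of the Markov chain Monte Carlo machinery used in such analyses (not the directional-detection likelihood of the cited work), the sketch below runs a random-walk Metropolis-Hastings chain on a toy two-parameter posterior. The "true" values, step sizes and the Gaussian placeholder likelihood are assumptions.

        # Sketch: random-walk Metropolis-Hastings for a toy 2-parameter posterior.
        # The Gaussian "log_posterior" is a placeholder for a real recoil-event
        # likelihood in (WIMP mass, cross-section scale); values are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        truth = np.array([50.0, 1.0])          # e.g. mass [GeV], cross-section scale
        cov_inv = np.linalg.inv(np.diag([25.0, 0.04]))

        def log_posterior(theta):
            d = theta - truth
            return -0.5 * d @ cov_inv @ d

        theta = np.array([80.0, 2.0])          # starting point
        step = np.array([3.0, 0.1])
        chain = []
        for _ in range(20_000):
            proposal = theta + step * rng.standard_normal(2)
            if np.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
                theta = proposal
            chain.append(theta.copy())

        chain = np.array(chain[5000:])         # discard burn-in
        print("posterior mean:", chain.mean(axis=0).round(2))
        print("posterior std :", chain.std(axis=0).round(2))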

  8. Monte Carlo burnup analysis code development and application to an incore thermionic space nuclear power system

    International Nuclear Information System (INIS)

    In the design of the incore thermionic reactor system developed under the Advanced Thermionic Initiative (ATI), the fuel is highly enriched uranium dioxide and the moderating medium is zirconium hydride. The traditional burnup and fuel depletion analysis codes have been found to be inadequate for these calculations, largely because of the material and geometry modeled, and because the neutron spectra assumed for codes such as LEOPARD and ORIGEN do not even closely fit that of a small thermal reactor using ZrH as moderator. More sophisticated codes, such as the transport lattice code WIMS, often lack some materials, such as ZrH. Thus a new method is needed which can accurately calculate the neutron spectrum and the appropriate reaction rates within the fuel element. The method developed interconnects the accuracy of the Monte Carlo Neutron/Photon (MCNP) method, used to calculate reaction rates for the important isotopes, with a time-dependent depletion routine that calculates the temporal effects on isotope concentrations. This effort required the modification of MCNP itself to perform the additional task of accomplishing burnup calculations. The modified version, called MCNPBURN, evolved into a general dual-purpose code which can be used for standard calculations as well as for burn-up
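
    As a rough illustration of the depletion step that such a coupled scheme performs between transport solutions (not the MCNPBURN implementation), the sketch below advances nuclide concentrations by solving dN/dt = A N over a time step with a matrix exponential, using one-group reaction rates of the kind a Monte Carlo run would supply. The three-nuclide chain, flux and cross sections are illustrative numbers only.

        # Sketch of one depletion step: advance nuclide concentrations N by solving
        # dN/dt = A N with a matrix exponential, using one-group reaction rates that a
        # Monte Carlo transport step would supply. Chain and numbers are illustrative.
        import numpy as np
        from scipy.linalg import expm

        phi = 3.0e13                                  # flux [n/cm^2/s], from the MC solution
        sig_u235, sig_u238, sig_pu239 = 680e-24, 2.7e-24, 1010e-24   # absorption [cm^2]
        # Toy chain: U-235 loss, U-238 loss, Pu-239 built from U-238 capture
        # (the intermediate U-239/Np-239 decays are collapsed for brevity)
        A = np.array([
            [-sig_u235 * phi, 0.0,              0.0],
            [0.0,             -sig_u238 * phi,  0.0],
            [0.0,              sig_u238 * phi, -sig_pu239 * phi],
        ])

        N0 = np.array([1.0e21, 2.0e22, 0.0])          # atoms/cm^3
        dt = 30 * 24 * 3600.0                         # 30-day step [s]
        N1 = expm(A * dt) @ N0
        print(N1)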

  9. Monte Carlo analysis of the MEGA microlensing events towards M31

    CERN Document Server

    Ingrosso, G; De Paolis, F; Jetzer, P; Nucita, A A; Strafella, F; Jetzer, Ph.

    2005-01-01

    We perform an analytical study and a Monte Carlo (MC) analysis of the main features for microlensing events in pixel lensing observations towards M31. Our main aim is to investigate the lens nature and location of the 14 candidate events found by the MEGA collaboration. Assuming a reference model for the mass distribution in M31 and the standard model for our galaxy, we estimate the MACHO-to-self lensing probability and the event time duration towards M31. Reproducing the MEGA observing conditions, as a result we get the MC event number density distribution as a function of the event full-width half-maximum duration $t_{1/2}$ and the magnitude at maximum $R_{\\mathrm {max}}$. For a MACHO mass of $0.5 M_{\\odot}$ we find typical values of $t_{1/2} \\simeq 20$ day and $R_{\\mathrm {max}} \\simeq 22$, for both MACHO-lensing and self-lensing events occurring beyond about 10 arcminutes from the M31 center. A comparison of the observed features ($t_{1/2}$ and $R_{\\mathrm {max}}$) with our MC results shows that for a MAC...

  10. The Null Space Monte Carlo Uncertainty Analysis of Heterogeneity for Preferential Flow Simulation

    Science.gov (United States)

    Ghasemizade, M.; Radny, D.; Schirmer, M.

    2014-12-01

    Preferential flow paths can have a huge impact on the amount and timing of runoff generation, particularly in areas where subsurface flow dominates this process. In order to simulate preferential flow mechanisms, many different approaches have been suggested. However, the efficiency of such approaches is rarely investigated in a predictive sense. The main reason is that the models used to simulate preferential flows require many parameters, which can lead to a dramatic increase in model run times, especially for highly nonlinear models that are themselves demanding. In this research we attempted to simulate the daily recharge values of a weighing lysimeter, including preferential flows, with the 3-D physically based model HydroGeoSphere. To accomplish that, we used the matrix pore concept with varying hydraulic conductivities within the lysimeter to represent heterogeneity. It was assumed that spatially correlated heterogeneity is the main driver triggering preferential flow paths. In order to capture the spatial distribution of hydraulic conductivity values we used pilot points and geostatistical model structures. Since the hydraulic conductivity values at each pilot point act as parameters, the model is highly parameterized. We therefore used the robust and newly developed null space Monte Carlo method to analyze the uncertainty of the model outputs. Results of the uncertainty analysis show that the pilot point method is reliable for representing preferential flow paths.
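
    As a rough illustration of the projection step at the heart of null-space Monte Carlo (not the PEST/HydroGeoSphere workflow of the cited study), the sketch below projects random perturbations of a calibrated, highly parameterized field onto the approximate null space of a model Jacobian, so that the fit to observations is preserved to first order. The Jacobian, dimensions and singular-value threshold are synthetic assumptions.

        # Sketch of the null-space Monte Carlo projection: random perturbations of the
        # calibrated parameters are restricted to the approximate null space of the
        # Jacobian, so simulated observations are (to first order) unchanged.
        import numpy as np

        rng = np.random.default_rng(11)
        n_obs, n_par = 40, 120                         # more parameters (pilot points) than observations
        J = rng.standard_normal((n_obs, n_par))        # stand-in Jacobian d(obs)/d(par)
        p_cal = rng.standard_normal(n_par)             # calibrated parameter set

        U, s, Vt = np.linalg.svd(J, full_matrices=True)
        n_sol = int(np.sum(s > 1e-8 * s[0]))           # dimension of the solution space
        V_null = Vt[n_sol:].T                          # basis of the (approximate) null space

        realizations = []
        for _ in range(100):
            dp = rng.standard_normal(n_par)            # stochastic parameter-field perturbation
            dp_null = V_null @ (V_null.T @ dp)         # keep only the null-space component
            realizations.append(p_cal + dp_null)

        # First-order change in simulated observations stays ~0 for each realization
        print(np.max(np.abs(J @ (realizations[0] - p_cal))))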

  11. Benchmark analysis of TRIGA mark II reactivity experiment using a continuous energy Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    The benchmark analysis of reactivity experiments in the TRIGA-II core at the Musashi Institute of Technology Research Reactor (Musashi reactor; 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. The reactivity worth and integral reactivity curves of the control rods, as well as the reactivity worth distributions of the fuel and graphite elements, were used in the validation process of the physical model and the neutron cross section data from the ENDF/B-V evaluation. The calculated integral reactivity curves of the control rods were in agreement with the experimental data obtained by the period method. The integral worth measured by the rod drop method was also consistent with the calculation. The calculated fuel and graphite element worth distributions were consistent with the measured ones within the statistical error estimates. These results showed that the exact core configuration, including the control rod positions, must be introduced into the calculational model in order to reproduce the fission source distribution of the experiment and obtain a precise solution. It can be concluded that our simulation model of the TRIGA-II core is precise enough to reproduce the control rod worths and the fuel and graphite element reactivity worth distributions. (author)

  12. The use of Monte Carlo analysis for exposure assessment of an estuarine food web

    Energy Technology Data Exchange (ETDEWEB)

    Iannuzzi, T.J.; Shear, N.M.; Harrington, N.W.; Henning, M.H. [McLaren/Hart Environmental Engineering Corp., Portland, ME (United States). ChemRisk Div.

    1995-12-31

    Despite apparent agreement within the scientific community that probabilistic methods of analysis offer substantially more informative exposure predictions than the traditional point estimate approach, few risk assessments conducted or approved by state and federal regulatory agencies have used probabilistic methods. Among the likely deterrents to the application of probabilistic methods to ecological risk assessment is the absence of "standard" data distributions that are considered applicable to most conditions for a given ecological receptor. Indeed, point estimates of ecological exposure factor values for a limited number of wildlife receptors have only recently been published. The Monte Carlo method of probabilistic modeling has received increasing support as a promising technique for characterizing uncertainty and variation in estimates of exposure to environmental contaminants. An evaluation of the literature on the behavior, physiology, and ecology of estuarine organisms was conducted in order to identify those variables that most strongly influence uptake of xenobiotic chemicals from sediments, water and food sources. The ranges, central tendencies, and distributions of several key parameter values for polychaetes (Nereis sp.), mummichog (Fundulus heteroclitus), blue crab (Callinectes sapidus), and striped bass (Morone saxatilis) in east coast estuaries were identified. Understanding the variation in such factors, which include feeding rate, growth rate, feeding range, excretion rate, respiration rate, body weight, lipid content, food assimilation efficiency, and chemical assimilation efficiency, is critical to understanding the mechanisms that control the uptake of xenobiotic chemicals in aquatic organisms, and to the ability to estimate bioaccumulation from chemical exposures in the aquatic environment.
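
    As a rough illustration of the Monte Carlo propagation such an assessment performs (not the published parameter values for the species listed above), the sketch below samples a few exposure-factor distributions and pushes them through a simple first-order uptake/elimination expression to obtain a body-burden distribution. The distributions and the formula are illustrative placeholders.

        # Sketch: Monte Carlo propagation of exposure-factor distributions through a
        # simple dietary uptake expression for a fish receptor. Distributions and the
        # formula are illustrative placeholders, not species-specific published values.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 50_000
        feeding_rate = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=n)   # kg food / kg bw / day
        prey_conc   = rng.lognormal(mean=np.log(0.5),  sigma=0.6, size=n)    # mg chemical / kg prey
        assim_eff   = rng.uniform(0.3, 0.9, size=n)                          # chemical assimilation efficiency
        elim_rate   = rng.lognormal(mean=np.log(0.01), sigma=0.4, size=n)    # 1 / day

        # Steady-state body burden for a first-order uptake/elimination model
        body_burden = feeding_rate * prey_conc * assim_eff / elim_rate       # mg / kg body weight

        print("median = %.2f, 95th percentile = %.2f mg/kg" %
              (np.median(body_burden), np.percentile(body_burden, 95)))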

  13. Application of the Monte Carlo thermal design analysis to evaluate uncertainties of the PWR core using the THALES subchannel code

    International Nuclear Information System (INIS)

    In order to maintain the safety of the reactor core, the minimum DNBR (Departure from Nucleate Boiling Ratio) in the PWR (Pressurized-Water Reactor) core must remain higher than the DNBR limit during Condition I and II events. It is therefore important to adequately evaluate the thermal performance of the PWR core. To realistically evaluate the relationship among the uncertainties and reduce the conservatism resulting from unknown phenomena, the Monte Carlo method is being used in many areas requiring a statistical approach. In particular, the Monte Carlo method is drawing attention as a method for evaluating the thermal performance of the PWR core. For the best-estimate evaluation of the uncertainties in the PWR core, KEPCO Nuclear Fuel (hereinafter KEPCO NF) has been developing a thermal design analysis based on the Monte Carlo method, for which various studies are conducted as follows. To generate Gaussian random numbers, several random number generators are investigated; in this paper the Box-Muller, Polar, GRAND, and Ziggurat methods are briefly reviewed. The random numbers are generated on the basis of the nominal value and uncertainty of each parameter. If the normal distribution is acceptable at the 5% significance level according to normality tests, the random numbers are used for the Monte Carlo thermal design analysis. Using the subchannel code THALES (Thermal Hydraulic AnaLyzer for Enhanced Simulation of core) developed by KEPCO NF, the subchannel analyses are carried out with the core operating parameters randomized, and the DNBR distribution is derived. Finally, if the DNBR distribution is statistically combined with the uncertainties of the other parameters, the DNBRT distribution can be obtained. From the DNBRT distribution, the DNBR limit is determined to avoid DNB (Departure from Nucleate Boiling) with 95% probability at a 95% confidence level. Through the example calculation, it is verified that
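
    As a rough illustration of the first of the random number generators mentioned above, the sketch below implements the Box-Muller transform, which converts two independent uniform deviates into two independent standard normal deviates; these would then be scaled by a parameter's nominal value and uncertainty. The nominal value and sigma used here are invented placeholders.

        # Sketch of the Box-Muller transform: two independent U(0,1) deviates give two
        # independent N(0,1) deviates, later scaled to a parameter's nominal value and
        # standard uncertainty (values here are illustrative).
        import math
        import random

        def box_muller(rand=random.random):
            u1, u2 = rand(), rand()
            r = math.sqrt(-2.0 * math.log(1.0 - u1))     # 1 - u1 avoids log(0)
            z1 = r * math.cos(2.0 * math.pi * u2)
            z2 = r * math.sin(2.0 * math.pi * u2)
            return z1, z2

        random.seed(0)
        nominal, sigma = 15.5, 0.2                       # placeholder operating parameter
        samples = [nominal + sigma * box_muller()[0] for _ in range(10_000)]
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
        print(mean, var ** 0.5)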

  14. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    International Nuclear Information System (INIS)

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes

  15. Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method

    International Nuclear Information System (INIS)

    This study examines the quantitative evaluation of the maintenance activities of a nuclear power plant by the Monte Carlo simulation method. For this purpose, the concept of quantitative evaluation of maintenance, whose examination has been advanced in the Japan Society of Maintenology and the International Institute of Universality (IUU), is first summarized. A basic examination of the quantitative evaluation of maintenance was then carried out for a simple feed-water system by the Monte Carlo simulation method. (author)

  16. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  17. Analysis of void coefficient in fast spectrum BWR core with Monte Carlo code 'MVP'

    International Nuclear Information System (INIS)

    An innovative large BWR core concept has been proposed, aiming at fuel breeding as well as a negative void reactivity coefficient. The core consists of two types of MOX fuel assemblies. One is a triangular tight-lattice bundle with an active core height of 1.6 m, and the other is the same bundle with a height of 0.8 m. The ratio of flow area to fuel area of the bundle is set at about 0.5 in order to increase the breeding ratio. A neutron-streaming channel, consisting of a cavity-can containing helium gas and a flow gap between the cavity-can and the channel box, is located above each short bundle. It will decrease the void reactivity coefficient by enhancing neutron leakage from the core when the void fraction is increased in the flow gap. A core composed of tight-lattice bundles provides a much harder neutron spectrum than that of conventional BWRs but a slightly softer one than that of typical FBRs. The cavity-can and the flow gap will cause a steep gradient of the neutron flux. The neutronics of such a complicated core structure could not be properly analyzed by conventional analysis methods. In particular, the analysis of the void reactivity coefficient requires a sophisticated method because it deals with a small change in core composition. In the analysis of the void reactivity coefficient, we adopted the three-dimensional Monte Carlo code 'MVP', which has been developed by JAERI and has many advantages such as an easy input form for lattice structures, a short run time and a continuous neutron energy method. The continuous neutron energy method is important for the analysis of this core because fission reactions occur mainly in the resonance energy region, where the evaluation of accurate cross sections is difficult with conventional methods. The library used is JENDL-3.2. The multi-layer structure of lattices is also essential for the analysis because the hard spectrum and relatively long neutron mean free path require modeling of the full core with many bundles. The analysis indicates that

  18. Effects of stochastic noise on a three-dimensional Monte Carlo depletion analysis of the H.B. Robinson reactor

    International Nuclear Information System (INIS)

    Monte Carlo depletion calculations for nuclear reactors are affected by the presence of stochastic noise in the local flux estimates produced during the calculation. The effects of this random noise and its propagation between timesteps during long depletion simulations are not well understood. To improve this understanding, a series of Monte Carlo depletion simulations have been conducted for a 3-D, eighth-core model of the H.B. Robinson PWR. The studies were performed by using the in-line depletion capability of the MC21 Monte Carlo code to produce multiple independent depletion simulations. Global and local results from each simulation are compared in order to determine the variance among the different depletion realizations. These comparisons indicate that global quantities, such as eigenvalue (keff), do not tend to diverge among the independent depletion calculations. However, local quantities, such as fuel concentration, can deviate wildly between independent depletion realizations, especially at high burnup levels. Analysis and discussion of the results from the study are provided, along with several new observations regarding the propagation of random noise during Monte Carlo depletion calculations. (author)

  19. Analysis of possibility to apply new mathematical methods (R-function theory) in Monte Carlo simulation of complex geometry

    International Nuclear Information System (INIS)

    This analysis is part of the report on 'Implementation of the geometry module of the 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out some problems in solving complex three-dimensional models, which induce the need for developing more efficient geometry modules for Monte Carlo calculations. The second part includes the formulation of the problem and of the geometry module. Two fundamental questions to be solved are defined: (1) for a given point, determine the material region or boundary to which it belongs, and (2) for a given direction, determine all intersection points with material regions. The third part deals with the possible connection with Monte Carlo calculations for the computer simulation of geometry objects. R-function theory enables the creation of a geometry module based on the same logic as constructive geometry codes (complex regions are constructed by set operations on elementary regions). R-functions can efficiently replace functions of three-valued logic in all significant models. They are even more appropriate for application since three-valued logic is not typical for digital computers, which operate in two-valued logic. This shows that there is a need for work in this field. It is shown that an interactive code for computer modeling of geometry objects can be developed in parallel with the development of the geometry module
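
    As a rough illustration of how R-functions encode set operations on implicitly defined regions (a generic sketch, not the module described in the report), the example below uses the common R0 system, in which intersection and union of two regions with defining functions f1, f2 are represented by f1 + f2 - sqrt(f1^2 + f2^2) and f1 + f2 + sqrt(f1^2 + f2^2), with the convention that f >= 0 means "inside". The primitive shapes are assumptions chosen for the example.

        # Sketch of R-functions (R0 system): set operations on implicitly defined regions
        # are replaced by real-valued functions whose sign tells inside (>= 0) from
        # outside (< 0). The primitives below are illustrative.
        import numpy as np

        def r_and(f1, f2):   # intersection
            return f1 + f2 - np.sqrt(f1**2 + f2**2)

        def r_or(f1, f2):    # union
            return f1 + f2 + np.sqrt(f1**2 + f2**2)

        def sphere(p, radius=1.0):           # >= 0 inside the sphere
            return radius**2 - np.sum(np.asarray(p)**2)

        def half_space_z(p):                 # >= 0 for z >= 0
            return np.asarray(p)[2]

        # Upper half-ball = sphere AND half-space; test a few points
        for point in [(0.0, 0.0, 0.5), (0.0, 0.0, -0.5), (2.0, 0.0, 0.5)]:
            f = r_and(sphere(point), half_space_z(point))
            print(point, "inside" if f >= 0 else "outside")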

  20. Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code

    International Nuclear Information System (INIS)

    The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module) which will be installed in ITER (International Thermonuclear Experimental Reactor). The project analyzes the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, numerical experiments are conducted to analyze the neutronic design of the Korean HCML TBM and the DEMO fusion blanket, and to improve the nuclear performance. The results of the numerical experiments performed in this project will be further utilized for a design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations evaluating the TBR (Tritium Breeding Ratio) and the EMF (Energy Multiplication Factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in the ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the activation and shielding analysis, the activity drops to 1.5% of its initial value and the decay heat drops to 0.02% of its initial amount 10 years after plasma shutdown

  1. Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5

    International Nuclear Information System (INIS)

    A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of the HPGe detector using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and to the loss of intrinsic properties in aging detectors. Some parameters, such as the active volume or the Ge dead layer thickness, are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and of a petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained and the efficiency curve can be calculated from the net peak areas, taking into account the certified activity of the source. In order to avoid errors due to the net area calculation, the simulated PHD is treated using the GammaVision software. Furthermore, it is proposed to use the Noether-Wilks formula to perform an uncertainty analysis of the model, with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low energy range of the efficiency curve. In the high energy range (from 300 to 1800 keV) the main contribution to the absolute uncertainty is due to variations in the active volume. (author)

  2. Monte Carlo shielding comparative analysis applied to TRIGA HEU and LEU spent fuel transport

    Energy Technology Data Exchange (ETDEWEB)

    Margeanu, C. A.; Iorgulis, C. [Reactor Physics, Nuclear Fuel Performances and Nuclear Safety Department, Institute for Nuclear Research Pitesti, P.O Box 78, Pitesti (Romania); Margeanu, S. [Radiation Protection Department, Institute for Nuclear Research Pitesti, Pitesti (Romania); Barbos, D. [TRIGA Research Reactor Department, Institute for Nuclear Research Pitesti, Pitesti (Romania)

    2009-07-01

    The paper is a comparative study of the effects of LEU (low enriched uranium) and HEU (highly enriched uranium) fuel utilization on the shielding analysis during spent fuel transport. A comparison against the measured data for HEU spent fuel, available from the last stage of spent fuel repatriation completed in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were taken from the approved NAC-LWT cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface and in air at 1 m and 2 m from the cask by means of the three-dimensional Monte Carlo MORSE-SGC code. Before loading into the shipping cask, the TRIGA spent fuel source terms and spent fuel parameters were obtained by means of the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 program package. 60Co radioactivity is important for HEU spent fuel, while the actinide contribution to the total fuel radioactivity is low. For LEU spent fuel, 60Co radioactivity is insignificant and the actinide contribution to the total fuel radioactivity is high. Dose rates for both HEU and LEU fuel contents are below regulatory limits, the LEU spent fuel photon dose rates being greater than the HEU ones. The comparison between theoretical and measured HEU spent fuel dose rates at selected measuring points shows good agreement, the calculated values being greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m from the cask surface (about 15% relative difference). (authors)

  3. An Evaluation of the Adjoint Flux Using the Collision Probability Method for the Hybrid Monte Carlo Radiation Shielding Analysis

    International Nuclear Information System (INIS)

    It is noted that the analog Monte Carlo method has low calculation efficiency in deep penetration problems such as radiation shielding analysis. In order to increase the calculation efficiency, variance reduction techniques have been introduced and applied to shielding calculations. To optimize the variance reduction technique, the hybrid Monte Carlo method was introduced. For the determination of the parameters used in the hybrid Monte Carlo method, the adjoint flux should be calculated by deterministic methods. In this study, the collision probability method is applied to calculate the adjoint flux. The solution of the integral transport equation in the collision probability method is modified to calculate the adjoint flux approximately, even for complex and arbitrary geometries, and a C++ program was developed for this calculation. Using the calculated adjoint flux, importance parameters of each cell in the shielding material are determined and used for variance reduction of the transport calculation. In order to evaluate the calculation efficiency of the proposed method, shielding calculations are performed with MCNPX 2.7. In summary, a method to calculate the adjoint flux for Monte Carlo variance reduction was proposed to improve the Monte Carlo calculation efficiency of thick shielding problems: the importance parameter for each cell of the shielding material is determined by calculating the adjoint flux with the modified collision probability method. The results show that the proposed method can efficiently increase the FOM of the transport calculation. It is expected that the proposed method can be utilized to improve the calculation efficiency of thick shielding calculations

  4. Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope

    Science.gov (United States)

    Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao

    2015-10-01

    The X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems in dealing with the XPT. One is that the energy and reflectivity information of the X-rays cannot be taken into consideration, which misrepresents the essence of the XPT. The other is that the coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. First, both the energy and reflectivity information of the X-rays are taken into consideration simultaneously, and the thermal-structural coupling equation and the multiphysics coupling analysis model are formulated based on the finite element method; the thermal-structural coupling analysis under different working conditions is then implemented. Second, the mirror deformations are obtained using a construction geometry function, and a polynomial function is adopted to fit the deformed mirror and to evaluate the fitting error. Third, the focusing performance of the XPT is evaluated by the RMS. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the variation law of the deformation effect on the focusing performance has been obtained. The focusing performances for the thermal-structural, thermal and structural deformations are degraded by 30.01%, 14.35% and 7.85%, respectively, with RMS dispersion spots of 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through

  5. Monte Carlo transport calculations and analysis for reactor pressure vessel neutron fluence

    International Nuclear Information System (INIS)

    The application of Monte Carlo methods for reactor pressure vessel (RPV) neutron fluence calculations is examined. As many commercial nuclear light water reactors approach the end of their design lifetime, it is of great consequence that reactor operators and regulators be able to characterize the structural integrity of the RPV accurately for financial reasons, as well as safety reasons, due to the possibility of plant life extensions. The Monte Carlo method, which offers explicit three-dimensional geometric representation and continuous energy and angular simulation, is well suited for this task. A model of the Three Mile Island unit 1 reactor is presented for determination of RPV fluence; Monte Carlo (MCNP) and deterministic (DORT) results are compared for this application; and numerous issues related to performing these calculations are examined. Synthesized three-dimensional deterministic models are observed to produce results that are comparable to those of Monte Carlo methods, provided the two methods utilize the same cross-section libraries. Continuous energy Monte Carlo methods are shown to predict more (15 to 20%) high-energy neutrons in the RPV than deterministic methods

  6. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, an accelerator, or a fusion facility may require accurate modeling of its complicated geometry. Manual modeling through the text interface of a MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of computer aided design (CAD) systems. There have been two kinds of approaches to develop MC code systems utilizing CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modifications, but implies latent data inconsistency due to the differences between the geometry modeling systems. In the second approach, a MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter the CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module

  7. Monte Carlo Neutronics and Thermal Hydraulics Analysis of Reactor Cores with Multilevel Grids

    Science.gov (United States)

    Bernnat, W.; Mattes, M.; Guilliard, N.; Lapins, J.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2014-06-01

    Power reactors are composed of assemblies with fuel pin lattices or other repeated structures with several grid levels, which can be modeled in detail by Monte Carlo neutronics codes such as MCNP6 using corresponding lattice options, even for large cores. Except for fresh cores at beginning of life, there is a varying material distribution due to burnup in the different fuel pins. Additionally, for power states the fuel and moderator temperatures and moderator densities vary according to the power distribution and cooling conditions. Therefore, a coupling of the neutronics code with a thermal hydraulics code is necessary. Depending on the level of detail of the analysis, a very large number of cells with different materials and temperatures must be regarded. The assignment of different material properties to all elements of a multilevel grid is very elaborate and may exceed program limits if the standard input procedure is used. Therefore, an internal assignment is used which overrides uniform input parameters. The temperature dependency of continuous energy cross sections, probability tables for the unresolved resonance region and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. The method is applied with MCNP6 and proven for several full core reactor models. For the coupling of MCNP6 with thermal hydraulics appropriate interfaces were developed for the GRS system code ATHLET for liquid coolant and the IKE thermal hydraulics code ATTICA-3D for gaseous coolant. Examples will be shown for different applications for PWRs with square and hexagonal lattices, fast reactors (SFR) with hexagonal lattices and HTRs with pebble bed and prismatic lattices.

  8. Dynamic fault tree analysis using Monte Carlo simulation in probabilistic safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Durga Rao, K. [Bhabha Atomic Research Centre, Mumbai (India)], E-mail: durga_k_rao@yahoo.com; Gopika, V.; Sanyasi Rao, V.V.S.; Kushwaha, H.S. [Bhabha Atomic Research Centre, Mumbai (India); Verma, A.K.; Srividya, A. [Indian Institute of Technology Bombay, Mumbai (India)

    2009-04-15

    Traditional fault tree (FT) analysis is widely used for reliability and safety assessment of complex and critical engineering systems. The behavior of components of complex systems and their interactions, such as sequence- and functional-dependent failures, spares and dynamic redundancy management, and the priority of failure events, cannot be adequately captured by traditional FTs. Dynamic fault trees (DFTs) extend traditional FTs by defining additional gates, called dynamic gates, to model these complex interactions. Markov models are used in solving dynamic gates. However, the state space becomes too large for calculation with Markov models when the number of gate inputs increases. In addition, the Markov model is applicable only to exponential failure and repair distributions. Modeling test and maintenance information on spare components is also very difficult. To address these difficulties, a Monte Carlo simulation-based approach is used in this work to solve dynamic gates. The approach is first applied to a problem available in the literature which has non-repairable components. The obtained results are in good agreement with those in the literature. The approach is then applied to a simplified scheme of the electrical power supply system of a nuclear power plant (NPP), which is a complex repairable system with tested and maintained spares. The results obtained using this approach are in good agreement with those obtained using an analytical approach. In addition to point estimates of reliability measures, failure time and repair time distributions are also obtained from the simulation. Finally, a case study on the reactor regulation system (RRS) of an NPP is carried out to demonstrate the application of the simulation-based DFT approach to large-scale problems.
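
    As a rough illustration of how a dynamic gate can be evaluated by simulation rather than by a Markov model (a generic sketch, not the authors' code), the example below estimates the unreliability of a cold-spare gate: the spare starts aging only when the primary fails, and the gate fails if both have failed within the mission time. The failure rates and mission time are illustrative, and repair, test and maintenance are ignored.

        # Sketch: Monte Carlo solution of a cold-spare dynamic gate. The spare starts
        # aging only when the primary fails; the gate fails when both have failed within
        # the mission time. Rates and mission time are illustrative; repair is ignored.
        import random

        random.seed(2)
        lam_primary, lam_spare = 1.0e-3, 2.0e-3      # failure rates [1/h]
        mission_time = 1000.0                        # h
        n = 200_000

        failures = 0
        for _ in range(n):
            t_primary = random.expovariate(lam_primary)
            if t_primary > mission_time:
                continue                              # primary survives: gate cannot fail
            t_spare = random.expovariate(lam_spare)   # spare lifetime starts at t_primary
            if t_primary + t_spare <= mission_time:
                failures += 1

        print("gate unreliability ~ %.4f" % (failures / n))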

  9. Dynamic fault tree analysis using Monte Carlo simulation in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Traditional fault tree (FT) analysis is widely used for reliability and safety assessment of complex and critical engineering systems. The behavior of components of complex systems and their interactions, such as sequence- and functional-dependent failures, spares and dynamic redundancy management, and the priority of failure events, cannot be adequately captured by traditional FTs. Dynamic fault trees (DFTs) extend traditional FTs by defining additional gates, called dynamic gates, to model these complex interactions. Markov models are used in solving dynamic gates. However, the state space becomes too large for calculation with Markov models when the number of gate inputs increases. In addition, the Markov model is applicable only to exponential failure and repair distributions. Modeling test and maintenance information on spare components is also very difficult. To address these difficulties, a Monte Carlo simulation-based approach is used in this work to solve dynamic gates. The approach is first applied to a problem available in the literature which has non-repairable components. The obtained results are in good agreement with those in the literature. The approach is then applied to a simplified scheme of the electrical power supply system of a nuclear power plant (NPP), which is a complex repairable system with tested and maintained spares. The results obtained using this approach are in good agreement with those obtained using an analytical approach. In addition to point estimates of reliability measures, failure time and repair time distributions are also obtained from the simulation. Finally, a case study on the reactor regulation system (RRS) of an NPP is carried out to demonstrate the application of the simulation-based DFT approach to large-scale problems

  10. Romania Monte Carlo Methods Application to CANDU Spent Fuel Comparative Analysis

    International Nuclear Information System (INIS)

    Romania has a single NPP at Cernavoda with 5 PHWR reactors of the CANDU 6 type, 705 MW(e) each; Cernavoda Unit 1 has been operational since December 1996, Unit 2 is under construction, while the remaining Units 3-5 are being preserved. The worldwide development of nuclear energy is accompanied by the accumulation of huge quantities of spent nuclear fuel. In view of the possible impact on the population and the environment, the spent fuel characteristics must be well known in all activities associated with the nuclear fuel cycle, namely transportation, storage, reprocessing and disposal. The aim of the paper is to apply Monte Carlo methods to CANDU spent fuel analysis, starting from the discharge moment, followed by spent fuel transport after a defined cooling period, and finishing with the intermediate dry storage. As radiation sources, 3 CANDU fuels have been considered: the standard 37-rod fuel bundle with natural UO2 and SEU fuels, and the 43-rod fuel bundle with SEU fuel. After a criticality calculation using the KENO-VI code, the criticality coefficient and the actinide and fission product concentrations are obtained. Using the ORIGEN-S code, the photon source profiles are calculated and the spent fuel characteristics are estimated. For the shielding calculations the MORSE-SGC code has been used. Regarding spent fuel transport, the photon dose rates at the shipping cask wall and in air at different distances from the cask are estimated. The shielding calculation for the spent fuel intermediate dry storage is also done and the photon dose rates at the storage basket wall (the active element of the Cernavoda NPP intermediate dry storage) are obtained. A comparison between the 3 types of CANDU fuels is presented. (authors)

  11. Status of software for PGNAA bulk analysis by the Monte Carlo - Library Least-Squares (MCLLS) approach

    International Nuclear Information System (INIS)

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for about ten years on the Monte Carlo - Library Least-Squares (MCLLS) approach for treating the nonlinear inverse analysis problem for PGNAA bulk analysis. This approach consists essentially of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required libraries. These libraries are then used in the linear Library Least-Squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. The other libraries include all sources of background, namely: (1) gamma-rays emitted by the neutron source, (2) prompt gamma-rays produced in the analyzer construction materials, (3) natural gamma-rays from K-40 and the uranium and thorium decay chains, and (4) prompt and decay gamma-rays produced in the NaI detector by neutron activation. A number of unforeseen problems have arisen in pursuing this approach, including: (1) the neutron activation of the most common detector (NaI) used in bulk analysis PGNAA systems, (2) the nonlinearity of this detector, and (3) difficulties in obtaining detector response functions for this (and other) detectors. These problems have been addressed by CEAR recently and have either been solved or are almost solved at the present time. Development of the Monte Carlo simulation of all the libraries has been finished except for the prompt gamma-ray library from the activation of the NaI detector. The treatment of the coincidence schemes for Na and particularly I must first be determined to complete the Monte Carlo simulation of this last library. (author)
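
    As a rough illustration of the linear LLS step described above (not the CEAR software), the sketch below fits a "measured" spectrum as a linear combination of element and background library spectra by least squares. The libraries, peak positions and the synthetic spectrum are fabricated for the example.

        # Sketch of the linear Library Least-Squares step: fit a measured spectrum as a
        # linear combination of element and background library spectra. Libraries and
        # the "measured" spectrum are fabricated for illustration.
        import numpy as np

        rng = np.random.default_rng(4)
        channels = np.arange(512)

        def peak(center, width, amplitude=1.0):
            return amplitude * np.exp(-0.5 * ((channels - center) / width) ** 2)

        # Libraries: two "elements" plus one "background" continuum shape
        libraries = np.column_stack([
            peak(100, 5) + 0.4 * peak(300, 6),     # element A
            peak(180, 5) + 0.7 * peak(420, 7),     # element B
            np.exp(-channels / 200.0),             # background continuum
        ])

        true_amounts = np.array([3.0, 1.5, 10.0])
        measured = libraries @ true_amounts
        measured = rng.poisson(measured * 50) / 50.0       # add counting noise

        amounts, *_ = np.linalg.lstsq(libraries, measured, rcond=None)
        print("fitted library coefficients:", amounts.round(3))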

  12. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit

  13. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    International Nuclear Information System (INIS)

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings Ci. The two solutions are related

  14. Analysis of uncertainty quantification method by comparing Monte-Carlo method and Wilk's formula

    International Nuclear Information System (INIS)

    An analysis of the uncertainty quantification related to LBLOCA using Monte Carlo calculations has been performed and compared with the tolerance level determined by the Wilks formula. The uncertainty range and distribution of each input parameter associated with the LOCA phenomena were determined based on previous PIRT results and documentation from the BEMUSE project. Calculations were conducted for 3,500 cases within a 2-week CPU time on a 14-PC cluster system. The Monte Carlo exercise shows that the 95% upper-limit PCT value can be obtained well, with a 95% confidence level, using the Wilks formula, although we have to accept a 5% risk of PCT under-prediction. The results also show that the statistical fluctuation of the limit value obtained with the first-order Wilks formula is as large as the uncertainty value itself. It is therefore desirable to increase the order of the Wilks formula to higher than the second order to estimate a reliable safety margin for the design features. It is also shown that, with its ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame.
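
    As a rough illustration of how the required sample size grows with the order of the Wilks formula (a generic sketch, not part of the cited analysis), the example below finds the smallest number of code runs N such that the m-th largest result is a one-sided upper tolerance limit with 95% coverage at 95% confidence; it reproduces the familiar values 59, 93 and 124 for orders 1, 2 and 3.

        # Sketch: smallest Wilks sample size N such that the m-th largest of N runs is a
        # one-sided upper tolerance limit with coverage gamma at confidence beta.
        from math import comb

        def wilks_sample_size(gamma=0.95, beta=0.95, order=1):
            n = order
            while True:
                # P(at least `order` of the N samples exceed the gamma quantile)
                confidence = 1.0 - sum(comb(n, k) * (1 - gamma) ** k * gamma ** (n - k)
                                       for k in range(order))
                if confidence >= beta:
                    return n
                n += 1

        for m in (1, 2, 3):
            print("order %d: N = %d" % (m, wilks_sample_size(order=m)))
        # prints 59, 93, 124 for the one-sided 95%/95% case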

  15. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    Science.gov (United States)

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on the point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead), was generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators. PMID:25811254
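
    The point-source line-of-sight approximation underlying such a data set reduces to an inverse-square term times an exponential attenuation through the shield. The sketch below uses assumed, illustrative numbers for the source term and attenuation length rather than the published data set.

```python
# Point-source line-of-sight shielding estimate: dose = S(E, theta) / r^2 * exp(-d / lambda),
# with a placeholder source term and attenuation length (not the paper's data).
import math

def transmitted_dose(source_term, distance_m, shield_thickness_cm, atten_length_cm):
    """Dose rate behind a slab shield for a point source along the line of sight."""
    geometric = source_term / distance_m ** 2           # inverse-square spreading
    attenuation = math.exp(-shield_thickness_cm / atten_length_cm)
    return geometric * attenuation

# Hypothetical numbers: ~230 MeV protons on an iron target behind a concrete wall.
dose = transmitted_dose(source_term=1.0e-2,             # Sv*m^2/h at 1 m (assumed)
                        distance_m=4.0,
                        shield_thickness_cm=200.0,
                        atten_length_cm=45.0)            # assumed attenuation length
print(f"transmitted dose rate ~ {dose:.2e} Sv/h")
```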

  16. Statistical analysis for discrimination of prompt gamma ray peak induced by high energy neutron: Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Moo-Sub; Jung, Joo-Young; Suh, Tae Suk [College of Medicine, Catholic University of Korea, Seoul (Korea, Republic of)

    2015-05-15

    The purpose of this research was the statistical analysis for discrimination of the prompt gamma ray peak induced by 14.1 MeV neutrons from spectra using Monte Carlo simulation. For the simulation, the information of the eighteen detector materials was used to simulate spectra by the neutron capture reaction. To the best of our knowledge, the results in this study are the first reported data regarding the peak discrimination of high energy prompt gamma rays using many cases (the eighteen detector materials and the nine prompt gamma ray peaks). Reliable data based on the Monte Carlo method and statistical methods under identical conditions were deduced. Our results are important data in the PGAA study for the peak detection within actual experiments.

  17. Statistical analysis for discrimination of prompt gamma ray peak induced by high energy neutron: Monte Carlo simulation study

    International Nuclear Information System (INIS)

    The purpose of this research was the statistical analysis for discrimination of the prompt gamma ray peak induced by 14.1 MeV neutrons from spectra using Monte Carlo simulation. For the simulation, the information of the eighteen detector materials was used to simulate spectra by the neutron capture reaction. To the best of our knowledge, the results in this study are the first reported data regarding the peak discrimination of high energy prompt gamma rays using many cases (the eighteen detector materials and the nine prompt gamma ray peaks). Reliable data based on the Monte Carlo method and statistical methods under identical conditions were deduced. Our results are important data in the PGAA study for the peak detection within actual experiments

  18. Performance analysis based on a Monte Carlo simulation of a liquid xenon PET detector

    International Nuclear Information System (INIS)

    Liquid xenon is a very attractive medium for position-sensitive gamma-ray detectors for a very wide range of applications, namely, in medical radionuclide imaging. Recently, the authors have proposed a liquid xenon detector for positron emission tomography (PET). In this paper, some aspects of the performance of a liquid xenon PET detector prototype were studied by means of Monte Carlo simulation

  19. Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods

    International Nuclear Information System (INIS)

    The Monte Carlo method has been applied for the simulation of electron trajectories in a bulk sample, and therefore for the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the Gaussian model. (Author)

  20. Maximum-likelihood absorption tomography

    International Nuclear Information System (INIS)

    Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
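
    A minimal numerical sketch of the idea, assuming a Poisson transmission model y_i ~ Poisson(b·exp(-(Aμ)_i)) and a projected gradient ascent on the log-likelihood. This toy 1-D system illustrates iterative maximum-likelihood reconstruction in general, not the authors' specific algorithm or beam statistics.

```python
# Maximum-likelihood reconstruction for transmission (absorption) data, 1-D toy:
# counts y_i ~ Poisson(b * exp(-(A @ mu)_i)); mu updated by projected gradient ascent.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_rays, b = 8, 40, 1.0e4

A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))         # toy system matrix (ray lengths)
mu_true = rng.uniform(0.05, 0.3, size=n_pix)             # true attenuation coefficients
y = rng.poisson(b * np.exp(-A @ mu_true))                # simulated detector counts

mu = np.full(n_pix, 0.1)                                  # initial guess
step = 1.0 / (b * (A ** 2).sum())                         # small, safe step size
for it in range(5000):
    lam = b * np.exp(-A @ mu)                             # expected counts for current mu
    grad = A.T @ (lam - y)                                # gradient of the Poisson log-likelihood
    mu = np.clip(mu + step * grad, 0.0, None)             # ascent step, enforce mu >= 0

print("true :", mu_true.round(3))
print("recon:", mu.round(3))
```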

  1. Statistical analysis for discrimination of prompt gamma ray peak induced by high energy neutron: Monte Carlo simulation study

    International Nuclear Information System (INIS)

    The purpose of this research is the statistical analysis for discrimination of the prompt gamma ray peak induced by 14.1 MeV neutrons from spectra using Monte Carlo simulation. For the simulation, the information of 18 detector materials was used to simulate spectra by the neutron capture reaction. The discrimination of nine prompt gamma ray peaks from the simulation of each detector material was performed. We present several comparison indexes of energy resolution performance depending on the detector material, using the simulation and statistics, for the prompt gamma activation analysis. (author)

  2. A viable method for goodness-of-fit test in maximum likelihood fit

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; GAO Yuan-Ning; HUO Lei

    2011-01-01

    A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.
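
    The Monte Carlo estimation of the correlation between a goodness-of-fit statistic and the unbinned likelihood can be mimicked with toy experiments. In the sketch below, the exponential model and the Kolmogorov distance are assumed stand-ins for the paper's fit model and its test statistic.

```python
# Toy Monte Carlo estimate of the correlation between the unbinned log-likelihood
# and a goodness-of-fit statistic (illustrative model and statistic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
tau_fit, n_events, n_toys = 2.0, 200, 2000

lnL, T = [], []
for _ in range(n_toys):
    x = rng.exponential(tau_fit, size=n_events)           # toy data from the fitted model
    lnL.append(np.sum(stats.expon.logpdf(x, scale=tau_fit)))
    # A simple test statistic: Kolmogorov distance to the fitted CDF.
    T.append(stats.kstest(x, stats.expon(scale=tau_fit).cdf).statistic)

r = np.corrcoef(lnL, T)[0, 1]
print(f"estimated correlation between lnL and the test statistic: {r:.2f}")
```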

  3. On Russian Roulette Estimates for Bayesian Inference with Doubly-Intractable Likelihoods

    OpenAIRE

    Lyne, Anne-Marie; Girolami, Mark; Atchadé, Yves; Strathmann, Heiko; Simpson, Daniel

    2013-01-01

    A large number of statistical models are “doubly-intractable”: the likelihood normalising term, which is a function of the model parameters, is intractable, as well as the marginal likelihood (model evidence). This means that standard inference techniques to sample from the posterior, such as Markov chain Monte Carlo (MCMC), cannot be used. Examples include, but are not confined to, massive Gaussian Markov random fields, autologistic models and Exponential random graph models. A number of app...

  4. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
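
    DILI itself combines operator-weighted proposals with a likelihood-informed subspace, which is beyond a short sketch. The preconditioned Crank-Nicolson (pCN) proposal below illustrates the simpler, related property of discretization-invariant acceptance for targets defined with respect to a Gaussian reference measure; the toy likelihood is an assumption for illustration.

```python
# Not the DILI sampler itself, but the preconditioned Crank-Nicolson (pCN) proposal:
# its acceptance rate stays roughly constant as the discretization dimension grows,
# provided the target is a likelihood times a Gaussian-prior reference measure.
import numpy as np

rng = np.random.default_rng(4)
dim, beta, n_steps = 200, 0.2, 5000
data = 1.0                                                 # toy scalar observation

def neg_log_likelihood(u):
    # Toy likelihood: observe the mean of the discretized function with noise 0.1.
    return 0.5 * ((u.mean() - data) / 0.1) ** 2

u = np.zeros(dim)
phi = neg_log_likelihood(u)
accepted = 0
for _ in range(n_steps):
    xi = rng.standard_normal(dim)                          # draw from the Gaussian prior
    u_prop = np.sqrt(1.0 - beta ** 2) * u + beta * xi      # pCN proposal (prior-preserving)
    phi_prop = neg_log_likelihood(u_prop)
    if np.log(rng.uniform()) < phi - phi_prop:             # accept/reject on the likelihood only
        u, phi = u_prop, phi_prop
        accepted += 1
print(f"acceptance rate: {accepted / n_steps:.2f} (roughly unchanged if dim is increased)")
```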

  5. Obtaining reliable likelihood ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    2014-01-01

    … programs - to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). Problem 1: Inconsistent LR tests due to asymmetric draws: This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters this practice is very … likely to cause misleading test results for the number of draws usually used today. The paper illustrates that increasing the number of draws is a very inefficient solution strategy requiring very large numbers of draws to ensure against misleading test statistics. The main conclusion of this paper is …

  6. Small angle neutron scattering by unfolded proteins: analysis using Monte Carlo simulation and molecular mechanics

    International Nuclear Information System (INIS)

    Small Angle Neutron Scattering (SANS) experiments have been performed on highly unfolded phosphoglycerate kinase (PGK) obtained by denaturation in 4M guanidinium chloride. The data were initially interpreted using analytical models in which the scattering density associated with protein was represented as a Freely Jointed Chain (FJC) of contiguous spheres. We have recently developed from the same data a Monte Carlo simulation technique with experimental constraints for sampling the configurational distribution of various low resolution models, including FJC. In all these models the unfolded protein is pictured as a chain of N contiguous spheres where N is an independent parameter. The models differ, however, by the degree of interpenetration of neighbours higher than second order. Configurationally averaged scattering profiles coming from different models and different N are fitted to the data at very low q using the method described in a previous communication and the similarity of the model curve to the experiment is examined over the rest of the q range. With this method we have demonstrated that models incorporating an excluded volume condition reproduce the SANS profiles markedly better than the FJC. The best agreement was obtained for an excluded volume chain model (EVC) of 82 spheres with a hard core of 0.7, i.e. an interpenetration of 0.3. For PGK this corresponds to 5 aa/sphere. Sphere model configurations can then be used to generate configurations at atomic level using molecular mechanics. Different models of the local conformation of the polypeptide chain can thus be tested. Reconstruction of the scattering curve from atomic level configurations demonstrates that in the intermediate q region the SANS signal is sensitive to the overall phi/psi distribution of the protein, being an indicator of the presence or absence of native secondary structure. Our analysis demonstrates also that at high q the SANS signal is driven by the associated solvent and counter-ion cloud

  7. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

    Full Text Available Abstract Background Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte-Carlo simulations, to measure completeness grades of GRNs. Results We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can

  8. Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan

    2007-05-15

    This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study has focused on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include k-inf, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the k-inf values of the LWR fuel pin problems show good agreement with McCARD, within 956 pcm and 658 pcm, respectively. For the assembly problems with gadolinia burnable poison rods, the largest difference between the k-inf values is 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.

  9. A numerical analysis of antithetic variates in Monte Carlo radiation transport with geometrical surface splitting

    International Nuclear Information System (INIS)

    A numerical study for effective implementation of the antithetic variates technique with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. The study is based on the theory of Monte Carlo errors where a set of coupled integral equations are solved for the first and second moments of the score and for the expected number of flights per particle history. Numerical results are obtained for particle transmission through an infinite homogeneous slab shield composed of an isotropically scattering medium. Two types of antithetic transformations are considered. The results indicate that the antithetic transformations always lead to reduction in variance and increase in efficiency provided optimal antithetic parameters are chosen. A substantial gain in efficiency is obtained by incorporating antithetic transformations in rule of thumb splitting. The advantage gained for thick slabs (∼20 mfp) with low scattering probability (0.1-0.5) is attractively large. (author). 27 refs., 9 tabs
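
    A generic illustration of why an antithetic transformation reduces variance for a monotone score (not the slab-transmission estimator analysed in the paper): each uniform draw U is paired with 1-U and the two scores averaged, at the same total number of function evaluations.

```python
# Antithetic variates on a toy monotone integrand: estimate E[exp(-5U)], U ~ Uniform(0,1),
# with plain sampling versus antithetic pairs (U, 1-U) at equal total cost.
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 50_000

u = rng.uniform(size=2 * n_pairs)
plain = np.exp(-5.0 * u)                                  # 2N independent scores

v = rng.uniform(size=n_pairs)
antithetic = 0.5 * (np.exp(-5.0 * v) + np.exp(-5.0 * (1.0 - v)))  # N paired scores

print(f"plain      : {plain.mean():.5f} +- {plain.std(ddof=1) / np.sqrt(plain.size):.5f}")
print(f"antithetic : {antithetic.mean():.5f} +- "
      f"{antithetic.std(ddof=1) / np.sqrt(antithetic.size):.5f}")
```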

  10. Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms

    Science.gov (United States)

    Latuszynski, Krzysztof

    2009-07-01

    In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains and a regenerative proof of a CLT version for uniformly ergodic Markov chains with E_π f² < ∞. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators for a general state space setting (not necessarily compact) and not necessarily bounded target function f are obtained. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and a law of large numbers for adaptive procedures under a path-stability condition for transition kernels.
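
    A minimal fixed-width sketch, assuming a toy Metropolis chain for a standard normal target and a batch-means estimate of the asymptotic variance; the regeneration and drift conditions that justify the procedure in the thesis are not checked here, and the chain is simply regenerated at each doubling for brevity.

```python
# Fixed-width MCMC sketch: run a toy Metropolis chain for a N(0,1) target and stop
# when the batch-means half-width of the mean estimate drops below a tolerance.
import numpy as np

rng = np.random.default_rng(6)

def metropolis(n, x0=0.0, scale=1.0):
    x, out = x0, np.empty(n)
    for i in range(n):
        prop = x + scale * rng.standard_normal()
        if np.log(rng.uniform()) < 0.5 * (x ** 2 - prop ** 2):   # N(0,1) acceptance ratio
            x = prop
        out[i] = x
    return out

tol, n = 0.02, 10_000
while True:
    chain = metropolis(n)                                        # fresh chain at each doubling
    n_batches = int(np.sqrt(n))
    n_used = n_batches * n_batches
    batch_means = chain[:n_used].reshape(n_batches, n_batches).mean(axis=1)
    sigma2_hat = n_batches * batch_means.var(ddof=1)             # batch-means variance estimate
    half_width = 1.96 * np.sqrt(sigma2_hat / n_used)
    if half_width < tol:
        break
    n *= 2                                                       # not yet precise enough
print(f"n={n}, mean={chain.mean():.4f} +- {half_width:.4f}")
```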

  11. Likelihood Functions for Galaxy Cluster Surveys

    CERN Document Server

    Holder, G

    2006-01-01

    Galaxy cluster surveys offer great promise for measuring cosmological parameters, but survey analysis methods have not been widely studied. Using methods developed decades ago for galaxy clustering studies, it is shown that nearly exact likelihood functions can be written down for galaxy cluster surveys. The sparse sampling of the density field by galaxy clusters allows simplifications that are not possible for galaxy surveys. An application to counts in cells is explicitly tested using cluster catalogs from numerical simulations and it is found that the calculated probability distributions are very accurate at masses above several times 10^{14}h^{-1} solar masses at z=0 and lower masses at higher redshift.
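
    In the sparse-sampling limit with cell correlations neglected, the counts-in-cells likelihood reduces to independent Poisson factors, as in the sketch below; the per-cell mean is a placeholder for a mass-function prediction, and the exact likelihood discussed in the paper generalizes this by including clustering.

```python
# Simplified counts-in-cells likelihood for a cluster survey: independent Poisson
# cells with a mean set by a placeholder mass-function prediction per cell.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n_cells, true_density = 400, 2.5                     # clusters per cell above a mass threshold
counts = rng.poisson(true_density, size=n_cells)     # mock survey counts in cells

def neg_log_like(density):
    if density <= 0:
        return np.inf
    return -np.sum(stats.poisson.logpmf(counts, density))

fit = minimize_scalar(neg_log_like, bounds=(0.01, 10.0), method="bounded")
print(f"maximum-likelihood cluster density per cell: {fit.x:.3f} (truth {true_density})")
```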

  12. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time and that EM has more difficulty in recovering the true scale of the coefficients. In this paper, the analysis is extended from cross-sectional data to the less volatile case of panel data to explore the effect on the relative performance of the methods with several realizations of the random coefficients. In a series of Monte Carlo experiments, evidence suggested four main conclusions: (a) efficiency increased when the true variance-covariance matrix became diagonal, (b) EM was more robust to the curse of dimensionality in regard to efficiency and estimation time, (c) EM did not recover the true scale …

  13. Monte Carlo Renormalization Group Analysis of Lattice $\phi^4$ Model in $D=3,4$

    OpenAIRE

    Itakura, M

    1999-01-01

    We present a simple, sophisticated method to capture renormalization group flow in Monte Carlo simulation, which provides important information on critical phenomena. We applied the method to the $D=3,4$ lattice $\phi^4$ model and obtained a renormalization group flow diagram which reproduces well the theoretically predicted behavior of the continuum $\phi^4$ model. We also show that the method can be easily applied to much more complicated models, such as frustrated spin models.

  14. ANALYSIS OF NEIGHBORHOOD IMPACTS ARISING FROM IMPLEMENTATION OF SUPERMARKETS IN CITY OF SÃO CARLOS

    OpenAIRE

    Pedro Silveira Gonçalves Neto; José Augusto de Lollo

    2010-01-01

    The study included supermarkets of different sizes (small, medium and large - defined based on the area occupied by the project and volume of activity) located in São Carlos (São Paulo state, Brazil) to evaluate how the size of the project influences the neighborhood impacts generated by these supermarkets. Factors such as the location of the enterprises, the size of the building, and their areas of influence were considered, since they contribute to increased population density and changes in the use of ...

  15. Monte Carlo analysis of the terahertz difference frequency generation susceptibility in quantum cascade laser structures.

    Science.gov (United States)

    Jirauschek, Christian; Okeil, Hesham; Lugli, Paolo

    2015-01-26

    Based on self-consistent ensemble Monte Carlo simulations coupled to the optical field dynamics, we investigate the giant nonlinear susceptibility giving rise to terahertz difference frequency generation in quantum cascade laser structures. Specifically, the dependence on temperature, bias voltage and frequency is considered. It is shown that the optical nonlinearity is temperature insensitive and covers a broad spectral range, as required for widely tunable room temperature terahertz sources. The obtained results are consistent with available experimental data. PMID:25835923

  16. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    International Nuclear Information System (INIS)

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  17. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, A.; Sanchez, V. [Karlsruhe Inst. of Technology, Inst. for Neutron Physics and Reactor Technology, Herman-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Hoogenboom, J. E. [Delft Univ. of Technology, Faculty of Applied Sciences, Mekelweg 15, 2629 JB Delft (Netherlands)

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  18. Nonlinear Stochastic stability analysis of Wind Turbine Wings by Monte Carlo Simulations

    DEFF Research Database (Denmark)

    Larsen, Jesper Winther; Iwankiewiczb, R.; Nielsen, Søren R.K.

    2007-01-01

    … under narrow-banded excitation, and it is shown that the qualitative behaviour of the strange attractor is very similar for the periodic and almost periodic responses, whereas the strange attractor for the chaotic case loses structure as the excitation becomes narrow-banded. Furthermore, the … characteristic behaviour of the strange attractor is shown to be identifiable by the so-called information dimension. Due to the complexity of the coupled nonlinear structural system all analyses are carried out via Monte Carlo simulations.

  19. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    Directory of Open Access Journals (Sweden)

    Minjeong eJeon

    2014-04-01

    Full Text Available Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described, and its applicability for MTMM models with categorical data is discussed. An illustration is provided using an empirical example.

  20. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation is running

  1. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphael Georges

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  2. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    Science.gov (United States)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

    A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with inexactness of input data represented as intervals, (2) mitigating computational time due to the introduction of the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives are considered in each duration on the basis of four criteria. Results indicated that the most desirable management strategy lay in action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management.
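
    A stripped-down sketch of the interval-sampling idea: each criterion of each alternative is given as an interval, Monte Carlo draws are taken inside the intervals, and alternatives are ranked by how often they attain the best weighted score. The intervals, weights, and three alternatives below are assumptions for illustration, not the case-study data, and criterion normalization is omitted.

```python
# Simplified MCITA-style ranking: interval-valued criteria, uniform Monte Carlo
# sampling inside the intervals, and alternatives ranked by frequency of best score.
import numpy as np

rng = np.random.default_rng(8)

# rows: management alternatives; columns: (cost, concentration, health risk) as [low, high]
intervals = np.array([
    [[1.0, 1.4], [0.20, 0.35], [0.08, 0.15]],
    [[0.8, 1.1], [0.30, 0.50], [0.10, 0.20]],
    [[1.2, 1.6], [0.10, 0.25], [0.05, 0.12]],
])
weights = np.array([0.4, 0.3, 0.3])                    # all criteria are "smaller is better"

n_draws = 20_000
samples = rng.uniform(intervals[..., 0], intervals[..., 1],
                      size=(n_draws,) + intervals.shape[:2])
scores = (samples * weights).sum(axis=2)               # weighted cost-type score per draw
best = np.bincount(scores.argmin(axis=1), minlength=len(intervals)) / n_draws
for i, p in enumerate(best):
    print(f"alternative {i + 1}: preferred in {p:.1%} of draws")
```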

  3. An Analysis of the Nuclear Data Libraries' Impact on the Criticality Computations Performed using Monte Carlo Codes

    International Nuclear Information System (INIS)

    The major aim of this work is a sensitivity analysis related to the influence of the different nuclear data libraries on the k-infinity values and on the void coefficient estimations performed for various CANDU fuel projects, and on the simulations related to the replacement of the original stainless steel adjuster rods by cobalt assemblies in the CANDU reactor core. The computations are performed using the Monte Carlo transport codes MCNP5 and MONTEBURNS 1.0 for the actual, detailed geometry and material composition of the fuel bundles and reactivity devices. Some comparisons with deterministic and probabilistic codes involving the WIMS library are also presented

  4. Maximum Likelihood Factor Analysis of the Effects of Chronic Centrifugation on the Structural Development of the Musculoskeletal System of the Rat

    Science.gov (United States)

    Amtmann, E.; Kimura, T.; Oyama, J.; Doden, E.; Potulski, M.

    1979-01-01

    At the age of 30 days female Sprague-Dawley rats were placed on a 3.66 m radius centrifuge and subsequently exposed almost continuously for 810 days to either 2.76 or 4.15 G. An age-matched control group of rats was raised near the centrifuge facility at earth gravity. Three further control groups of rats were obtained from the animal colony and sacrificed at the age of 34, 72 and 102 days. A total of 16 variables were simultaneously factor analyzed by a maximum-likelihood extraction routine and the factor loadings presented after rotation to simple structure by a varimax rotation routine. The variables include the G-load, age, body mass, femoral length and cross-sectional area, inner and outer radii, density and strength at the mid-length of the femur, and dry weight of the gluteus medius, semimembranosus and triceps surae muscles. Factor analyses on A) all controls, B) all controls and the 2.76 G group, and C) all controls and centrifuged animals, produced highly similar loading structures of three common factors which accounted for 74%, 68% and 68%, respectively, of the total variance. The 3 factors were interpreted as: 1. An age and size factor which stimulates the growth in length and diameter and increases the density and strength of the femur. This factor is positively correlated with G-load but is also active in the control animals living at earth gravity. 2. A growth inhibition factor which acts on body size, femoral length and on both the outer and inner radius at mid-length of the femur. This factor is intensified by centrifugation.

  5. Simulated Maximum Likelihood using Tilted Importance Sampling

    OpenAIRE

    Christian N. Brinch

    2008-01-01

    Abstract: This paper develops the important distinction between tilted and simple importance sampling as methods for simulating likelihood functions for use in simulated maximum likelihood. It is shown that tilted importance sampling removes a lower bound to simulation error for given importance sample size that is inherent in simulated maximum likelihood using simple importance sampling, the main method for simulating likelihood functions in the statistics literature. In addit...
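
    The effect of tilting can be seen on a standard toy problem, estimating a Gaussian tail probability with an exponentially tilted (mean-shifted) proposal. This illustrates the general mechanism of tilted importance sampling, not the paper's simulated-likelihood estimator.

```python
# Exponential tilting versus simple Monte Carlo for a tail probability P(X > a),
# X ~ N(0,1): a generic illustration of why a tilted proposal reduces simulation error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a, n = 4.0, 100_000

# Simple Monte Carlo: almost no samples land in the tail.
x = rng.standard_normal(n)
simple = (x > a).astype(float)

# Exponentially tilted proposal N(a, 1); weights are the likelihood ratio phi(x)/phi_a(x).
y = rng.normal(a, 1.0, n)
w = np.exp(stats.norm.logpdf(y) - stats.norm.logpdf(y, loc=a))
tilted = w * (y > a)

exact = stats.norm.sf(a)
print(f"exact      : {exact:.3e}")
print(f"simple MC  : {simple.mean():.3e} +- {simple.std(ddof=1) / np.sqrt(n):.3e}")
print(f"tilted IS  : {tilted.mean():.3e} +- {tilted.std(ddof=1) / np.sqrt(n):.3e}")
```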

  6. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    Energy Technology Data Exchange (ETDEWEB)

    Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se [Department of Radiation Physics, Göteborg University, Göteborg 41345 (Sweden); Kindblom, Jon [Department of Oncology, Sahlgrenska University Hospital, Göteborg 41345 (Sweden); Bernhardt, Peter [Department of Radiation Physics, Göteborg University, Göteborg 41345, Sweden and Department of Nuclear Medicine, Sahlgrenska University Hospital, Göteborg 41345 (Sweden)

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became

  7. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower end, due

  8. Sodium void reactivity effect analysis using the newly developed exact perturbation theory in Monte-Carlo code TRIPOLI-4®

    International Nuclear Information System (INIS)

    The analysis of the void reactivity effect is of prominent interest for Sodium-cooled Fast Reactor (SFR) safety. Indeed, in case of sodium leakage from the primary circuit, void reactivity represents the main passive negative feedback to ensure reactivity control. The core can be designed to maximize neutron leakage and lower the average neutron multiplication factor in the event of sodium disappearing from within assemblies. Thus, the nuclear chain reaction is stopped. The most promising solution is to place a sodium region above the fuel in order for neutrons to be reflected when the region is filled and to escape when the region is empty. In terms of simulation, this configuration is a challenge for usual calculation schemes: 1. Deterministic codes are typically limited in their ability to homogenize a sub-critical medium such as the sodium plenum. 2. Monte Carlo codes are typically not able to split the total reactivity effect into different components, which prevents straightforward uncertainty analysis. Furthermore, since experimental values can sometimes be small, Monte Carlo codes may not converge within a reasonable computation time. A new feature recently available in the Monte Carlo code TRIPOLI-4®, based on the Exact Perturbation Theory, allows very small reactivity perturbations to be computed accurately and reactivity effects to be estimated for distinct isotopes and cross-sections. In the first part of this paper, this new feature of the code is described; it is then applied in the second part to a core configuration composed of several layers of fuel and fertile zones below a sodium plenum. Reactivity and its contributions from specific reactions and energy groups are calculated and compared with the results of the deterministic code ERANOS. The aim of this work is twofold: (1) achieve a numerical validation of the new TRIPOLI-4® features and (2) identify where deterministic codes might be less accurate and why – even when using them at full capacity (S16

  9. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent’s coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
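
    A skeleton of that workflow, with an assumed placeholder function standing in for the scripted NETPATH run: inputs are sampled from per-constituent distributions defined by coefficients of variation, the ensemble of outputs gives the uncertainty, and a one-at-a-time perturbation gives standardized sensitivities. None of the constituent names, values, or the response function below come from the report.

```python
# Skeleton of a Monte Carlo uncertainty / one-at-a-time sensitivity workflow;
# `travel_time_model` is a hypothetical placeholder for the scripted NETPATH run.
import numpy as np

rng = np.random.default_rng(10)

def travel_time_model(inputs):
    # Placeholder: carbon-14 travel time as some nonlinear function of the inputs.
    return 1000.0 * inputs["carbon14"] + 500.0 * inputs["alkalinity"] ** 0.5

nominal = {"carbon14": 10.0, "alkalinity": 4.0}
cv = {"carbon14": 0.05, "alkalinity": 0.10}              # coefficients of variation (assumed)

# Uncertainty: sample each constituent from a normal with its CV, collect outputs.
outputs = []
for _ in range(5000):
    sample = {k: rng.normal(v, cv[k] * v) for k, v in nominal.items()}
    outputs.append(travel_time_model(sample))
outputs = np.array(outputs)
print(f"travel time: {outputs.mean():.0f} +- {outputs.std(ddof=1):.0f} years")

# Sensitivity: perturb one constituent at a time by +1% and standardize the response.
base = travel_time_model(nominal)
for k, v in nominal.items():
    perturbed = dict(nominal, **{k: 1.01 * v})
    rel_change = (travel_time_model(perturbed) - base) / base
    print(f"sensitivity to {k}: {rel_change / 0.01:+.2f} (relative change per 1% input change)")
```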

  10. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    Energy Technology Data Exchange (ETDEWEB)

    Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)

    1998-05-01

    The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.

  11. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process

    International Nuclear Information System (INIS)

    A computation code for the direct simulation Monte Carlo (DSMC) method was developed in order to analyze atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states. The calculated results were compared with experimental results obtained by laser absorption spectroscopy. Two types of DSMC simulations, which differed in the inelastic collision procedure, were carried out. It was concluded that energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)

  12. Random vibration analysis of switching apparatus based on Monte Carlo method

    Institute of Scientific and Technical Information of China (English)

    ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin

    2007-01-01

    The performance of switching apparatus containing mechanical contacts in a vibration environment is an important element when judging the apparatus's reliability. A piecewise linear two-degrees-of-freedom mathematical model considering contact loss was built in this work, and the vibration performance of the model under random external Gaussian white noise excitation was investigated using Monte Carlo simulation in Matlab/Simulink. The simulation showed that the spectral content and statistical characteristics of the contact force agreed closely with reality. The random vibration behaviour of the contact system was solved using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.

  13. XSBench. The development and verification of a performance abstraction for Monte Carlo reactor analysis

    International Nuclear Information System (INIS)

    We isolate the most computationally expensive steps of a robust nuclear reactor core Monte Carlo particle transport simulation. The hot kernel is then abstracted into a simplified proxy application, designed to mimic the key performance characteristics of the full application. A series of performance verification tests and analyses are carried out to investigate the low-level performance parameters of both the simplified kernel and the full application. The kernel's performance profile is found to closely match that of the application, making it a convenient test bed for performance analyses on cutting edge platforms and experimental next-generation high performance computing architectures. (author)

  14. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method in inhomogeneous linear particle transport problems describing the particle fields by solutions of Fredholm integral equations and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles and draw conclusions to improve this method

  15. A Monte Carlo simulation of neutron activation analysis of bulk objects

    Energy Technology Data Exchange (ETDEWEB)

    Fantidis, J.G. [Faculty of Engineering, Department of Electrical and Computer Engineering, Laboratory of Nuclear Technology, Democritus University of Thrace, Vas. Sofias 12, 67100 Xanthi (Greece); Nicolaou, G. [Faculty of Engineering, Department of Electrical and Computer Engineering, Laboratory of Nuclear Technology, Democritus University of Thrace, Vas. Sofias 12, 67100 Xanthi (Greece)], E-mail: nicolaou@ee.duth.gr; Tsagas, N.F. [Faculty of Engineering, Department of Electrical and Computer Engineering, Laboratory of Nuclear Technology, Democritus University of Thrace, Vas. Sofias 12, 67100 Xanthi (Greece)

    2009-03-15

    A PGNAA facility comprising an isotopic neutron source has been simulated using the Monte Carlo code MCNPX. The facility is envisaged for elemental composition studies of biomedical, environmental and industrial bulk objects. The study carried out aimed to improve the detection sensitivity of prompt gamma-rays emitted by a bulk object, measured in the presence of higher-energy ones. An appropriate collimator, a filter between the neutron source and the object and an optimisation of the positioning of the neutron beam and the detector relative to the object analysed were means to improve the desired sensitivity. The simulation is demonstrated for the in-vivo PGNAA of boron in the human liver.

  16. A Monte Carlo simulation of neutron activation analysis of bulk objects

    International Nuclear Information System (INIS)

    A PGNAA facility comprising an isotopic neutron source has been simulated using the Monte Carlo code MCNPX. The facility is envisaged for elemental composition studies of biomedical, environmental and industrial bulk objects. The study carried out aimed to improve the detection sensitivity of prompt gamma-rays emitted by a bulk object, measured in the presence of higher-energy ones. An appropriate collimator, a filter between the neutron source and the object and an optimisation of the positioning of the neutron beam and the detector relative to the object analysed were means to improve the desired sensitivity. The simulation is demonstrated for the in-vivo PGNAA of boron in the human liver.

  17. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    CERN Document Server

    Churmakov, D Y; Piletsky, S A; Greenhalgh, D A

    2003-01-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that would arise due to the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an effective depth.

  18. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
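
    The tapering idea can be sketched with a pairwise likelihood in which the weight of a pair is simply the indicator that its distance lies below the taper range. In the sketch below, a bivariate Gaussian with exponentially decaying correlation is an assumed stand-in for the bivariate max-stable densities actually used in the paper, and the data are synthetic.

```python
# Tapered pairwise composite likelihood sketch: only station pairs closer than a
# taper range contribute; a bivariate Gaussian stands in for the max-stable density.
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(11)
n_sites, n_obs, taper_range, true_range = 12, 50, 0.4, 0.3

coords = rng.uniform(size=(n_sites, 2))                  # station locations on a unit square
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
data = rng.multivariate_normal(np.zeros(n_sites), np.exp(-dist / true_range), size=n_obs)

def pair_loglik(xi, xj, rho):
    cov = [[1.0, rho], [rho, 1.0]]
    return stats.multivariate_normal([0.0, 0.0], cov).logpdf(np.column_stack([xi, xj])).sum()

def tapered_cl(range_param):
    total = 0.0
    for i, j in combinations(range(n_sites), 2):
        if dist[i, j] >= taper_range:                    # taper weight w_ij = 1{d_ij < taper range}
            continue
        rho = np.exp(-dist[i, j] / range_param)          # toy exponential dependence model
        total += pair_loglik(data[:, i], data[:, j], rho)
    return total

grid = np.linspace(0.1, 1.0, 10)
best = grid[int(np.argmax([tapered_cl(r) for r in grid]))]
print(f"tapered composite-likelihood estimate of the range parameter: {best:.2f} (truth {true_range})")
```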

  19. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  20. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D.; /Case Western Reserve U.

    2004-01-01

    first use of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  1. Monte Carlo optimization of sample dimensions of a 241Am-Be source-based PGNAA setup for water rejects analysis

    Science.gov (United States)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  2. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    Energy Technology Data Exchange (ETDEWEB)

    Dupuis, Paul [Brown University

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.

  3. Analysis of the tritium breeding ratio benchmark experiments using the Monte Carlo code TRIPOLI-4

    International Nuclear Information System (INIS)

    Tritium breeding is an essential element of fusion nuclear technology. A tritium breeding ratio greater than unity is necessary for self-sufficient fueling. To simulate the 14 MeV neutron transport in tritium breeding systems from the D-T fusion reaction, realistic 3D modeling with a Monte Carlo code and point-wise nuclear data is recommended. The continuous-energy TRIPOLI-4 Monte Carlo transport code has been widely used for radiation shielding, criticality safety, and fission reactor physics. To support the ITER TBM (test blanket module) neutronics study with the TRIPOLI-4 code, this paper presents the TRIPOLI-4 simulation of the TBR (tritium breeding ratio) for six OKTAVIAN spherical assemblies of Osaka University: Li, Li-C, Pb-Li, Pb-Li-C, Be-Li, and Be-Li-C. It also investigates the impact of the nuclear data libraries ENDF/B-VI.4, ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and FENDL-2.1 on the TBR calculations. In general, TRIPOLI-4 produced satisfactory C/E values. Only the beryllium data of the JEFF-3.1 library introduce higher uncertainties.

  4. Studying stellar binary systems with the Laser Interferometer Space Antenna using delayed rejection Markov chain Monte Carlo methods

    International Nuclear Information System (INIS)

    Bayesian analysis of Laser Interferometer Space Antenna (LISA) data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function consisting of several isolated local maxima that dramatically reduces the efficiency of the sampling techniques. Here we introduce a new fully Markovian algorithm, a delayed rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kind of structures and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.
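
    The delayed rejection mechanism can be written down in a few lines for a toy bimodal target: if a bold first-stage proposal is rejected, a more timid second-stage move is attempted with the Tierney-Mira acceptance probability. The target, the proposal scales, and the second-stage proposal (centred on the current point, so that its density cancels in the ratio) are assumptions for illustration, not the LISA setup.

```python
# Delayed-rejection Metropolis-Hastings sketch (Tierney-Mira second stage) for a
# toy bimodal target with symmetric Gaussian proposals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)

def target(x):                                           # unnormalized bimodal density
    return np.exp(-0.5 * (x - 3) ** 2) + np.exp(-0.5 * (x + 3) ** 2)

s1, s2, n_steps = 6.0, 0.5, 20_000                       # bold first stage, timid second stage
x, chain, second_stage_accepts = 0.0, [], 0
for _ in range(n_steps):
    y1 = x + s1 * rng.standard_normal()                  # stage 1: symmetric random walk
    a1 = min(1.0, target(y1) / target(x))
    if rng.uniform() < a1:
        x = y1
    else:                                                # stage 2: smaller move, DR acceptance;
        y2 = x + s2 * rng.standard_normal()              # q2 centred on x cancels in the ratio
        num = (target(y2) * stats.norm.pdf(y1, loc=y2, scale=s1)
               * (1.0 - min(1.0, target(y1) / target(y2))))
        den = (target(x) * stats.norm.pdf(y1, loc=x, scale=s1)
               * (1.0 - a1))
        if rng.uniform() < min(1.0, num / den):
            x = y2
            second_stage_accepts += 1
    chain.append(x)
print(f"second-stage acceptances: {second_stage_accepts}, chain mean ~ {np.mean(chain):.2f}")
```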

  5. A Monte Carlo study of an energy-weighted algorithm for radionuclide analysis with a plastic scintillation detector

    International Nuclear Information System (INIS)

    Nuisance and false alarms due to naturally occurring radioactive material (NORM) are major problems facing radiation portal monitors (RPMs) used for the screening of illicit radioactive materials in airports and ports. Based on energy-weighted counts, we suggest an algorithm that distinguishes radioactive nuclides with a plastic scintillation detector, despite its poor energy resolution. Our Monte Carlo simulation study demonstrated that man-made radionuclides can be separated from NORM using a conventional RPM. - Highlights: • A radiation portal monitor using a plastic scintillator was modeled and the energy spectra of six radionuclides were assessed. • An energy-weighted algorithm that enables radionuclide analysis with a plastic scintillator was suggested and evaluated. • The cases of a moving source and of shielding effects were evaluated and simultaneous radionuclide identification was carried out. • Analysis of the simulated spectra with the suggested method shows clear results enabling radionuclide identification.

  6. Improving PWR core simulations by Monte Carlo uncertainty analysis and Bayesian inference

    CERN Document Server

    Castro, Emilio; Buss, Oliver; Garcia-Herranz, Nuria; Hoefer, Axel; Porsch, Dieter

    2016-01-01

    A Monte Carlo-based Bayesian inference model is applied to the prediction of reactor operation parameters of a PWR nuclear power plant. In this non-perturbative framework, high-dimensional covariance information describing the uncertainty of microscopic nuclear data is combined with measured reactor operation data in order to provide statistically well-founded uncertainty estimates of integral parameters, such as the boron letdown curve and the burnup-dependent reactor power distribution. The performance of this methodology is assessed in a blind test approach, in which we use measurements of a given reactor cycle to improve the prediction of the subsequent cycle. The resulting improvement in prediction quality is substantial. In particular, the prediction uncertainty of the boron letdown curve, which is of utmost importance for the planning of the reactor cycle length, can be reduced by one order of magnitude by including the boron concentration measurement information of the previous...

  7. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    Chizhu Ding

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method, with peaches used as the representative stone fruit. The effects of the fruit core and the skin on light transport in the peaches were assessed. It is suggested that the skin, flesh and core should be treated separately, with different parameters, to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and by the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choice of source-detector distance, detection angle and source intensity is discussed. Accurate MC simulations may provide better insight into light propagation in stone fruit and help achieve optimal fruit quality inspection without extensive experimental measurements.
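
    A minimal random-walk sketch of Monte Carlo light transport in a two-layer medium is given below; it assumes isotropic scattering and made-up optical coefficients for a thin "skin" layer over "flesh", and is only meant to show how layer-specific absorption and scattering parameters enter such a simulation, not to reproduce the peach model of the study.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical two-layer slab: a thin "skin" layer (0-0.1 cm) over "flesh".
        # mu_a: absorption, mu_s: scattering coefficients [1/cm]; values are illustrative only.
        layers = [
            {"z_max": 0.1,    "mu_a": 0.5, "mu_s": 30.0},   # skin
            {"z_max": np.inf, "mu_a": 0.1, "mu_s": 10.0},   # flesh
        ]

        def layer_at(z):
            return layers[0] if z < layers[0]["z_max"] else layers[1]

        def simulate_photon(max_steps=10_000):
            # Track one photon launched straight down at the origin; isotropic scattering.
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])
            for _ in range(max_steps):
                lay = layer_at(pos[2])
                mu_t = lay["mu_a"] + lay["mu_s"]
                step = -np.log(rng.uniform()) / mu_t        # free path length
                pos = pos + step * direction
                if pos[2] < 0.0:                            # escaped back through the surface
                    return "reflected"
                if rng.uniform() < lay["mu_a"] / mu_t:      # absorbed
                    return "absorbed"
                # isotropic scattering: draw a new random direction
                cos_t = 2.0 * rng.uniform() - 1.0
                phi = 2.0 * np.pi * rng.uniform()
                sin_t = np.sqrt(1.0 - cos_t**2)
                direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            return "lost"

        fates = [simulate_photon() for _ in range(5_000)]
        print({f: fates.count(f) / len(fates) for f in set(fates)})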

  8. Analysis of aerial survey data on Florida manatee using Markov chain Monte Carlo.

    Science.gov (United States)

    Craig, B A; Newton, M A; Garrott, R A; Reynolds, J E; Wilcox, J R

    1997-06-01

    We assess population trends of the Atlantic coast population of Florida manatee, Trichechus manatus latirostris, by reanalyzing aerial survey data collected between 1982 and 1992. To do so, we develop an explicit biological model that accounts for the method by which the manatees are counted, the mammals' movement between surveys, and the behavior of the population total over time. Bayesian inference, enabled by Markov chain Monte Carlo, is used to combine the survey data with the biological model. We compute marginal posterior distributions for all model parameters and predictive distributions for future counts. Several conclusions, such as a decreasing population growth rate and low sighting probabilities, are consistent across different prior specifications. PMID:9192449

  9. Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method

    International Nuclear Information System (INIS)

    When fission occurs in the nuclear fuel of a reactor core, it releases large numbers of neutrons and γ rays, whose energy deposition in the fuel components gives rise to thermal stresses and radiation damage that affect the safe operation of the reactor. The three-dimensional Monte Carlo transport code MCNP, with continuous-energy cross-section libraries based on the ENDF/B series, was used to calculate the heat deposition rate in the reference assemblies of a PWR loaded in an 18-month short refueling cycle mode, and to obtain precise values for the control rods, thimble plugs and the new Gd-bearing burnable poison rods, so as to provide a basis for reactor design and safety verification. (authors)

  10. Benchmark analysis of criticality experiments in the TRIGA mark II using a continuous energy Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    The criticality analysis of the TRIGA-II benchmark experiment at the Musashi Institute of Technology Research Reactor (MuITR, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. To minimize errors due to an inexact geometry model, all fresh fuels and control rods as well as the vicinity of the core were precisely modeled. Effective multiplication factors (keff) from the initial core critical experiment and from the excess reactivity adjustment for several fuel-loading patterns, as well as the fuel element reactivity worth distributions, were used to validate the physical model and the neutron cross section data from the ENDF/B-V evaluation. The calculated keff overestimated the experimental data by about 1.0%Δk/k for both the initial core and the fuel-loading arrangements in which fuels or graphite elements were added only to the outer ring, but the discrepancy increased to 1.8%Δk/k for some fuel-loading patterns in which graphite elements were inserted into the inner ring. The comparison of the fuel element worth distributions showed the same tendency. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate for simulating the criticality of the TRIGA-II reactor. (author)

  11. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream

    Institute of Scientific and Technical Information of China (English)

    TIAN Bao-guo; SI Ji-tao; ZHAO Yan; WANG Hong-tao; HAO Ji-ming

    2007-01-01

    This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW) and to provide practical and effective technical support to policy-making, on the basis of a study of the solid waste management status and development trends in China and abroad. Focusing on the various treatment and disposal technologies and processes for MSW, this study established a Monte Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream analysis (for elements such as C, H, O, N and S) in combination with economic stream analysis of MSW was developed. By following the streams of the different treatment processes, consisting of various techniques for the generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte Carlo method was then used for model calibration, and a sensitivity analysis was carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If the landfilling cost increases, MSW separation treatment is recommended, with screening first, followed by partial incineration and partial composting and landfilling of the residues. The possibility of incineration being selected as the optimal technology was affected by the city scale: for big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases.

  12. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream.

    Science.gov (United States)

    Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming

    2007-01-01

    This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW) and to provide practical and effective technical support to policy-making, on the basis of a study of the solid waste management status and development trends in China and abroad. Focusing on the various treatment and disposal technologies and processes for MSW, this study established a Monte Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream analysis (for elements such as C, H, O, N and S) in combination with economic stream analysis of MSW was developed. By following the streams of the different treatment processes, consisting of various techniques for the generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte Carlo method was then used for model calibration, and a sensitivity analysis was carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If the landfilling cost increases, MSW separation treatment is recommended, with screening first, followed by partial incineration and partial composting and landfilling of the residues. The possibility of incineration being selected as the optimal technology was affected by the city scale: for big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases. PMID:17915696

  13. Mass flow rate sensitivity and uncertainty analysis in natural circulation boiling water reactor core from Monte Carlo simulations

    International Nuclear Information System (INIS)

    Our aim was to evaluate the sensitivity and uncertainty of the performance of a natural circulation boiling water reactor (NCBWR) with respect to the core mass flow rate. The analysis was carried out with Monte Carlo simulations of up to 40,000 repetitions; a sample size of 25,000 repetitions was considered valid for routine applications. A simplified boiling water reactor (SBWR) was used as an application example of the Monte Carlo method. The numerical code that simulates the SBWR performance uses a one-dimensional thermal-hydraulics model with non-equilibrium thermodynamics and a non-homogeneous flow approximation, together with one-dimensional fuel rod heat transfer. The neutron processes were simulated with a point reactor kinetics model with six groups of delayed neutrons. The sensitivity was evaluated in terms of 99% confidence intervals of the mean, to understand the range of mean values that may represent the entire statistical population of performance variables. A regression analysis with the mass flow rate as the predictor variable showed statistically valid linear correlations for both neutron flux and fuel temperature, and a quadratic relationship for the void fraction. No statistically valid correlation was observed between the total heat flux and the mass flow rate, although the heat flux at individual nodes was positively correlated with this variable. These correlations are useful for the study, analysis and design of any NCBWR. The uncertainties propagate as follows: for a 10% change in the core mass flow rate, the neutron power, total heat flux, average fuel temperature and average void fraction change by 8.74%, 7.77%, 2.74% and 0.58%, respectively.

  14. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    International Nuclear Information System (INIS)

    Based on the digital image analysis and inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

  15. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    We consider two likelihood ratio tests, the so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step towards generalizing Johansen's procedure to the fractional cointegration case. The standard cointegration analysis only considers the assumption that deviations from equilibrium can be integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider alternative hypotheses in which the equilibrium deviations can be mean reverting with an order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is found when considering an interval of possible values. The power of the proposed tests under

  16. Prediction and analysis of the time and energy resolution of scintillation-detectors by Monte-Carlo simulations

    International Nuclear Information System (INIS)

    A Monte Carlo model of the formation of scintillation detector signals is presented that allows the time and energy resolution of given scintillator-photomultiplier combinations to be predicted primarily from their basic data-sheet properties, such as light yield, decay time, quantum efficiency, and transit time spread. At the same time the model provides a deeper understanding of the performance-limiting factors and stimulates the development of improved methods for the analysis of detector output signals. The simulation results are compared to high-speed digitizer measurements of signals from a number of widely used scintillation materials such as LYSO, BaF2, LaBr3, NaI, and others.
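
    The following toy sketch, with placeholder data-sheet values not tied to any particular scintillator, shows how light yield, decay time, quantum efficiency and transit-time spread can be combined in a simple photon-arrival-time Monte Carlo to estimate a timing resolution; the model described in the record is considerably more detailed.

        import numpy as np

        rng = np.random.default_rng(3)

        # Illustrative data-sheet style inputs (not a specific scintillator or PMT).
        light_yield = 30_000      # photons per MeV
        decay_time = 40e-9        # s, single-exponential scintillation decay
        quantum_eff = 0.25        # photomultiplier quantum efficiency
        tts_sigma = 0.3e-9        # s, single-photoelectron transit-time spread
        energy = 0.511            # MeV deposited

        def event_time_estimate():
            n_phe = rng.poisson(light_yield * energy * quantum_eff)
            emission = rng.exponential(decay_time, n_phe)       # scintillation decay
            transit = rng.normal(0.0, tts_sigma, n_phe)         # PMT transit-time jitter
            arrival = np.sort(emission + transit)
            return arrival[0]                                   # first-photoelectron timing

        times = np.array([event_time_estimate() for _ in range(5000)])
        print(f"timing resolution (std of first-photoelectron time): {times.std()*1e12:.0f} ps")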

  17. Use of Monte Carlo simulation for computational analysis of critical systems on IPPE's facility addressing needs of nuclear safety

    International Nuclear Information System (INIS)

    The BFS-1 critical facility was built at the Institute of Physics and Power Engineering (Obninsk, Russia) for full-scale modeling of fast-reactor cores, blankets, in-vessel shielding, and storage. Although BFS-1 is a fast-reactor assembly, it is very flexible and can easily be reconfigured to represent numerous other types of reactor designs. This paper describes specific problems in the calculation of the neutron physics characteristics of integral experiments performed at the BFS facility. The available integral experiments performed on different critical configurations of the BFS facility were analyzed. Calculations of criticality, central reaction rate ratios, and fission rate distributions were carried out with the MCNP5 Monte Carlo code using different files of evaluated nuclear data. MCNP calculations with a 299-group multigroup library were also made for comparison with the pointwise library calculations. (authors)

  18. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)

  19. Monte Carlo simulation applied to order economic analysis Simulação de Monte Carlo aplicada à análise econômica de pedido

    Directory of Open Access Journals (Sweden)

    Abraão Freires Saraiva Júnior

    2011-03-01

    The use of mathematical and statistical methods can help managers deal with decision-making difficulties in the business environment. Some of these decisions are related to optimizing the use of productive capacity in order to obtain greater economic gains for the company. Within this perspective, this study aims to establish metrics to support the economic decision of whether or not to process orders in a company whose products have great variability in variable direct costs per unit, which generates accounting uncertainty. To achieve this objective, a five-step method is proposed, built from the integration of management accounting and operations research techniques, with emphasis on Monte Carlo simulation. The method is applied to a didactic example that uses real data obtained through field research carried out in a plastic products plant that employs recycled material. Finally, it is concluded that Monte Carlo simulation is effective for treating the variability of variable direct costs per unit and that the proposed method is useful for supporting decision-making related to order acceptance.
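
    A minimal sketch of the underlying idea, using hypothetical prices, quantities and cost distributions rather than the field data of the study, is to simulate the uncertain variable direct unit cost and read off the probability that accepting the order yields a positive contribution margin:

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical order: 10,000 units at a fixed selling price, with an uncertain
        # direct unit cost (e.g., recycled-material input of variable quality).
        price = 5.00
        quantity = 10_000
        order_specific_fixed_cost = 8_000.0
        n_sim = 100_000

        # Triangular distribution for the variable direct cost per unit (min, mode, max).
        unit_cost = rng.triangular(3.20, 4.10, 5.40, n_sim)

        contribution = (price - unit_cost) * quantity - order_specific_fixed_cost

        print(f"expected contribution margin: {contribution.mean():,.0f}")
        print(f"5th-95th percentile: {np.percentile(contribution, [5, 95])}")
        print(f"probability the order is economically attractive: {(contribution > 0).mean():.2%}")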

  20. Analysis of the KANT experiment on beryllium using TRIPOLI-4 Monte Carlo code

    International Nuclear Information System (INIS)

    Beryllium is an important material in fusion technology for multiplying neutrons in blankets. However, beryllium nuclear data differ among modern nuclear data evaluations. Recent investigations with TRIPOLI-4 Monte Carlo simulations of the tritium breeding ratio (TBR) demonstrated that beryllium reaction data are the main source of the calculation discrepancies between ENDF/B-VII.0 and JEFF-3.1. To clarify the library-induced uncertainties for beryllium, TRIPOLI-4 calculations of the Karlsruhe Neutron Transmission (KANT) experiment have been performed in this study using the ENDF/B-VII.0 and the new JEFF-3.1.1 data libraries. The KANT experiment on beryllium has been used to validate neutron transport codes and nuclear data libraries. A detailed KANT experiment benchmark has been compiled and published in the NEA/SINBAD database, and it is used as the reference in the present work. The neutron multiplication in bulk beryllium assemblies was considered with a central D-T neutron source. Neutron leakage spectra through the 5, 10, and 17 cm thick spherical beryllium shells were calculated, and five-group partial leakage multiplications are reported and discussed. In general, improved C/E ratios for the neutron leakage multiplications have been obtained. Both the ENDF/B-VII.0 and JEFF-3.1.1 beryllium data libraries of TRIPOLI-4 are now acceptable for fusion neutronics calculations.

  1. Monte Carlo Simulation of the EXO Gaseous Xenon Time Projection Chamber and Neural Network Analysis

    Science.gov (United States)

    Leonard, Francois

    Neutrinoless double beta decay has attracted much interest since its observation would reveal the neutrino masses and determine the Majorana nature of the particle. EXO is among the next generation of experiments dedicated to the search for this phenomenon. A part of the collaboration is developing a gas phase time projection chamber prototype to study the performance of this technique for measuring the half-life of neutrinoless double beta decay in 136Xe. A Monte Carlo simulation of this prototype has been developed using the Geant4 toolkit and the Garfield and Maxwell programs to simulate ionizing events in the detector, the production and propagation of the scintillation and electroluminescence signals and their distribution on CsI photocathodes. The simulation was used to study the uniformity of light deposition on the photocathodes, the effect of the natural gamma background radiation on the detector and its response to calibration gamma sources. Furthermore, data produced with this simulation were analyzed with a neural network algorithm using the multi-layer perceptron class implemented in ROOT. The performance of this algorithm was studied for vertex reconstruction of ionizing events in the detector as well as for classification of tracks for background rejection.

  2. Analysis of probabilistic short run marginal cost using Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R. [Inst. Tecnologico de Morelia, Michoacan (Mexico). Dept. de Ing. Electrica y Electronica; Mota-Palomino, R. [Inst. Politecnico Nacional (Mexico). Escuela Superior de Ingenieria Mecanica y Electrica

    1999-11-01

    The structure of the electricity supply industry is undergoing dramatic changes in order to provide new service options. The main aim of this restructuring is to allow generating units the freedom to sell electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed to quantify the different costs associated with the new services offered by electrical utilities operating in a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products to elastic market segments on the basis of marginal costs; hence, spot pricing provides the economic structure for many of the new services. At the same time, the pricing is influenced by the uncertainties associated with the electric system state variables that define its operating point. In this paper, nodal probabilistic short run marginal costs are calculated, considering the load, the production cost and the availability of generators as random variables. The effect of the electrical network is evaluated using linearized models. A thermal economic dispatch is used to simulate each operating condition generated by the Monte Carlo method on a small fictitious power system, in order to assess the effect of the random variables on energy trading. This is first carried out by introducing each random variable one at a time, and finally by considering the random interaction of all of them.
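
    As a hedged illustration only, the sketch below samples load, unit availability and production cost for a fictitious three-unit system and collects the distribution of the marginal (price-setting) cost from a simple merit-order dispatch; it ignores the network model and the thermal economic dispatch used in the paper, and all figures are invented.

        import numpy as np

        rng = np.random.default_rng(5)

        # Fictitious units: (capacity MW, mean marginal cost $/MWh, availability probability)
        units = [(300.0, 18.0, 0.95), (250.0, 32.0, 0.90), (200.0, 55.0, 0.85)]
        n_sim = 50_000
        marginal_costs = []

        for _ in range(n_sim):
            load = rng.normal(500.0, 60.0)                     # uncertain system load
            # Sample availability and a perturbed production cost for each unit.
            fleet = [(cap, rng.normal(cost, 0.1 * cost))
                     for cap, cost, p_avail in units if rng.uniform() < p_avail]
            fleet.sort(key=lambda uc: uc[1])                   # merit order: cheapest first
            remaining, price = load, np.nan
            for cap, cost in fleet:
                if remaining > 0:
                    price = cost                               # last dispatched unit sets the SRMC
                    remaining -= cap
            if remaining <= 0:                                 # keep only scenarios where load is served
                marginal_costs.append(price)

        marginal_costs = np.array(marginal_costs)
        print(f"served scenarios: {len(marginal_costs)} of {n_sim}")
        print(f"mean SRMC: {marginal_costs.mean():.1f} $/MWh, "
              f"90% interval: {np.percentile(marginal_costs, [5, 95])}")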

  3. Analysis of carbon deposition on the first wall of LHD by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Deposition of impurities on surfaces of plasma confinement devices is one of the essential issues in present devices and also in future fusion devices. In the Large Helical Device (LHD), it is necessary to reveal the fundamental characteristics of impurity transport and deposition by simulation studies along with experimental studies. In the present paper, the simulation scheme for carbon deposition on the first wall of LHD and its results are discussed. The geometry of the LHD divertor and the configuration of the plasma are newly implemented in the Monte Carlo code ERO. The profiles of the background plasma are calculated numerically by a 1D two-fluid model along a magnetic field line. Spatial distributions of the carbon impurities are investigated for a typical set of plasma parameters in LHD. The simulation results indicate that the deposition is caused by neutral carbon particles from two facing divertor plates. The divertor opposite the first wall makes a smaller contribution than the adjacent one because of ionization in the divertor plasma. Chemically sputtered impurities cause more deposition near the divertor than physically sputtered ones, because the atomic processes of methane molecules lead to isotropic particle velocities. (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  4. Economic analysis using Monte Carlo simulation on Xs reservoir Badak field east Kalimantan

    International Nuclear Information System (INIS)

    Badak field, located in the delta of the Mahakam river in East Kalimantan, is a gas producer. The field was discovered in 1972 by VICO. Badak field is the main gas supplier to the Bontang LNG plant, whose gas is exported to Japan, South Korea and Taiwan, and it also provides the main feed to the East Kalimantan fertilizer plant. To meet the gas demand, field development as well as exploration drilling continues. For these exploration wells, the determination of the gas in place, the gas production rate and the economic evaluation play an important role. The effect of altering the gas production rate on the net present value, and the effect of altering the discount factor on the rate of return curve, are presented in this paper using Monte Carlo simulation. Based on the simulation results, the upper limit of the initial gas in place is 1.82 BSCF and the lower limit is 0.27 BSCF; the most likely net present value, in million US$, corresponds to a rate of return ranging from -30 to 33.5 percent.

  5. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT

    International Nuclear Information System (INIS)

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, 241AmBe and 252Cf, which are stored in a water pool and placed on the calibration bench using remotely controlled positioning systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source models included the double steel encapsulation as cladding. To determine the effect of the different components of the room, the neutron spectra, the total fluence and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the fluence and the ambient dose equivalent rate. (Author)

  6. Comparison of Sensitivity Analysis Techniques in Monte Carlo Codes for Multi-Region Criticality Calculations

    International Nuclear Information System (INIS)

    Recently, sensitivity and uncertainty (S/U) techniques have been used to determine the area of applicability (AOA) of critical experiments used for code and data validation. These techniques require the computation of energy-dependent sensitivity coefficients for multiple reaction types for every nuclide in each system included in the validation. The sensitivity coefficients, as used for this application, predict the relative change in the system multiplication factor due to a relative change in a given cross-section data component or material number density. Thus, a sensitivity coefficient, S, for some macroscopic cross section, Σ, is expressed as S = (Σ/k)(∂k/∂Σ), where k is the effective neutron multiplication factor for the system. The sensitivity coefficient for the density of a material is equivalent to that of the total macroscopic cross section. Two distinct techniques have been employed in Monte Carlo radiation transport codes for the computation of sensitivity coefficients. The first, and most commonly employed, is the differential sampling technique. The second is the adjoint-based perturbation theory approach. This paper briefly describes each technique and presents the results of a simple test case, pointing out discrepancies in the computed results and proposing a remedy to these discrepancies

  7. Markov chain Monte Carlo based analysis of post-translationally modified VDAC1 gating kinetics

    Directory of Open Access Journals (Sweden)

    Shivendra eTewari

    2015-01-01

    The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites of up to 5 kDa) across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs), yet the nature and effect of these modifications are not understood. Herein, single channel currents of wild-type, nitrosated and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method describes three distinct conducting states (open, half-open, and closed) of VDAC1 activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. The experimental data show significant alterations in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. The stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, the stationary distributions of the models associated with un-phosphorylated and phosphorylated VDAC suggest a reversal of the channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that the faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.

  8. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    Science.gov (United States)

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon
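
    The generic idea of tying Monte Carlo input distributions to inventory-style standard errors can be sketched as follows; the inputs, values and the simple product model are invented for illustration and do not represent the ForCaMF system or the PDF-weaving procedure itself.

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical inputs with design-based (inventory-style) standard errors.
        # Each input PDF is chosen so its spread reproduces the reported SE, which is
        # the essential point of linking Monte Carlo inputs to inventory uncertainty.
        inputs = {
            "forest_area_ha":        (120_000.0, 4_000.0),   # (mean, standard error)
            "carbon_density_t_ha":   (95.0, 6.0),
            "disturbance_loss_frac": (0.12, 0.03),
        }

        n_sim = 100_000
        draws = {k: rng.normal(m, se, n_sim) for k, (m, se) in inputs.items()}

        # Output: carbon stored after accounting for disturbance losses (tonnes C).
        carbon = (draws["forest_area_ha"] * draws["carbon_density_t_ha"]
                  * (1.0 - draws["disturbance_loss_frac"]))

        print(f"mean carbon stock: {carbon.mean()/1e6:.2f} Mt C")
        print(f"Monte Carlo 95% interval (Mt C): {np.percentile(carbon, [2.5, 97.5])/1e6}")
        # Sanity check: each simulated input reproduces its inventory-based SE.
        for k, (m, se) in inputs.items():
            print(k, f"simulated SE = {draws[k].std():.3g} (target {se})")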

  9. Bayesian Analysis of Multivariate Probit Models

    OpenAIRE

    Siddhartha Chib; Edward Greenberg

    1996-01-01

    This paper provides a unified simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Markov chain Monte Carlo version of the E-M algorithm. Computation of Bayes factors from the simulation output is also considered. The methods are applied to a bivariate data set, to a 534-subject, four-year longitudinal dat...

  10. Investigation of the Power Coefficient of Reactivity of 3D CANDU Reactor through Detailed Monte Carlo Analysis

    International Nuclear Information System (INIS)

    The heat is removed by the heavy water coolant, which is completely separated from the stationary moderator. Owing to the good neutron economy of the CANDU reactor, natural uranium fuel is used without enrichment. Because of this unique core configuration, there is less resonance absorption of neutrons in the fuel, which leads to a relatively small fuel temperature coefficient (FTC). The value of the FTC can even be positive due to the 239Pu buildup during fuel depletion and also the neutron up-scattering by the oxygen atoms in the fuel. Unlike the pressurized light water reactor, CANDU-6 is well known to have a positive coolant void reactivity (CVR) and coolant temperature coefficient (CTC). In traditional reactor analysis, the asymptotic scattering kernel has been used, which neglects the thermal motion of nuclides such as U-238. However, it is well accepted that the thermal movement of the target can affect the scattering reaction in the vicinity of a scattering resonance and enhance neutron capture by the capture resonance. Some recent works have revealed that the thermal motion of U-238 affects the scattering reaction and that the resulting Doppler broadening of the scattering resonances enhances the FTC of thermal reactors, including PWRs, by 10-15%. In order to observe the impact of the Doppler broadening of the scattering resonances on the criticality and the FTC, a recent investigation was carried out for a clean and fresh CANDU fuel lattice using the Monte Carlo code MCNPX. In ref. 3 the so-called DBRC (Doppler Broadened Rejection Correction) method was adopted to consider the thermal movement of U-238. In this study, the safety parameters of CANDU-6 are re-evaluated using the continuous-energy Monte Carlo code SERPENT 2, which uses the DBRC method to simulate the thermal motion of U-238. The analysis is performed for a full 3-D CANDU-6 core and the PCR is evaluated near equilibrium burnup. For a high-fidelity Monte Carlo calculation

  11. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    Science.gov (United States)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
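
    A simplified sketch of the overall workflow (synthetic loss data, a weighted Monte Carlo resampling step, and a logistic damage-curve fit) is given below; the latent damage relation, the resampling weights and the use of scikit-learn are assumptions made for illustration and do not reproduce the WMCLR code described in the abstract.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)

        # Step 1: synthetic "expert questionnaire" style data; a latent logistic relation
        # between damage and (depth, velocity) is assumed purely for illustration.
        n = 2_000
        depth = rng.uniform(0.0, 3.0, n)        # floodwater depth [m]
        velocity = rng.uniform(0.0, 2.5, n)     # flow velocity [m/s]
        logit = -4.0 + 2.2 * depth + 1.1 * velocity
        damaged = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

        # Step 2: weighted Monte Carlo resampling -- draw an enlarged synthetic sample
        # whose depth distribution follows a chosen target (here triangular) density
        # rather than the uniform design of the questionnaire scenarios.
        target_pdf = np.where(depth < 1.0, depth, (3.0 - depth) / 2.0)
        weights = target_pdf / target_pdf.sum()
        idx = rng.choice(n, size=5 * n, replace=True, p=weights)
        X, y = np.column_stack([depth, velocity])[idx], damaged[idx]

        # Step 3: fit the logistic damage curve and read it off at a fixed velocity.
        model = LogisticRegression().fit(X, y)
        grid = np.column_stack([np.linspace(0, 3, 7), np.full(7, 1.0)])  # velocity = 1 m/s
        for d, p in zip(grid[:, 0], model.predict_proba(grid)[:, 1]):
            print(f"depth {d:.1f} m -> damage probability {p:.2f}")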

  12. Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations

    International Nuclear Information System (INIS)

    The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% on the confidence level of 95%. This accuracy is difficult to achieve with kerma area product (KAP) meters currently used in clinics. The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method introduced by Poeyry, Komppa and Kosunen in 2005 has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of back scatter. For the other position, which is recommended for the beam-area calibration method, the distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, PKA, for the two (close and distant) reference planes. It was found that PKA values for the tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X ray beam in air. Since the problem with high uncertainties in PKA measurements is also caused by the current design of X ray machines, possible solutions are discussed. (author)

  13. BOT3P: a mesh generation software package for transport analysis with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    BOT3P consists of a set of standard Fortran 77 language programs that gives the users of the deterministic transport codes DORT, TORT, TWODANT, THREEDANT, PARTISN and the sensitivity code SUSD3D some useful diagnostic tools to prepare and check the geometry of their input data files for both Cartesian and cylindrical geometries, including graphical display modules. Users can produce the geometrical and material distribution data for all the cited codes for both two-dimensional and three-dimensional applications and, only in 3-dimensional Cartesian geometry, for the Monte Carlo Transport Code MCNP, starting from the same BOT3P input. Moreover, BOT3P stores the fine mesh arrays and the material zone map in a binary file, the content of which can be easily interfaced to any deterministic and Monte Carlo transport code. This makes it possible to compare directly for the same geometry the effects stemming from the use of different data libraries and solution approaches on transport analysis results. BOT3P Version 5.0 lets users optionally and with the desired precision compute the area/volume error of material zones with respect to the theoretical values, if any, because of the stair-cased representation of the geometry, and automatically update material densities on the whole zone domains to conserve masses. A local (per mesh) density correction approach is also available. BOT3P is designed to run on Linux/UNIX platforms and is publicly available from the Organization for Economic Cooperation and Development (OECD/NEA)/Nuclear Energy Agency Data Bank. Through the use of BOT3P, radiation transport problems with complex 3-dimensional geometrical structures can be modelled easily, as a relatively small amount of engineer-time is required and refinement is achieved by changing few parameters. This tool is useful for solving very large challenging problems, as successfully demonstrated not only in some complex neutron shielding and criticality benchmarks but also in a power

  14. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent.

    Science.gov (United States)

    Lohse, Konrad; Chmelik, Martin; Martin, Simon H; Barton, Nicholas H

    2016-02-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proved difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple, recursive form, the size of such likelihood calculations explodes quickly with the number of individuals and applications of this framework have so far been mainly limited to small samples (pairs and triplets) for which the GF can be written by hand. Here we investigate several strategies for exploiting the inherent symmetries of the coalescent. In particular, we show that the GF of genealogies can be decomposed into a set of equivalence classes that allows likelihood calculations from nontrivial samples. Using this strategy, we automated blockwise likelihood calculations for a general set of demographic scenarios in Mathematica. These histories may involve population size changes, continuous migration, discrete divergence, and admixture between multiple populations. To give a concrete example, we calculate the likelihood for a model of isolation with migration (IM), assuming two diploid samples without phase and outgroup information. We demonstrate the new inference scheme with an analysis of two individual butterfly genomes from the sister species Heliconius melpomene rosina and H. cydno. PMID:26715666

  15. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT; Analisis Monte Carlo del Laboratorio de Patrones Neutronicos del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Av. Complutense 40, 28040 Madrid (Spain); Guzman G, K. A., E-mail: fermineutron@yahoo.com [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2014-10-15

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, 241AmBe and 252Cf, which are stored in a water pool and placed on the calibration bench using remotely controlled positioning systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source models included the double steel encapsulation as cladding. To determine the effect of the different components of the room, the neutron spectra, the total fluence and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the fluence and the ambient dose equivalent rate. (Author)

  16. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Science.gov (United States)

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. PMID:21764476
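
    A Python analogue of the same Monte Carlo idea (not the Excel/SOLVER workflow of the paper) is sketched below: fit a non-linear model, generate many "virtual" data sets from the fitted model and the residual scatter, refit each one, and read confidence intervals from the spread of the refitted parameters. The growth model, data and noise level are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(8)

        def logistic_growth(t, ymax, rate, lag):
            # A simple growth model of the kind fitted to microbial data.
            return ymax / (1.0 + np.exp(-rate * (t - lag)))

        # Synthetic "experimental" data.
        t = np.linspace(0, 24, 25)
        true = (9.0, 0.6, 8.0)
        y_obs = logistic_growth(t, *true) + rng.normal(0, 0.25, t.size)

        # Initial fit and residual standard deviation.
        p_hat, _ = curve_fit(logistic_growth, t, y_obs, p0=(8.0, 0.5, 6.0))
        resid_sd = np.std(y_obs - logistic_growth(t, *p_hat), ddof=3)

        # Monte Carlo: refit the model to many virtual data sets generated from the fit.
        n_mc = 500
        boot = np.empty((n_mc, 3))
        for i in range(n_mc):
            y_sim = logistic_growth(t, *p_hat) + rng.normal(0, resid_sd, t.size)
            boot[i], _ = curve_fit(logistic_growth, t, y_sim, p0=p_hat)

        for name, est, lo, hi in zip(("ymax", "rate", "lag"), p_hat,
                                     np.percentile(boot, 2.5, axis=0),
                                     np.percentile(boot, 97.5, axis=0)):
            print(f"{name}: {est:.3f}  95% CI [{lo:.3f}, {hi:.3f}]")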

  17. Troubled Conception: Negotiating the Likelihood of Having Children

    Science.gov (United States)

    May, Marian

    2007-01-01

    In the context of low fertility and Australia's ageing population, a national longitudinal telephone survey, "Negotiating the Life Course" (NLC), asks women about their childbearing intentions. This paper uses conversation analysis (CA) to examine interaction between an interviewer and respondents on one NLC question about the likelihood of having…

  18. Monte Carlo analysis of direct measurements of the thermal eta (.025 eV) for 233U and 235U (LWBR development program)

    International Nuclear Information System (INIS)

    Significant inconsistencies have been observed between measured values of eta and of ν, which are related by eta = ν/(1+α). In support of the LWBR program, manganese bath measurements of eta of 233U and 235U employing monoenergetic 0.025 eV neutrons were analyzed using Monte Carlo methods and ENDF-4 cross sections. The calculated (eta*/eta2200) ratios are essentially independent of the values assumed for eta2200. The standard deviation on our calculated values of eta includes Monte Carlo, cross section, and experimental uncertainties. The Monte Carlo analysis was confirmed by calculating measured quantities used by the experimentalists in their reduction of eta* to eta. (4 figures, 12 tables) (U.S.)

  19. In-silico analysis on biofabricating vascular networks using kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    We present a computational modeling approach to study the fusion of multicellular aggregate systems in a novel scaffold-less biofabrication process known as ‘bioprinting’. In this technology, live multicellular aggregates are used as fundamental building blocks to make tissues or organs (collectively known as bio-constructs) via the layer-by-layer deposition technique or other methods; the printed bio-constructs, embedded in maturogens consisting of nutrient-rich bio-compatible hydrogels, are then placed in bioreactors to undergo the cellular aggregate fusion process that forms the desired functional bio-structures. Our approach is an agent-based modeling method, which uses the kinetic Monte Carlo (KMC) algorithm to evolve the cellular system on a lattice. In this method, the cells and the hydrogel media in which they are embedded are coarse-grained to material points on a three-dimensional (3D) lattice, where the cell–cell and cell–medium interactions are quantified by adhesion and cohesion energies. In a multicellular aggregate system with a fixed number of cells and a fixed amount of hydrogel media, where the effects of cell differentiation, proliferation and death are deliberately neglected, the interaction energy is primarily dictated by the interfacial energies between cell and cell and between cell and medium particles on the lattice, based on the differential adhesion hypothesis. By using transition state theory to track the time evolution of the multicellular system while minimizing the interfacial energy, KMC is shown to be an efficient time-dependent simulation tool for studying the evolution of the multicellular aggregate system. In this study, numerical experiments are presented to simulate fusion and cell sorting during the biofabrication of vascular networks, in which the bio-constructs are fabricated according to engineering designs. The results predict the feasibility of fabricating the vascular
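
    A stripped-down kinetic Monte Carlo sketch of the aggregate-fusion idea is shown below: "cell" and "medium" sites on a small periodic lattice exchange places with Arrhenius-type rates computed from the change in interfacial energy, an event is selected with probability proportional to its rate, and time advances by exponentially distributed increments. The lattice size, energies and rates are arbitrary, and the code is an unoptimized two-dimensional illustration rather than the bioprinting model itself.

        import numpy as np

        rng = np.random.default_rng(9)

        N, KT, GAMMA, NU0 = 20, 0.7, 1.0, 1.0   # lattice size, temperature, interfacial energy, attempt frequency

        # Two initially separate rectangular aggregates of "cells" (1) embedded in "medium" (0).
        lattice = np.zeros((N, N), dtype=int)
        lattice[3:9, 6:14] = 1
        lattice[10:16, 6:14] = 1

        def neighbors(i, j):
            return [((i + 1) % N, j), ((i - 1) % N, j), (i, (j + 1) % N), (i, (j - 1) % N)]

        def unlike_bonds(lat, i, j):
            return sum(lat[i, j] != lat[ni, nj] for ni, nj in neighbors(i, j))

        def swap_delta_E(lat, a, b):
            # Energy change of swapping the two unlike sites a and b; only local bonds change.
            before = unlike_bonds(lat, *a) + unlike_bonds(lat, *b)
            lat[a], lat[b] = lat[b], lat[a]
            after = unlike_bonds(lat, *a) + unlike_bonds(lat, *b)
            lat[a], lat[b] = lat[b], lat[a]
            return GAMMA * (after - before)

        def total_interface(lat):
            return int((lat != np.roll(lat, 1, 0)).sum() + (lat != np.roll(lat, 1, 1)).sum())

        print("initial interfacial bonds:", total_interface(lattice))
        t = 0.0
        for _ in range(1500):
            # Enumerate candidate swap events (every unlike nearest-neighbor pair, counted once).
            events, rates = [], []
            for i in range(N):
                for j in range(N):
                    for nb in ((i, (j + 1) % N), ((i + 1) % N, j)):
                        if lattice[i, j] != lattice[nb]:
                            dE = swap_delta_E(lattice, (i, j), nb)
                            events.append(((i, j), nb))
                            rates.append(NU0 * np.exp(-max(dE, 0.0) / KT))  # Arrhenius-type rate
            rates = np.array(rates)
            k = rng.choice(len(events), p=rates / rates.sum())   # pick an event proportional to its rate
            a, b = events[k]
            lattice[a], lattice[b] = lattice[b], lattice[a]
            t += rng.exponential(1.0 / rates.sum())              # KMC time increment

        print(f"KMC time {t:.1f}, final interfacial bonds: {total_interface(lattice)}")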

  20. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    OpenAIRE

    Jensen Just; Madsen Per; Sorensen Daniel; Klemetsdal Gunnar; Heringstad Bjørg; Øegård Jørgen; Gianola Daniel; Detilleux Johann

    2004-01-01

    Abstract A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out ...

  1. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    OpenAIRE

    Gianola, Daniel; Ødegaard, Jørgen; Heringstad, B; Klemetsdal, G; Sorensen, Daniel; Madsen, Per; Jensen, Just; Detilleux, J

    2004-01-01

    A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gib...
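
    The Monte Carlo EM idea can be illustrated with a much simpler model than the one above: in the sketch below, a one-dimensional two-component Gaussian mixture is fitted by replacing the exact E-step with an average over simulated latent component labels. The data, starting values and Monte Carlo sample size are arbitrary choices made for illustration, not the random-effects model of the record.

        import numpy as np

        rng = np.random.default_rng(10)

        # Simulated data from a two-component mixture (e.g., "healthy" vs "elevated" scores).
        n = 2000
        z_true = rng.uniform(size=n) < 0.3
        y = np.where(z_true, rng.normal(5.0, 1.2, n), rng.normal(2.0, 0.8, n))

        # Initial parameter guesses: mixing proportion, means and standard deviations.
        pi1, mu0, sd0, mu1, sd1 = 0.5, 1.0, 1.0, 4.0, 1.0
        n_draws = 50          # Monte Carlo sample size per E-step

        for it in range(60):
            # Monte Carlo E-step: draw latent labels from their conditional distribution
            # and average, instead of using the exact posterior responsibilities.
            p1 = pi1 * np.exp(-0.5 * ((y - mu1) / sd1) ** 2) / sd1
            p0 = (1 - pi1) * np.exp(-0.5 * ((y - mu0) / sd0) ** 2) / sd0
            resp = p1 / (p0 + p1)
            z_draws = rng.uniform(size=(n_draws, n)) < resp     # simulated label sets
            w1 = z_draws.mean(axis=0)                           # Monte Carlo responsibility
            # M-step: weighted maximum likelihood updates.
            pi1 = w1.mean()
            mu1 = np.sum(w1 * y) / w1.sum()
            mu0 = np.sum((1 - w1) * y) / (1 - w1).sum()
            sd1 = np.sqrt(np.sum(w1 * (y - mu1) ** 2) / w1.sum())
            sd0 = np.sqrt(np.sum((1 - w1) * (y - mu0) ** 2) / (1 - w1).sum())

        print(f"pi1={pi1:.2f}, mu0={mu0:.2f}, sd0={sd0:.2f}, mu1={mu1:.2f}, sd1={sd1:.2f}")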

  2. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence

    Science.gov (United States)

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The “missing heritability” has been suggested to be due to lack of studies focused on epistasis, also called gene–gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficient detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called “Epistasis Test in Meta-Analysis” (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates of ETMA, individual data analysis and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene–gene interactions in the renin–angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html]. PMID:27045371

  3. BOOTSTRAPPING AND MONTE CARLO METHODS OF POWER ANALYSIS USED TO ESTABLISH CONDITION CATEGORIES FOR BIOTIC INDICES

    Science.gov (United States)

    Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically, the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...
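
    A minimal sketch of how bootstrapping can inform the number of defensible condition categories is given below; the replicate index scores, significance level and power target are invented for illustration and are not taken from the record.

        import numpy as np

        rng = np.random.default_rng(11)

        # Hypothetical index data: 40 reference sites on a 0-100 index, 3 replicate visits each.
        site_means = rng.uniform(35.0, 90.0, 40)
        site_scores = site_means[:, None] + rng.normal(0.0, 6.0, size=(40, 3))

        # Bootstrap the typical within-site (replicate) standard deviation of the index.
        within_sd = site_scores.std(axis=1, ddof=1)
        n_boot = 5000
        boot_sd = np.array([within_sd[rng.integers(0, within_sd.size, within_sd.size)].mean()
                            for _ in range(n_boot)])

        # Minimum detectable difference between two single-visit scores (alpha = 0.05, power = 0.80),
        # then the number of condition categories the 0-100 index range can support.
        z_alpha, z_power = 1.96, 0.84
        mdd = (z_alpha + z_power) * np.sqrt(2.0) * boot_sd.mean()
        n_categories = int(100.0 // mdd)

        print(f"bootstrap estimate of replicate SD: {boot_sd.mean():.1f} "
              f"(95% CI {np.percentile(boot_sd, [2.5, 97.5])})")
        print(f"minimum detectable difference: {mdd:.1f} index units")
        print(f"number of distinguishable condition categories: {n_categories}")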

  4. Likelihood-ratio ranking of gravitational-wave candidates in a non-Gaussian background

    OpenAIRE

    Biswas, R.; Brady, P.; Burguet-Castell, J.; Cannon, K.; J. Clayton; Dietz, A; Fotopoulos, N.; Goggin, L.; Keppel, D.; Pankow, C.; Price, L.; Vaulin, R.

    2012-01-01

    We describe a general approach to detection of transient gravitational-wave signals in the presence of non-Gaussian background noise. We prove that under quite general conditions, the ratio of the likelihood of observed data to contain a signal to the likelihood of it being a noise fluctuation provides optimal ranking for the candidate events found in an experiment. The likelihood-ratio ranking allows us to combine different kinds of data into a single analysis. We apply the gener...

  5. The Maximum Likelihood Threshold of a Graph

    OpenAIRE

    Gross, Elizabeth; Sullivant, Seth

    2014-01-01

    The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $n-1$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...

  6. A primer on applying Monte Carlo simulation, real options analysis, knowledge value added, forecasting, and portfolio optimization / by Johnathan Mun, Thomas Housel.

    OpenAIRE

    Mun, Johnathan; Housel, Thomas

    2010-01-01

    In this quick primer, advanced quantitative risk-based concepts will be introduced--namely, the hands-on applications of Monte Carlo simulation, real options analysis, stochastic forecasting, portfolio optimization, and knowledge value added. These methodologies rely on common metrics and existing techniques (e.g., return on investment, discounted cash flow, cost-based analysis, and so forth), and complement these traditional techniques by pushing the envelope of analytics, not replacing them...

  7. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
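    The following toy sketch illustrates the Monte Carlo EM idea behind the method (simulate the unobserved data given the current parameters, then maximize the completed-data likelihood) on a deliberately simple problem: right-censored exponential lifetimes. The rare-event simulation and modified cross-entropy components of MCEM2 are not reproduced, and all numbers are assumptions.

```python
# Toy Monte Carlo EM for right-censored exponential lifetimes (illustration of
# the MCEM idea only; MCEM2 in the paper adds rare-event simulation and a
# modified cross-entropy step for stochastic biochemical models).
import numpy as np

rng = np.random.default_rng(1)
true_rate, c, n = 0.7, 2.0, 500
t = rng.exponential(1.0 / true_rate, n)
censored = t > c
y = np.where(censored, c, t)          # observed data: min(t, c)

rate = 1.0                            # initial guess (assumed)
for _ in range(50):
    # E-step (Monte Carlo): impute censored lifetimes from the conditional
    # distribution t | t > c, which is c + Exp(rate) by memorylessness.
    m = 200                           # Monte Carlo samples per censored record
    imputed = c + rng.exponential(1.0 / rate, (m, censored.sum()))
    expected_total = y[~censored].sum() + imputed.mean(axis=0).sum()
    # M-step: closed-form exponential MLE given the expected complete data
    rate = n / expected_total

print("estimated rate:", rate, "true rate:", true_rate)
```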

  8. Factor Analysis with Ordinal Indicators: A Monte Carlo Study Comparing DWLS and ULS Estimation

    Science.gov (United States)

    Forero, Carlos G.; Maydeu-Olivares, Alberto; Gallardo-Pujol, David

    2009-01-01

    Factor analysis models with ordinal indicators are often estimated using a 3-stage procedure where the last stage involves obtaining parameter estimates by least squares from the sample polychoric correlations. A simulation study involving 324 conditions (1,000 replications per condition) was performed to compare the performance of diagonally…

  9. Likelihood smoothing using gravitational wave surrogate models

    CERN Document Server

    Cole, Robert H

    2014-01-01

    Likelihood surfaces in the parameter space of gravitational wave signals can contain many secondary maxima, which can prevent search algorithms from finding the global peak and correctly mapping the distribution. Traditional schemes to mitigate this problem maintain the number of secondary maxima and thus retain the possibility that the global maximum will remain undiscovered. By contrast, the recently proposed technique of likelihood transform can modify the structure of the likelihood surface to reduce its complexity. We present a practical method to carry out a likelihood transform using a Gaussian smoothing kernel, utilising gravitational wave surrogate models to perform the smoothing operation analytically. We demonstrate the approach with Newtonian and post-Newtonian waveform models for an inspiralling circular compact binary.
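    As a rough numerical illustration of a likelihood transform, the sketch below tabulates an assumed multimodal likelihood on a one-dimensional parameter grid, smooths it with a Gaussian kernel, and counts local maxima before and after. The paper performs the smoothing analytically via surrogate waveform models; this grid-based version only shows the qualitative effect of reducing secondary maxima.

```python
# Grid-based Gaussian smoothing of an assumed multimodal likelihood
# (a numerical stand-in for the analytic likelihood transform in the paper).
import numpy as np
from scipy.ndimage import gaussian_filter1d

theta = np.linspace(0.0, 10.0, 2001)
# Assumed toy likelihood: broad Gaussian envelope times an oscillation,
# giving a global peak surrounded by many secondary maxima.
loglike = -0.5 * ((theta - 6.0) / 1.0) ** 2 + 2.0 * np.cos(8.0 * theta)
like = np.exp(loglike - loglike.max())

smoothed = gaussian_filter1d(like, sigma=160)   # kernel width ~0.8 in theta

def n_local_maxima(y):
    """Count strict interior local maxima of a sampled curve."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

print("local maxima before smoothing:", n_local_maxima(like))
print("local maxima after smoothing:", n_local_maxima(smoothed))
```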

  10. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of them are constrained and correlated.

  11. Analysis of Unknown Radioactive Samples Spectra by Using (K0IAEA) Monte Carlo Code

    International Nuclear Information System (INIS)

    The study and measurement of gamma-ray energies emitted from radionuclides is a very important field of radiation physics, with many applications in different fields of science such as the study of nuclear structure, the identification of radioisotopes and their activities, the estimation of absorbed doses, and the determination of nuclear reaction cross sections. New developments in gamma-ray spectrometry have expanded and have been applied in diverse fields such as astrophysics and medical therapy, for which highly accurate measurements of gamma-rays are needed. This has been achieved by tracing the interaction of gamma-rays in semiconductor and scintillation detectors and the energy deposited within them. This thesis is concerned with the detector Full Energy Peak Efficiency (FEPE). The peak efficiency considers only those interactions that deposit the full energy of the incident radiation and are counted in a differential pulse height distribution. These full-energy events are normally evidenced by a peak that appears at the highest end of the spectrum, while events that deposit only part of the incident radiation energy appear farther to the left in the spectrum. The number of full-energy events can be obtained by simply integrating the total area under the peak. In this work, two identical isotropic neutron sources of Am-Be type, each having an activity of about 175 GBq, were used for irradiating and analyzing unknown samples. Two types of foils, 175In and 197Au, were used for monitoring the thermal neutron flux by the foil activation method. A hyper-pure germanium (HPGe) detector was used in view of its good energy resolution and good signal-to-noise ratio. The γ-lines with the highest intensity were selected and the induced-activity analysis was done using the Genie 2000 software. The analysis of the γ-spectra was carried out using the K0-IAEA and ETNA programs. The validity of the analysis was confirmed by neutron activation analysis of known and unknown samples.

  12. An Abstract Monte-Carlo Method for the Analysis of Probabilistic Programs

    OpenAIRE

    Monniaux, David

    2007-01-01

    We introduce a new method, combination of random testing and abstract interpretation, for the analysis of programs featuring both probabilistic and non-probabilistic nondeterminism. After introducing "ordinary" testing, we show how to combine testing and abstract interpretation and give formulas linking the precision of the results to the number of iterations. We then discuss complexity and optimization issues and end with some experimental results.

  13. Likelihood free inference for Markov processes: a comparison.

    Science.gov (United States)

    Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S

    2015-04-01

    Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space. PMID:25720092
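    A minimal ABC rejection sampler conveys the "likelihood free" idea discussed above: draw parameters from the prior, forward-simulate the stochastic model, and keep draws whose simulated output lies close to the observation. The sketch uses a pure-birth process with a Gillespie simulator rather than the Lotka-Volterra or Schlögl systems of the paper, and the prior range and tolerance are arbitrary choices.

```python
# Minimal ABC rejection sketch for a stochastic kinetic model: a pure-birth
# process simulated with the Gillespie algorithm (illustration only; not the
# ABC or likelihood-free MCMC schemes compared in the paper).
import numpy as np

rng = np.random.default_rng(2)

def gillespie_pure_birth(b, x0=5, t_end=2.0):
    """Simulate X(t_end) for a pure-birth process with per-capita rate b."""
    x, t = x0, 0.0
    while True:
        t += rng.exponential(1.0 / (b * x))   # waiting time to the next birth
        if t > t_end:
            return x
        x += 1

true_b = 0.9
x_obs = gillespie_pure_birth(true_b)          # "observed" final population

accepted = []
eps = 3                                       # ABC tolerance (assumed)
for _ in range(20000):
    b = rng.uniform(0.1, 2.0)                 # draw from a uniform prior
    if abs(gillespie_pure_birth(b) - x_obs) <= eps:
        accepted.append(b)

print("accepted draws:", len(accepted), "ABC posterior mean for b:", np.mean(accepted))
```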

  14. Comparative and sensitive analysis for parabolic trough solar collectors with a detailed Monte Carlo ray-tracing optical model

    International Nuclear Information System (INIS)

    Highlights: • It is to present comparative and sensitive analysis for PTCs with the MCRT method. • A detailed PTC optical model was developed based on a novel unified MCRT model. • Reference data determined by the divergence effect is useful to design a better PTC. • Different PTCs have different levels of sensitivity to different optical errors. • There are no contradictions between accuracy requirements of different parameters. - Abstract: This paper presents the numerical results of the comparative and sensitive analysis for different parabolic trough solar collector (PTC) systems under different operating conditions, expecting to optimize the PTC system of better comprehensive characteristics and optical performance or to evaluate the accuracy required for future constructions. A more detailed optical model was developed from a previously proposed unified Monte Carlo ray-tracing (MCRT) model. Numerical results were compared with the reference data and good agreements were obtained, proving that the model and the numerical results are reliable. Then the comparative and sensitive analyses for different PTC systems or different geometric parameters under different possible operating conditions were carried out by this detailed optical model. From the numerical results it is revealed that the ideal comprehensive characteristics and optical performance of the PTC systems are very different from some critical points determined by the divergence phenomenon of the non-parallel solar beam, which can also be well explained by the theoretical analysis results. For different operating conditions, the PTC systems of different geometric parameters have different levels of sensitivity to different optical errors, but the optical accuracy requirements from different geometric parameters of the whole PTC system are always consistent

  15. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    Science.gov (United States)

    Olivero, P.; Forneris, J.; Gamarra, P.; Jakšić, M.; Giudice, A. Lo; Manfredotti, C.; Pastuović, Ž.; Skukan, N.; Vittone, E.

    2011-10-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the ion beam induced charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam and the charge pulses have been recorded as function of incident proton position with a spatial resolution of 2 μm. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region, the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn's weighting potential by solving the electrostatic problem by finite element method and hence evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combination of these two computational methods enabled an exhaustive interpretation of the experimental profiles and allowed an accurate evaluation both of the electrical characteristics of the active region (e.g. electric field profiles) and of basic transport parameters (i.e. diffusion length and minority carrier lifetime).

  16. An Analysis on the Characteristic of Multi-response CADIS Method for the Monte Carlo Radiation Shielding Calculation

    International Nuclear Information System (INIS)

    It uses a deterministic method to calculate adjoint fluxes for deciding the parameters used in the variance reduction; this is called the hybrid Monte Carlo method. The CADIS method, however, is limited in its ability to reduce the stochastic errors of all responses. The Forward Weighted CADIS (FW-CADIS) method was introduced to solve this problem: to reduce the overall stochastic errors of the responses, the forward flux is used. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with those of the FW-CADIS method, and it was analyzed how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method minimizes the sum of the squared relative errors over the tally regions to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. With the FW-CADIS method the average of the relative errors was minimized; however, the MR-CADIS method gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative errors of a multi-response problem more efficiently and uniformly than the FW-CADIS method

  17. Analysis of the neutrons dispersion in a semi-infinite medium based in transport theory and the Monte Carlo method

    International Nuclear Information System (INIS)

    In this work a comparative analysis of the results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where a neutron source in beam form (i.e., μ0 = 1) is also located. The neutron dispersion is studied with the Monte Carlo statistical method and with one-dimensional, one-group transport theory. Transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. The dispersion in light water and in heavy water was studied. A first remarkable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water and at less than ten transport mean free paths for light water; the differences between the two methods are larger in the light-water case. A second remarkable result is that both distributions behave similarly at small numbers of mean free paths, whereas at large numbers of mean free paths the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in direction to the high-energy neutron current coming from the source itself. (Author)

  18. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    CERN Document Server

    Olivero, P; Gamarra, P; Jaksic, M; Giudice, A Lo; Manfredotti, C; Pastuovic, Z; Skukan, N; Vittone, E

    2016-01-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the Ion Beam Induced Charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam and the charge pulses have been recorded as function of incident proton position with a spatial resolution of 2 um. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region, the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn's weighting potential by solving the electrostatic problem by finite element method and hence evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combina...

  19. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)

    Science.gov (United States)

    Hansson, Marie; Isaksson, Mats

    2007-04-01

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  20. A Proposal on the Method of Real Uncertainty Estimation in Two Step Monte Carlo Simulation for Residual Radiation Analysis

    International Nuclear Information System (INIS)

    There are many problems related to multi-step Monte Carlo (MC) calculations. The Surface Source Reading (SSR) and Surface Source Writing (SSW) options in MCNP, MC depletion calculations, accelerator shielding analysis using secondary-particle source term calculations, and residual-particle transport calculations caused by activation are examples of such simulations. In these problems, the average values estimated from the MC results of the previous step are used as sources for the MC simulation in the next step. Hence, the uncertainties of the results of the previous step are usually not considered when calculating those of the next-step MC simulation, even though they propagate through the stepwise progression. In this study, a new method using the forward-adjoint calculation and the union tally is proposed for the estimation of the real uncertainty. For activation benchmark problems, the responses and real uncertainties were estimated using the proposed method, and the results were compared with those estimated by the brute-force technique and the adjoint-based approach. The results show that the proposed approach gives accurate results compared with the reference results

  1. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    International Nuclear Information System (INIS)

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference of χ² between the original and proposed sky sample, which is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, both in the high and low signal-to-noise regimes.
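    The record above builds on the signal/power-spectrum Gibbs sampler; a stripped-down analogue is sketched below for the scalar model d = s + n with unknown signal variance C, alternating a draw of s given C with a draw of C given s. The Metropolis-Hastings rescaling move that cures the low signal-to-noise inefficiency is deliberately omitted, so this toy sampler still exhibits the problem the paper addresses, and all numbers are assumptions.

```python
# Toy Gibbs sampler for d = s + n with a scalar signal variance C standing in
# for a power spectrum C_l (no Metropolis-Hastings rescaling move, so this
# sketch still suffers from the low signal-to-noise inefficiency).
import numpy as np

rng = np.random.default_rng(3)
n_pix, C_true, N_noise = 500, 1.0, 4.0           # low signal-to-noise regime
d = rng.normal(0.0, np.sqrt(C_true + N_noise), n_pix)

C = 1.0                                          # initial guess (assumed)
chain = []
for _ in range(5000):
    # Gibbs step 1: draw the signal s given C and the data (Wiener-filter mean)
    var_s = C * N_noise / (C + N_noise)
    mean_s = (C / (C + N_noise)) * d
    s = rng.normal(mean_s, np.sqrt(var_s))
    # Gibbs step 2: draw the variance C given s (Jeffreys prior 1/C)
    C = np.sum(s ** 2) / rng.chisquare(n_pix)
    chain.append(C)

print("posterior mean of C:", np.mean(chain[1000:]))
```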

  2. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Time delay estimation for detecting the leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of elastic waves, estimation of the time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimate of the time delay. The method has been proved in experiments, where it provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
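    The core of the time-arrival-difference method is the cross-correlation peak between two sensor signals; a bare-bones version (without the maximum likelihood frequency window proposed in the paper) might look like the following, with all signal parameters invented for the example.

```python
# Basic cross-correlation time-delay estimate between two leak-like signals
# (the maximum likelihood frequency-domain window of the paper is not
# reproduced; this only illustrates the time-arrival-difference idea).
import numpy as np

rng = np.random.default_rng(4)
fs, n = 1000.0, 4096                  # sampling rate (Hz) and record length
delay_samples = 37                    # true delay (assumed)

source = rng.normal(size=n)
x1 = source + 0.5 * rng.normal(size=n)
x2 = np.roll(source, delay_samples) + 0.5 * rng.normal(size=n)

corr = np.correlate(x2, x1, mode="full")       # lags from -(n-1) to +(n-1)
lag = np.argmax(corr) - (n - 1)

print("estimated delay:", lag / fs, "s; true delay:", delay_samples / fs, "s")
```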

  3. Monte Carlo Few-Group Constant Generation for CANDU 6 Core Analysis

    OpenAIRE

    Seung Yeol Yoo; Hyung Jin Shim; Chang Hyo Kim

    2015-01-01

    The current neutronics design methodology of CANDU-PHWRs based on the two-step calculations requires determining not only homogenized two-group constants for ordinary fuel bundle lattice cells by the WIMS-AECL lattice cell code but also incremental two-group constants arising from the penetration of control devices into the fuel bundle cells by a supercell analysis code like MULTICELL or DRAGON. As an alternative way to generate the two-group constants necessary for the CANDU-PHWR core analys...

  4. Appraisal of Airport Alternatives in Greenland by the use of Risk Analysis and Monte Carlo Simulation

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents an appraisal study of three different airport proposals in Greenland by the use of an adapted version of the Danish CBA-DK model. The assessment model is based on both a deterministic calculation by the use of conventional cost-benefit analysis and a stochastic calculation ... construction cost and the travel time savings. The obtained model results aim to provide an input to informed decision-making based on an account of the level of desired risk as concerns feasibility risks. This level is presented as the probability of obtaining at least a benefit-cost ratio of a specified ...

  5. Efficient scatter modelling for incorporation in maximum likelihood reconstruction

    International Nuclear Information System (INIS)

    The definition of a simplified scatter model which can be incorporated in maximum likelihood reconstruction for single-photon emission tomography (SPET) continues to be appealing; however, the implementation must be efficient for it to be clinically applicable. In this paper an efficient algorithm for scatter estimation is described in which the spatial scatter distribution is implemented as a spatially invariant convolution for points of constant depth in tissue. The scatter estimate is weighted by a space-dependent build-up factor based on the measured attenuation in tissue. Monte Carlo simulation of a realistic thorax phantom was used to validate this approach. Further efficiency was introduced by estimating scatter once, after a small number of iterations of the ordered subsets expectation maximisation (OSEM) reconstruction algorithm. The scatter estimate was then incorporated as a constant term in subsequent iterations rather than being recomputed at each iteration. Monte Carlo simulation was used to demonstrate that the scatter estimate does not change significantly provided at least two iterations of OSEM reconstruction, with a subset size of 8, are used. Complete scatter-corrected reconstruction of 64 projections of 40 x 128 pixels was achieved in 38 min using a Sun Sparc20 computer. (orig.)

  6. The Monte Carlo method for shielding calculations analysis by MORSE code of a streaming case in the CAORSO BWR power reactor shielding (Italy)

    International Nuclear Information System (INIS)

    In the field of shielding, the need for radiation transport calculations under severe conditions, characterized by irreducible three-dimensional geometries, has increased the use of the Monte Carlo method, which has proved to be the only rigorous and appropriate calculational method in such conditions. However, further optimization is still necessary to make the technique practically efficient, despite recent improvements in the Monte Carlo codes, the progress made in computing and the availability of accurate nuclear data. Moreover, personal experience acquired in the field and mastery of sophisticated calculation procedures are of the utmost importance. The aim of the work carried out here is to gather all the elements and features needed for an efficient use of the Monte Carlo method in connection with shielding problems. The study of the general aspects of the method and of the exploitation techniques of the MORSE code, which has proved to be one of the most comprehensive Monte Carlo codes, led to a successful analysis of an actual case: the severe conditions and difficulties met were overcome using this stochastic simulation code. Finally, a critical comparison between calculated and high-accuracy experimental results provided the final confirmation of the methodology used

  7. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. A search of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same die set, product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading errors, and lubrication. At present, few approaches can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design and process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box

  8. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    Science.gov (United States)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water input to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses; this implies that the processes of percolation and evaporation would impact the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model

  9. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for a small-signal stability application in a wind-generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing results for different sample sizes with the IDEAL (conventional) values. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated by LHS for the small-signal stability application reproduce the IDEAL values starting from a sample size of 100. This shows that about 100 LHS samples of the random variables are enough to produce reasonable results for practical purposes in small-signal stability applications. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, compared to SRS, which signifies the robustness of LHS over SRS. A sample size of 100 for LHS produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
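    The contrast between SRS and LHS can be reproduced on a toy problem: the sketch below draws both kinds of samples, evaluates a stand-in response function, and compares the variance of the resulting Monte Carlo estimates over repeated experiments. The eigenvalue-based small-signal stability analysis of the paper is not reproduced, and the sample sizes are arbitrary.

```python
# Sketch comparing Simple Random Sampling (SRS) with Latin Hypercube Sampling
# (LHS) on a toy Monte Carlo estimate (not the power-system study itself).
import numpy as np

rng = np.random.default_rng(5)

def lhs(n, d):
    """Latin hypercube sample of size n in [0, 1]^d: one point per stratum."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.uniform(size=(n, d))) / n

def estimate(samples):
    # Toy response function standing in for an eigenvalue-based stability index
    return np.mean(np.sin(2 * np.pi * samples[:, 0]) + samples[:, 1] ** 2)

n, d, reps = 100, 2, 100
srs_est = [estimate(rng.uniform(size=(n, d))) for _ in range(reps)]
lhs_est = [estimate(lhs(n, d)) for _ in range(reps)]

print("SRS variance of the estimate:", np.var(srs_est))
print("LHS variance of the estimate:", np.var(lhs_est))
```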

  10. Maximum likelihood decay curve fits by the simplex method

    International Nuclear Information System (INIS)

    A multicomponent decay curve analysis technique has been developed and incorporated into the decay curve fitting computer code, MLDS (maximum likelihood decay by the simplex method). The fitting criteria are based on the maximum likelihood technique for decay curves made up of time binned events. The probabilities used in the likelihood functions are based on the Poisson distribution, so decay curves constructed from a small number of events are treated correctly. A simple utility is included which allows the use of discrete event times, rather than time-binned data, to make maximum use of the decay information. The search for the maximum in the multidimensional likelihood surface for multi-component fits is performed by the simplex method, which makes the success of the iterative fits extremely insensitive to the initial values of the fit parameters and eliminates the problems of divergence. The simplex method also avoids the problem of programming the partial derivatives of the decay curves with respect to all the variable parameters, which makes the implementation of new types of decay curves straightforward. Any of the decay curve parameters can be fixed or allowed to vary. Asymmetric error limits for each of the free parameters, which do not consider the covariance of the other free parameters, are determined. A procedure is presented for determining the error limits which contain the associated covariances. The curve fitting procedure in MLDS can easily be adapted for fits to other curves with any functional form. (orig.)
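    A stripped-down version of this fitting strategy, a Poisson log-likelihood maximized with the Nelder-Mead simplex, is sketched below for a single-component decay curve with simulated counts; the multicomponent features, discrete-event option and error-limit analysis of MLDS are not reproduced, and all parameter values are assumptions.

```python
# Minimal Poisson maximum-likelihood fit of a single-component decay curve
# using the Nelder-Mead simplex (in the spirit of, but not identical to, MLDS).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
dt, n_bins = 1.0, 60                       # bin width (s) and number of bins
t = np.arange(n_bins) * dt
A_true, lam_true = 200.0, 0.08             # assumed activity and decay constant

expected = A_true * np.exp(-lam_true * t) * dt
counts = rng.poisson(expected)             # time-binned events

def neg_log_like(p):
    A, lam = p
    if A <= 0 or lam <= 0:
        return np.inf
    mu = A * np.exp(-lam * t) * dt
    # Poisson negative log-likelihood up to a constant (log k! term dropped)
    return np.sum(mu - counts * np.log(mu))

fit = minimize(neg_log_like, x0=[100.0, 0.05], method="Nelder-Mead")
print("fitted A, lambda:", fit.x)
```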

  11. Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo

    Science.gov (United States)

    Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.

    2016-03-01

    Optical imaging in a turbid medium is limited by the multiple scattering a photon undergoes while traveling through the medium. Therefore, optical imaging is unable to provide high-resolution information deep in the medium. In the case of soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering. However, acoustic waves cannot provide medically relevant contrast as well as light can. Hybrid solutions have been applied to use the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as the acoustic radiation force (ARF). ARF induces particle displacement within the medium. The amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium, it undergoes several scattering events and hence follows different scattering paths, which depend on the locations of the particles. Light waves that travel along these paths have different phases (different optical path lengths). ARF displaces scatterers within the acoustic focal volume and changes the optical path length. In addition, the temperature rise due to conversion of absorbed acoustic energy to heat changes the index of refraction and therefore the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.

  12. Diffuse X-ray scattering from benzil, C14H10O2: analysis via automatic refinement of a Monte Carlo model

    International Nuclear Information System (INIS)

    Full text: A recently developed method for fitting a Monte Carlo computer simulation model to observed single crystal diffuse X-ray scattering data has been used to study the diffuse scattering in benzil. The analysis has shown that the diffuse lines, that feature so prominently in the observed diffraction patterns, are due to strong longitudinal displacement correlations transmitted from molecule to molecule via a network of contacts involving hydrogen bonding

  13. Application of the Generalized Likelihood Ratio Test for Detecting Changes in the Mean of Multivariate GARCH Processes

    OpenAIRE

    Bodnar, Olha

    2009-01-01

    Abstract We derive several multivariate control charts to monitor the mean vector of multivariate GARCH processes in the presence of changes, by means of maximizing the generalized likelihood ratio. The presentation is rounded off by a comparative performance study based on extensive Monte Carlo simulations. An empirical illustration shows how the obtained results can be applied to real data.

  14. Coupled neutronic thermo-hydraulic analysis of full PWR core with Monte-Carlo based BGCore system

    International Nuclear Information System (INIS)

    Highlights: → New thermal-hydraulic (TH) feedback module was integrated into the MCNP based depletion system BGCore. → A coupled neutronic-TH analysis of a full PWR core was performed with the upgraded BGCore system. → The BGCore results were verified against those of 3D nodal diffusion code DYN3D. → Very good agreement in major core operational parameters between the BGCore and DYN3D results was observed. - Abstract: BGCore reactor analysis system was recently developed at Ben-Gurion University for calculating in-core fuel composition and spent fuel emissions following discharge. It couples the Monte Carlo transport code MCNP with an independently developed burnup and decay module SARAF. Most of the existing MCNP based depletion codes (e.g. MOCUP, Monteburns, MCODE) tally directly the one-group fluxes and reaction rates in order to prepare one-group cross sections necessary for the fuel depletion analysis. BGCore, on the other hand, uses a multi-group (MG) approach for generation of one group cross-sections. This coupling approach significantly reduces the code execution time without compromising the accuracy of the results. Substantial reduction in the BGCore code execution time allows consideration of problems with much higher degree of complexity, such as introduction of thermal hydraulic (TH) feedback into the calculation scheme. Recently, a simplified TH feedback module, THERMO, was developed and integrated into the BGCore system. To demonstrate the capabilities of the upgraded BGCore system, a coupled neutronic TH analysis of a full PWR core was performed. The BGCore results were compared with those of the state of the art 3D deterministic nodal diffusion code DYN3D. Very good agreement in major core operational parameters including k-eff eigenvalue, axial and radial power profiles, and temperature distributions between the BGCore and DYN3D results was observed. This agreement confirms the consistency of the implementation of the TH feedback module

  15. An Analysis of Spherical Particles Distribution Randomly Packed in a Medium for the Monte Carlo Implicit Modeling

    International Nuclear Information System (INIS)

    In this study, as a preliminary step toward developing a high-accuracy implicit method, the distribution characteristics of spherical particles were evaluated using explicit modeling techniques at various volume packing fractions. The study was performed to evaluate the implicitly simulated distribution of randomly packed spheres in a medium. First, an explicit modeling method to simulate randomly packed spheres in a hexahedral medium was proposed. The distribution characteristics of lp and rp, which are used in the particle position sampling, were estimated. The analysis shows that using the direct exponential distribution, which is generally used in implicit modeling, can bias the distribution of the spheres. It is expected that the findings of this study can be used to improve the accuracy of the implicit method. Spherical particles randomly distributed in a medium are used in radiation shields, fusion reactor blankets, and the fuels of VHTR reactors. Because of the difficulty of simulating the stochastic distribution, the Monte Carlo (MC) method has mainly been considered as the tool for analyzing particle transport. For MC modeling of spherical particles, three methods are known: repeated structures, explicit modeling, and implicit modeling. The implicit method (also called the track-length sampling method) samples each spherical geometry (or the track length within a sphere) during the MC simulation. Implicit modeling has the advantages of high computational efficiency and user convenience; however, it has lower modeling accuracy in various finite media

  16. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  17. Sequential Generalized Likelihood Ratio Tests for Vaccine Safety Evaluation

    OpenAIRE

    Shih, Mei-Chiung; Lai, Tze Leung; Heyse, Joseph F; Chen, Jie

    2010-01-01

    The evaluation of vaccine safety involves pre-clinical animal studies, pre-licensure randomized clinical trials and post-licensure safety studies. Sequential design and analysis are of particular interest because they allow early termination of the trial or quick detection that the vaccine exceeds a prescribed bound on the adverse event rate. After a review of recent developments in this area, we propose a new class of sequential generalized likelihood ratio tests for evaluating adverse event...

  18. RunMC - an object-oriented analysis framework for Monte Carlo simulation of high-energy particle collisions

    CERN Document Server

    Chekanov, S

    2005-01-01

    RunMC is an object-oriented framework aimed at generating and analysing high-energy collisions of elementary particles using Monte Carlo simulations. The package, based on C++ (adopted by CERN as the main programming language for the LHC experiments), provides a common interface to different Monte Carlo models using modern physics libraries. Physics calculations (projects) can easily be loaded and saved as external modules, which simplifies the development of complicated calculations for high-energy physics in large collaborations. This desktop program is open-source licensed and is available on the Linux and Windows/Cygwin platforms.

  19. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    ... measurement errors. Integrated volatility is an example of this type of observation. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works ...

  20. Degeneracies in sky localization determination from a spinning coalescing binary through gravitational wave observations: a Markov-chain Monte Carlo analysis for two detectors

    International Nuclear Information System (INIS)

    Gravitational-wave signals from inspirals of binary compact objects (black holes and neutron stars) are primary targets of the ongoing searches by ground-based gravitational-wave interferometers (LIGO, Virgo and GEO-600). We present parameter-estimation simulations for inspirals of black-hole-neutron-star binaries using Markov-chain Monte Carlo methods. As a specific example of the power of these methods, we consider source localization in the sky and analyze the degeneracy in it when data from only two detectors are used. We focus on the effect that the black-hole spin has on the localization estimation. We also report on a comparative Markov-chain Monte Carlo analysis with two different waveform families, at 1.5 and 3.5 post-Newtonian orders.

  1. Prompt γ-ray activation analysis of Martian analogues at the FRM II neutron reactor and the verification of a Monte Carlo planetary radiation environment model

    International Nuclear Information System (INIS)

    Planetary radiation environment modelling is important to assess the habitability of a planetary body. It is also useful when interpreting the γ-ray data produced by natural emissions from radioisotopes or prompt γ-ray activation analysis. γ-ray spectra acquired in orbit or in-situ by a suitable detector can be converted into meaningful estimates of the concentration of certain elements on the surface of a planet. This paper describes the verification of a Monte Carlo model developed using the MCNPX code at University of Leicester. The model predicts the performance of a geophysical package containing a γ-ray spectrometer operating at a depth of up to 5 m. The experimental verification of the Monte Carlo model was performed at the FRM II facility in Munich, Germany. The paper demonstrates that the model is in good agreement with the experimental data and can be used to model the performance of an in-situ γ-ray spectrometer.

  2. CosmoPMC: Cosmology Population Monte Carlo

    CERN Document Server

    Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren

    2011-01-01

    We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
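    The adaptive importance-sampling loop at the heart of PMC can be illustrated in a few lines: sample from a proposal, weight against the target, and refit the proposal to the weighted sample. The toy below uses a single Gaussian proposal and a one-dimensional made-up target, whereas CosmoPMC works with mixture proposals, full cosmological likelihoods and MPI parallelism, none of which is reproduced here.

```python
# Toy Population Monte Carlo (adaptive importance sampling) iteration for a
# one-dimensional assumed "posterior"; not the CosmoPMC implementation.
import numpy as np

rng = np.random.default_rng(7)

def log_target(x):                      # assumed toy posterior: N(2, 0.5^2)
    return -0.5 * ((x - 2.0) / 0.5) ** 2

mu, sigma, n = 0.0, 3.0, 5000           # initial Gaussian proposal (assumed)
for it in range(5):
    x = rng.normal(mu, sigma, n)
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    w = np.exp(log_target(x) - log_q)   # unnormalised importance weights
    w /= w.sum()
    # Adapt the proposal to the weighted sample (importance-weighted moments)
    mu = np.sum(w * x)
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
    ess = 1.0 / np.sum(w ** 2)          # effective sample size
    print(f"iteration {it}: proposal N({mu:.3f}, {sigma:.3f}^2), ESS = {ess:.0f}")
```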

  3. Behavioral Analysis of Visitors to a Medical Institution’s Website Using Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Tani, Yuji

    2016-01-01

    Background Consistent with the “attention, interest, desire, memory, action” (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. Objective We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. Methods We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 31, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explaining variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformation prior distribution for this model and determined the visit probability classified by keyword for each category. Results In the case of the keyword “clinic name,” the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the

  4. CMB quadrupole depression produced by early fast-roll inflation: Monte Carlo Markov chains analysis of WMAP and SDSS data

    International Nuclear Information System (INIS)

    Generically, the classical evolution of the inflaton has a brief fast-roll stage that precedes the slow-roll regime. The fast-roll stage leads to a purely attractive potential in the wave equations of curvature and tensor perturbations (while the potential is purely repulsive in the slow-roll stage). This attractive potential leads to a depression of the CMB quadrupole moment for the curvature and B-mode angular power spectra. A single new parameter emerges in this way in the early universe model: the comoving wave number k1, the characteristic scale of this attractive potential. This mode k1 happens to exit the horizon precisely at the transition from the fast-roll to the slow-roll stage. The fast-roll stage dynamically modifies the initial power spectrum by a transfer function D(k). We compute D(k) by solving the inflaton evolution equations. D(k) effectively suppresses the primordial power for k < k1 and possesses the scaling property D(k) = Ψ(k/k1) where Ψ(x) is a universal function. We perform a Monte Carlo Markov chain analysis of the WMAP and SDSS data including the fast-roll stage and find the value k1 = 0.266 Gpc⁻¹. The quadrupole mode kQ = 0.242 Gpc⁻¹ exits the horizon earlier than k1, about one-tenth of an e-fold before the end of fast roll. We compare the fast-roll fit with a fit without fast roll but including a sharp lower cutoff on the primordial power. Fast roll provides a slightly better fit than a sharp cutoff for the temperature-temperature, temperature-E modes, and E modes-E modes. Moreover, our fits provide nonzero lower bounds for r, while the values of the other cosmological parameters are essentially those of the pure ΛCDM model. We display the real space two point C_TT(θ) correlator. The fact that kQ exits the horizon before the slow-roll stage implies an upper bound in the total number of e-folds N_tot during inflation. Combining this with estimates during the radiation dominated era we obtain N_tot ∼ 66, with the bounds 62 < N_tot < 82. We repeated the same

  5. Nonparametric (smoothed) likelihood and integral equations

    CERN Document Server

    Groeneboom, Piet

    2012-01-01

    We show that there is an intimate connection between the theory of nonparametric (smoothed) maximum likelihood estimators for certain inverse problems and integral equations. This is illustrated by estimators for interval censoring and deconvolution problems. We also discuss the asymptotic efficiency of the MLE for smooth functionals in these models.

  6. Growing local likelihood network: Emergence of communities

    Science.gov (United States)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.
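    A bare-bones version of growth from local information only might proceed as sketched below: each new node inspects the subgraph within distance d of a randomly chosen anchor and attaches to nodes in that ball with probability proportional to their degree. The likelihood-adjustment step that the authors use to steer the degree distribution is omitted, so this is only meant to show the local-subgraph mechanism, with all parameters chosen arbitrarily.

```python
# Sketch of network growth using only local (path-length) information; the
# likelihood-based modification step described in the paper is not reproduced.
from collections import deque
import random

random.seed(8)

def ball(adj, center, d):
    """Nodes within graph distance d of center (breadth-first search)."""
    seen, frontier = {center}, deque([(center, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == d:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

adj = {0: {1}, 1: {0}}                      # seed graph: a single edge
for new in range(2, 500):
    anchor = random.randrange(new)          # pick a random existing node
    local = list(ball(adj, anchor, d=2))    # local subgraph around the anchor
    weights = [len(adj[v]) for v in local]  # local degree as attachment weight
    targets = set(random.choices(local, weights=weights, k=2))
    adj[new] = set()
    for t in targets:
        adj[new].add(t)
        adj[t].add(new)

degrees = sorted((len(v) for v in adj.values()), reverse=True)
print("largest degrees:", degrees[:10])
```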

  7. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    improves precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today and...

  8. MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters

    Directory of Open Access Journals (Sweden)

    Sugaya Yuki

    2012-08-01

    Full Text Available Abstract Background Linkage analysis is a useful tool for detecting genetic variants that regulate a trait of interest, especially genes associated with a given disease. Although penetrance parameters play an important role in determining gene location, they are assigned arbitrary values according to the researcher’s intuition or as estimated by the maximum likelihood principle. Several methods exist by which to evaluate the maximum likelihood estimates of penetrance, although not all of these are supported by software packages and some are biased by marker genotype information, even when disease development is due solely to the genotype of a single allele. Findings Programs for exploring the maximum likelihood estimates of penetrance parameters were developed using the R statistical programming language supplemented by external C functions. The software returns a vector of polynomial coefficients of penetrance parameters, representing the likelihood of pedigree data. From the likelihood polynomial supplied by the proposed method, the likelihood value and its gradient can be precisely computed. To reduce the effect of the supplied dataset on the likelihood function, feasible parameter constraints can be introduced into maximum likelihood estimates, thus enabling flexible exploration of the penetrance estimates. An auxiliary program generates a perspective plot allowing visual validation of the model’s convergence. The functions are collectively available as the MLEP R package. Conclusions Linkage analysis using penetrance parameters estimated by the MLEP package enables feasible localization of a disease locus. This is shown through a simulation study and by demonstrating how the package is used to explore maximum likelihood estimates. Although the input dataset tends to bias the likelihood estimates, the method yields accurate results superior to the analysis using intuitive penetrance values for disease with low allele frequencies. MLEP is

  9. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  10. On the approximate maximum likelihood estimation for diffusion processes

    CERN Document Server

    Chang, Jinyuan; 10.1214/11-AOS922

    2012-01-01

    The transition density of a diffusion process does not admit an explicit expression in general, which prevents the full maximum likelihood estimation (MLE) based on discretely observed sample paths. Aït-Sahalia [J. Finance 54 (1999) 1361-1395; Econometrica 70 (2002) 223-262] proposed asymptotic expansions to the transition densities of diffusion processes, which lead to an approximate maximum likelihood estimation (AMLE) for parameters. Built on Aït-Sahalia's [Econometrica 70 (2002) 223-262; Ann. Statist. 36 (2008) 906-937] proposal and analysis on the AMLE, we establish the consistency and convergence rate of the AMLE, which reveal the roles played by the number of terms used in the asymptotic density expansions and the sampling interval between successive observations. We find conditions under which the AMLE has the same asymptotic distribution as that of the full MLE. A first order approximation to the Fisher information matrix is proposed.

  11. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  12. Radiation shielding analysis of a spent fuel transport cask with an actual configuration model using the Monte Carlo method - comparison with the discrete ordinates Sn method

    International Nuclear Information System (INIS)

    In order to demonstrate the features of the Monte Carlo method, in comparison with the two-dimensional discrete ordinates Sn method, detailed modeling of the canister containing the fuel basket with 14 spent fuel assemblies, the supplement shields located around the lower nozzles of the fuels, and the cooling fins attached to the cask body of the NFT-14P cask is performed using the Monte Carlo code MCNP 4C. Furthermore, the water level in the canister is also taken into account in the present MCNP 4C calculations. For more precise modeling of the canister, the generating points of gamma rays and neutrons are simulated accurately from the fuel assemblies installed in it. The supplement shields located around the lower nozzles of the fuels are designed to be effective especially for the activation 60Co gamma rays, and the cooling fins for gamma rays in particular. As predicted, compared with the DOT 3.5 calculations, the total dose-equivalent rates with the actual configurations are reduced to approximately 30% at 1m from the upper side surface and 85% at 1m from the lower side surface, respectively. Accordingly, the employment of detailed models for the Monte Carlo calculations is essential to accomplish a more reasonable shielding design of a spent fuel transport cask and an interim storage cask. The quality of the actual configuration model of the canister containing the fuel basket with 12 spent fuel assemblies has already been demonstrated by the Monte Carlo analysis with MCNP 4B, in comparison with the measured dose-equivalent rates around the TN-12A cask

  13. Monte Carlo analysis of the slightly enriched uranium-D2O critical experiment LTRIIA (AWBA Development Program)

    International Nuclear Information System (INIS)

    The Savannah River Laboratory LTRIIA slightly-enriched uranium-D2O critical experiment was analyzed with ENDF/B-IV data and the RCP01 Monte Carlo program, which modeled the entire assembly in explicit detail. The integral parameters delta25 and delta28 showed good agreement with experiment. However, calculated Keff was 2 to 3% low, due primarily to an overprediction of U238 capture. This is consistent with results obtained in similar analyses of the H2O-moderated TRX critical experiments. In comparisons with the VIM and MCNP2 Monte Carlo programs, good agreement was observed for calculated reaction rates in the B2=0 cell

  14. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    OpenAIRE

    O. Mazzantini; F. D'Auria; M. Pecchia; Parisi, C

    2011-01-01

    The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both MCNP5 and KENO-VI codes. The developed models were applied for calculating reactor criticality states at beginning of life, reactor cell constants, and control rods volumes. The last two applications were relevant for performing successive three dimensional neutron kinetic ana...

  15. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Science.gov (United States)

    Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei

    2014-09-01

    Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetics aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competition growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment of the site positions. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a physical vapor transport method grown SiC ingot showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.
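
    The surface kinetics in such simulations rest on the standard rejection-free kinetic Monte Carlo step, sketched below with hypothetical event rates standing in for processes such as deposition, surface diffusion and desorption; the competitive 4H/6H lattice bookkeeping of the paper is not reproduced here.

      import numpy as np

      def kmc_step(rates, rng):
          # one rejection-free kinetic Monte Carlo step: choose an event with
          # probability proportional to its rate, advance time by an exponential jump
          rates = np.asarray(rates, dtype=float)
          total = rates.sum()
          event = rng.choice(len(rates), p=rates / total)
          dt = -np.log(rng.random()) / total
          return event, dt

      rng = np.random.default_rng(0)
      event, dt = kmc_step([1.0e3, 5.0e4, 2.0e2], rng)    # illustrative rates in s^-1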

  16. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hui-Jun Guo

    2014-09-01

    Full Text Available Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetics aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competition growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment of the site positions. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a physical vapor transport method grown SiC ingot showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.

  17. Monte Carlo Depletion Analysis of a TRU-Cermet Fuel. Design for a Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Monte Carlo depletion has generally not been considered practical for designing the equilibrium cycle of a reactor. One objective of the work here was to demonstrate that recent advances in high performance computing clusters are making Monte Carlo core depletion competitive with traditional deterministic depletion methods for some applications. The application here was to a sodium fast reactor core with an innovative TRU cermet fuel type. An equilibrium cycle search was performed for a multi-batch core loading using the Monte Carlo depletion code Monteburn. A final fuel design of 38% w/o TRU with a pin radius of 0.32 cm was found to display similar operating characteristics to its metal fueled counterparts. The TRU-cermet fueled core has a smaller sodium void worth, and a less negative axial expansion coefficient. These effects result in a core with safety characteristics similar to the metal fuel design; however, the TRU consumption rate of the cermet fueled core is found to be higher than that of the metal fueled core. (authors)

  18. Shrinkage Effect in Ancestral Maximum Likelihood

    CERN Document Server

    Mossel, Elchanan; Steel, Mike

    2008-01-01

    Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...

  19. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    OpenAIRE

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. ...

  20. Regularized Maximum Likelihood for Intrinsic Dimension Estimation

    CERN Document Server

    Gupta, Mithun Das

    2012-01-01

    We propose a new method for estimating the intrinsic dimension of a dataset by applying the principle of regularized maximum likelihood to the distances between close neighbors. We propose a regularization scheme which is motivated by divergence minimization principles. We derive the estimator by a Poisson process approximation, argue about its convergence properties and apply it to a number of simulated and real datasets. We also show it has the best overall performance compared with two other intrinsic dimension estimators.

  1. What Determines the Likelihood of Structural Reforms?

    OpenAIRE

    Agnello, Luca; Castro, Vitor; Jalles, João Tovar; Sousa, Ricardo M.

    2014-01-01

    We use data for a panel of 60 countries over the period 1980-2005 to investigate the main drivers of the likelihood of structural reforms. We find that: (i) external debt crises are the main trigger of financial and banking reforms; (ii) inflation and banking crises are the key drivers of external capital account reforms; (iii) banking crises also hasten financial reforms; and (iv) economic recessions play an important role in promoting the necessary consensus for financial, capital, banking ...

  2. Likelihood-Based Climate Model Evaluation

    Science.gov (United States)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a summary statistic computed from a set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and obtain the posterior distribution of the model given the observations.
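
    A minimal sketch of the scoring idea described above, under the simplifying assumption that the summary statistic is approximately Gaussian across model-generated samples; the function and variable names are illustrative.

      import numpy as np
      from scipy.stats import multivariate_normal

      def model_likelihood(observed_stat, simulated_stats):
          # likelihood of the observed summary statistic under the distribution of
          # the same statistic computed from model-generated time series
          simulated_stats = np.atleast_2d(simulated_stats)   # (n_samples, n_features)
          mean = simulated_stats.mean(axis=0)
          cov = np.cov(simulated_stats, rowvar=False)
          return multivariate_normal(mean, cov, allow_singular=True).pdf(observed_stat)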

  3. Database likelihood ratios and familial DNA searching

    CERN Document Server

    Slooten, Klaas

    2012-01-01

    Familial Searching is the process of searching in a DNA database for relatives of a given individual. It is well known that in order to evaluate the genetic evidence in favour of a certain given form of relatedness between two individuals, one needs to calculate the appropriate likelihood ratio, which is in this context called a Kinship Index. Suppose that the database contains, for a given type of relative, at most one related individual. Given prior probabilities of being the relative for all persons in the database, we derive the likelihood ratio for each database member in favour of being that relative. This likelihood ratio takes all the Kinship Indices between target and members of the database into account. We also compute the corresponding posterior probabilities. We then discuss two ways of selecting a subset from the database that contains the relative with a known probability, or at least a useful lower bound thereof. We discuss the relation between these approaches and illustrate them with Familia...
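
    A simplified numerical sketch of the posterior computation described above, assuming the database contains at most one relative and that unrelated members contribute a likelihood ratio of 1; variable names are illustrative and the treatment in the paper is more general.

      import numpy as np

      def familial_posteriors(kinship_indices, priors):
          # posterior probability that database member i is the relative
          ki = np.asarray(kinship_indices, dtype=float)   # KI_i: LR relative vs. unrelated
          pi = np.asarray(priors, dtype=float)            # prior P(member i is the relative)
          p_none = 1.0 - pi.sum()                         # prior mass on "relative not in database"
          unnormalised = pi * ki
          evidence = unnormalised.sum() + p_none          # unrelated members contribute LR = 1
          return unnormalised / evidence

      post = familial_posteriors([120.0, 3.0, 0.4], [0.001, 0.001, 0.001])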

  4. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
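
    A minimal sketch of the characteristics idea for a hypothetical two-dimensional ODE: along a trajectory of dx/dt = f(x), the log-density obeys d(log p)/dt = -div f(x), so one extra state component suffices.

      import numpy as np
      from scipy.integrate import solve_ivp

      def f(x):                                  # hypothetical damped oscillator
          return np.array([x[1], -x[0] - 0.1 * x[1]])

      def div_f(x):                              # analytic divergence of f
          return -0.1

      def extended_rhs(t, y):
          x, logp = y[:2], y[2]
          return np.append(f(x), -div_f(x))      # extra equation: d(log p)/dt = -div f

      y0 = np.array([1.0, 0.0, 0.0])             # initial state plus log-density value
      sol = solve_ivp(extended_rhs, (0.0, 10.0), y0, rtol=1e-8)
      density_ratio = np.exp(sol.y[2, -1])       # p(x(T), T) / p(x(0), 0) along the characteristic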

  5. Predictive uncertainty analysis of a highly heterogeneous field-scale groundwater model using null-space Monte Carlo

    Science.gov (United States)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2011-12-01

    Quantification of prediction uncertainty resulting from estimated parameters is critical to provide accurate predictive models for field-scale groundwater flow and transport problems. We examine and compare two approaches to defining predictive uncertainty where both approaches utilize pilot points to parameterize spatially heterogeneous fields. The first approach is the independent calibration of multiple initial "seed" fields created through geostatistical simulation and conditioned to observation data, resulting in an ensemble of calibrated property fields that defines uncertainty in the calibrated parameters. The second approach is the null-space Monte Carlo (NSMC) method that employs a decomposition of the Jacobian matrix from a single calibration to define a minimum number of linear combinations of parameters that account for the majority of the sensitivity of the overall calibration to the observed data. Random vectors are applied to the remaining linear combinations of parameters, the null space, to create an ensemble of fields, each of which remains calibrated to the data. We compare these two approaches using a highly-parameterized groundwater model of the Culebra dolomite in southeastern New Mexico. Observation data include two decades of steady-state head measurements and pumping test results. The predictive performance measure is advective travel time from a point to a prescribed boundary. Calibrated parameters at a set of pilot points include transmissivity, the horizontal hydraulic anisotropy, the storativity, and a section of recharge (> 1200 parameters in total). First, we calibrate 200 multiple random seed fields generated through geostatistical simulation conditioned to observation data. The 11 fields that contain the best and worst scenarios in terms of calibration and travel time analysis among the best 100 calibrated results provide a basis for the NSMC method. The NSMC method is used to generate 200 calibration-constrained parameter fields
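
    A bare-bones sketch of the null-space projection at the heart of the NSMC idea, assuming a Jacobian (sensitivity matrix) of observations with respect to parameters is available from the single calibration; in practice the perturbed fields are additionally re-adjusted so that they remain calibrated. All names are illustrative.

      import numpy as np

      def null_space_samples(jacobian, calibrated_params, n_fields, scale=1.0, tol=1e-8, seed=0):
          rng = np.random.default_rng(seed)
          _, s, vt = np.linalg.svd(jacobian, full_matrices=True)
          rank = int(np.sum(s > tol * s.max()))
          null_basis = vt[rank:].T                   # parameter combinations the data barely constrain
          fields = []
          for _ in range(n_fields):
              xi = rng.normal(scale=scale, size=null_basis.shape[1])
              fields.append(calibrated_params + null_basis @ xi)
          return np.array(fields)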

  6. Neutronic analysis for conversion of the Ghana Research Reactor-1 facility using Monte Carlo methods and UO{sub 2} LEU fuel

    Energy Technology Data Exchange (ETDEWEB)

    Anim-Sampong, S.; Akaho, E.H.K.; Maakuu, B.T.; Gbadago, J.K. [Ghana Research Reactor-1 Centre, Dept. of Nuclear Engineering and Materials Science, National Nuclear Research Institute, Ghana Atomic Energy Commission, Legon, Accra (Ghana); Andam, A. [Kwame Nkrumah Univ. of Science and Technology, Dept. of Physics (Ghana); Liaw, J.J.R.; Matos, J.E. [Argonne National Lab., RERTR Programme, Div. of Nuclear Engineering (United States)

    2007-07-01

    Monte Carlo particle transport methods and software (MCNP) have been applied to the modelling, simulation and neutronic analysis for the conversion of the HEU-fuelled (high enrichment uranium) core of the Ghana Research Reactor-1 (GHARR-1) facility. The results show that the MCNP model of the GHARR-1 facility, which is a commercial version of the Miniature Neutron Source Reactor (MNSR), is good, as the simulated neutronic and other reactor physics parameters agree very well with experimental and zero power results. Three UO{sub 2} LEU (low enrichment uranium) fuels with different enrichments (12.6% and 19.75%), core configurations, and core loadings were utilized in the conversion studies. The nuclear criticality and kinetic parameters obtained from the Monte Carlo simulation and neutronic analysis using the three UO{sub 2} LEU fuels are in close agreement with results obtained for the reference 90.2% U-Al HEU core. The neutron flux variation in the core, fission chamber and irradiation channels for the LEU UO{sub 2} fuels shows the same trend as for the HEU core, as presented in the paper. The Monte Carlo model confirms a reduction (8% max) in the peak neutron fluxes simulated in the irradiation channels, which are utilized for experimental and commercial activities. However, the reductions or 'losses' in the flux levels affect neither criticality safety, reactor operations and safety, nor utilization of the reactor. By employing careful core loading optimization techniques, fuel loadings and enrichment, it is possible to eliminate the apparent reductions or 'losses' in the neutron fluxes, as suggested in this paper. Concerning neutronics, it can be concluded that all three LEU fuels qualify as candidates for core conversion of the GHARR-1 facility.

  7. Analysis of low pressure electro-positive and electro-negative rf plasmas with Monte Carlo method

    OpenAIRE

    M. ARDEHALI

    1998-01-01

    Particle-in-cell/Monte Carlo technique is used to simulate low pressure electro-negative and electro-positive plasmas at a frequency of 10 MHz. The potential, electric field, electron and ion density, and currents flowing across the plasma are presented. To compare the physical properties of the electro-positive gas with those of an electro-negative gas, the input voltage was decreased from 1000 Volts to 350 Volts. The simulation results indicate that the introduction of negative ions induces...

  8. A combined Monte Carlo and experimental analysis of light emission phenomena in AlGaAs/GaAs HBTs

    Science.gov (United States)

    Di Carlo, Aldo; Lugli, Paolo; Canali, Claudio; Malik, Roger; Manfredi, Manfredo; Neviani, Andrea; Zanoni, Enrico; Zandler, Günther

    1998-08-01

    We present a detailed investigation of light emission phenomena connected with the presence of hot carriers in AlGaAs/GaAs heterojunction bipolar transistors. Electrons heated by the strong electric field at the base-collector junction lead to both impact ionization and light emission. A new general-purpose weighted Monte Carlo procedure has been developed to study such effects. The measured hot electroluminescence is attributed to radiative recombinations within the valence and the conduction bands. Good agreement is found between theory and experiment.

  9. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    International Nuclear Information System (INIS)

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements
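
    A one-dimensional toy version of the exponential transformation, assuming a purely absorbing slab: path lengths are sampled from a stretched exponential that favours deep penetration, and the statistical weight restores an unbiased estimate of the uncollided transmission probability. Parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      sigma, thickness, n = 1.0, 5.0, 100_000      # total cross section (cm^-1), slab depth (cm)
      sigma_b = 0.3 * sigma                        # stretched (biased) cross section

      s = rng.exponential(1.0 / sigma_b, size=n)                       # biased free paths
      w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * s)           # weight = true pdf / biased pdf
      transmission = np.mean(w * (s > thickness))                      # estimates exp(-sigma * thickness)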

  10. Monte Carlo perturbation analysis on isothermal temperature reactivity coefficient of light-water moderated and reflected critical assembly

    International Nuclear Information System (INIS)

    Experiments have been carried out on the isothermal temperature reactivity coefficient (ITRC) for the light-water moderated core at the Kyoto University Critical Assembly. The temperature effect on reactivity is analyzed by the Seoul National University Monte Carlo (MC) code, McCARD, which reproduces the experimental data well. The contributions to the ITRCs of each isotope, arising from the density changes of the core and reflector regions and from the microscopic cross section changes, are quantified by sensitivity analyses based on the MC adjoint-weighted perturbation methods. (author)

  11. An analysis of the OI 1304 A dayglow using a Monte Carlo resonant scattering model with partial frequency redistribution

    Science.gov (United States)

    Meier, R. R.; Lee, J.-S.

    1982-01-01

    The transport of resonance radiation under optically thick conditions is shown to be accurately described by a Monte Carlo model of the atomic oxygen 1304 A airglow triplet in which partial frequency redistribution, temperature gradients, pure absorption and multilevel scattering are accounted for. All features of the data can be explained by photoelectron impact excitation and the resonant scattering of sunlight, where the latter source dominates below 100 and above 500 km and is stronger at intermediate altitudes than previously thought. It is concluded that the OI 1304 A emission can be used in studies of excitation processes and atomic oxygen densities in planetary atmospheres.

  12. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between the subtracted estimates even in cases where other methods fail, for example for geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered

  13. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

    It is well-known that in some situations it is not easy to compute the likelihood function as the datasets might be large or the model is too complex. In that contexts composite likelihood, derived by multiplying the likelihoods of subjects of the variables, may be useful. The extension of the classical likelihood ratio test statistics to the framework of composite likelihoods is used as a procedure to solve the problem of testing in the context of composite likelihood. In this paper we intro...

  14. Monte Carlo optimization of sample dimensions of an {sup 241}Am-Be source-based PGNAA setup for water rejects analysis

    Energy Technology Data Exchange (ETDEWEB)

    Idiri, Z. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)]. E-mail: zmidiri@yahoo.fr; Mazrou, H. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Beddek, S. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Amokrane, A. [Faculte de Physique, Universite des Sciences et de la Technologie Houari-Boumediene (USTHB), Alger (Algeria); Azbouche, A. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)

    2007-07-21

    The present paper describes the optimization of the sample dimensions of a {sup 241}Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  15. The variability in likelihood ratios due to different mechanisms.

    Science.gov (United States)

    Bright, Jo-Anne; Stevenson, Kate E; Curran, James M; Buckleton, John S

    2015-01-01

    Recently there has been a drive towards standardisation of forensic DNA interpretation methods resulting in the uptake of probabilistic interpretation software. Some of these software solutions utilise Markov chain Monte Carlo techniques (MCMC). They will not produce an identical answer after repeat interpretations of the same evidence profile because of the Monte Carlo aspect. This is a new source of variability within the forensic DNA analysis process. In this paper we explore the size of the MCMC variability within the interpretation software STRmix™ compared to other sources of variability in forensic DNA profiling including PCR, capillary electrophoresis load and injection, and the makeup of allele frequency databases. The MCMC variability within STRmix™ was shown to be the smallest source of variability in this process. PMID:25450791

  16. On the use of pseudo-likelihoods in Bayesian variable selection.

    OpenAIRE

    Racugno, Walter; Salvan, Alessandra; Ventura, Laura

    2005-01-01

    In the presence of nuisance parameters, we discuss a one-parameter Bayesian analysis based on a pseudo-likelihood assuming a default prior distribution for the parameter of interest only. Although this way to proceed cannot always be considered as orthodox in the Bayesian perspective, it is of interest to evaluate whether the use of suitable pseudo-likelihoods may be proposed for Bayesian inference. Attention is focused in the context of regression models, in particular on inference about a s...

  17. The Self-Gravitating Gas in the Presence of Dark Energy: Monte-Carlo Simulations and Stability Analysis

    CERN Document Server

    De Vega, H J

    2004-01-01

    The self-gravitating gas in the presence of a positive cosmological constant Lambda is studied in thermal equilibrium by Monte Carlo simulations and by the mean field approach. We find excellent agreement between both approaches already for N = 1000 particles in a volume $V$ [the mean field is exact in the infinite N limit]. The domain of stability of the gas is found to increase when the cosmological constant increases. The particle density is shown to be an increasing (decreasing) function of the distance when the dark energy dominates over self-gravity (and vice-versa). We confirm the validity of the thermodynamic limit: N, V -> infty with N/V^{1/3} and Lambda V^{2/3} fixed. In such a dilute limit extensive thermodynamic quantities like energy, free energy and entropy turn out to be proportional to N. We find that the gas is stable until the isothermal compressibility diverges. Beyond this point the gas becomes an extremely dense object whose properties are studied by Monte Carlo.

  18. Monte Carlo simulation and Boltzmann equation analysis of non-conservative positron transport in H{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Bankovic, A., E-mail: ana.bankovic@gmail.com [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Dujko, S. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Centrum Wiskunde and Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam (Netherlands); ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); White, R.D. [ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); Buckman, S.J. [ARC Centre for Antimatter-Matter Studies, Australian National University, Canberra, ACT 0200 (Australia); Petrovic, Z.Lj. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)

    2012-05-15

    This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n{sub 0} are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n{sub 0} the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H{sub 2} under the influence of an electric field, the spatially dependent positron transport properties such as the number of positrons, the average energy and velocity, and the spatially resolved rate for Ps formation are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and the extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.

  19. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and...

  20. Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    The traditional life cycle assessment (LCA) does not perform quantitative uncertainty analysis. However, without characterizing the associated uncertainty, the reliability of assessment results cannot be understood or ascertained. In this study, the Bayesian method, in combination with the Monte Carlo technique, is used to quantify and update the uncertainty in LCA results. A case study of applying the method to the comparison of alternative waste treatment options in terms of global warming potential due to greenhouse gas emissions is presented. In the case study, the prior distributions of the parameters used for estimating emission inventory and environmental impact in LCA were based on expert judgment from the Intergovernmental Panel on Climate Change (IPCC) guidelines and were subsequently updated using the likelihood distributions resulting from both national statistics and site-specific data. The posterior uncertainty distribution of the LCA results was generated using Monte Carlo simulations with posterior parameter probability distributions. The results indicated that the incorporation of quantitative uncertainty analysis into LCA revealed more information than the deterministic LCA method, and the resulting decision may thus be different. In addition, in combination with the Monte Carlo simulation, calculations of correlation coefficients facilitated the identification of important parameters that had a major influence on the LCA results. Finally, by using national statistics and site-specific information to update the prior uncertainty distribution, the resultant uncertainty associated with the LCA results could be reduced. A better-informed decision can therefore be made based on a clearer and more complete comparison of options
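
    A compact sketch of the prior-to-posterior updating used in such a study: prior samples of a single emission factor are weighted by the likelihood of observed data and resampled before being propagated to the impact score. All numbers and distributions below are placeholders, not values from the case study.

      import numpy as np

      rng = np.random.default_rng(1)

      prior_ef = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=100_000)  # prior emission factor
      obs = np.array([0.043, 0.051, 0.047])                                  # hypothetical measurements
      sigma_obs = 0.01

      loglik = -0.5 * ((obs[None, :] - prior_ef[:, None]) / sigma_obs) ** 2  # Gaussian likelihood
      w = np.exp(loglik.sum(axis=1))
      w /= w.sum()

      posterior_ef = rng.choice(prior_ef, size=50_000, p=w)                  # importance resampling
      gwp = posterior_ef * 1.2e6                                             # hypothetical activity level
      interval = np.percentile(gwp, [2.5, 97.5])                             # posterior uncertainty of the result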

  1. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno

  2. Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty

    Science.gov (United States)

    Lagattuta, Kristin Hansen; Sayfan, Liat

    2011-01-01

    Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…

  3. Using Monte Carlo transport to accurately predict isotope production and activation analysis rates at the University of Missouri research reactor

    International Nuclear Information System (INIS)

    A detailed Monte Carlo N-Particle Transport Code (MCNP5) model of the University of Missouri research reactor (MURR) has been developed. The ability of the model to accurately predict isotope production rates was verified by comparing measured and calculated neutron-capture reaction rates for numerous isotopes. In addition to thermal (1/v) monitors, the benchmarking included a number of isotopes whose (n, γ) reaction rates are very sensitive to the epithermal portion of the neutron spectrum. Using the most recent neutron libraries (ENDF/B-VII.0), the model was able to accurately predict the measured reaction rates in all cases. The model was then combined with ORIGEN 2.2, via MONTEBURNS 2.0, to calculate production of 99Mo from fission of low-enriched uranium foils. The model was used to investigate both annular and plate LEU foil targets in a variety of arrangements in a graphite irradiation wedge to optimize the production of 99Mo. (author)

  4. Experimental and Monte Carlo analysis of near-breakdown phenomena in GaAs-based heterostructure FETs

    Science.gov (United States)

    Sleiman, A.; Di Carlo, A.; Tocca, L.; Lugli, P.; Zandler, G.; Meneghesso, G.; Zanoni, E.; Canali, C.; Cetronio, A.; Lanzieri, M.; Peroni, M.

    2001-05-01

    We present experimental and theoretical data related to the impact ionization in the near-breakdown regime of AlGaAs/InGaAs pseudomorphic high-electron-mobility transistors (P-HEMTs) and AlGaAs/GaAs heterostructure field effect transistors (HFETs). Room-temperature electroluminescence spectra of P-HEMT exhibit a maximum around the InGaAs energy gap (1.3 eV). Two peaks have been observed for the HFETs. These experiments are interpreted by means of Monte Carlo simulations. The most important differences between the two devices are found in the hole distribution. While the holes in the P-HEMT are confined in the gate-source channel region and responsible for the breakdown, they are absent from the active part of the HFET. This absence reduces the feedback and improves the on-state breakdown voltage characteristics.

  5. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    Directory of Open Access Journals (Sweden)

    M. Pecchia

    2011-01-01

    Full Text Available The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both MCNP5 and KENO-VI codes. The developed models were applied for calculating reactor criticality states at beginning of life, reactor cell constants, and control rods volumes. The last two applications were relevant for performing successive three dimensional neutron kinetic analyses since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code HELIOS. These results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.

  6. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    International Nuclear Information System (INIS)

    The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both MCNP5 and KENO-VI codes. The developed models were applied for calculating reactor criticality states at beginning of life, reactor cell constants, and control rods volumes. The last two applications were relevant for performing successive three dimensional neutron kinetic analyses since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code Helios. These results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.

  7. Periodic structures in the Franck-Hertz experiment with neon: Boltzmann equation and Monte-Carlo analysis

    Science.gov (United States)

    White, R. D.; Robson, R. E.; Nicoletopoulos, P.; Dujko, S.

    2012-05-01

    The Franck-Hertz experiment with neon gas is modelled as an idealised steady-state Townsend experiment and analysed theoretically using (a) a multi-term solution of the Boltzmann equation and (b) Monte-Carlo simulation. Theoretical periodic electron structures, together with the `window' of reduced fields in which they occur, are compared with experiment, and it is explained why it is necessary to account for all competing scattering processes in order to explain the observed experimental `wavelength'. The study highlights the fundamental flaws in trying to explain the observations in terms of a single, assumed dominant electronic excitation process, as is the case in text books and the myriad of misleading web sites.

  8. An Analysis on the Calculation Efficiency of the Responses Caused by the Biased Adjoint Fluxes in Hybrid Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method, and it is implemented in the SCALE code system. In the CADIS method, the adjoint transport equation has to be solved to determine deterministic importance functions. When using the CADIS method, it has been noted that the biased adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the result calculated by the deterministic method, and inaccurate multi-group cross section libraries. In this paper, a study to analyze the influence of the biased adjoint functions on Monte Carlo computational efficiency is pursued. In this study, a method to estimate the calculation efficiency was proposed for applying the biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and FOMs were evaluated with the SCALE code system using the biased adjoint fluxes. The results show that the biased adjoint fluxes significantly affect the calculation efficiencies.

  9. On-board Dose Measurement and its Monte Carlo Analysis in a Low Level Waste Shipping Vessel

    International Nuclear Information System (INIS)

    On-board dose measurements were made in a shipping vessel for low level radioactive wastes, the Seiei Maru. The measured values are much smaller than the regulation values both on the hatch covers and in the accommodation area. The dose equivalent rates on the hatch cover are analysed by using a continuous energy Monte Carlo code, MCNP 4B, with two kinds of calculational models. One is the detailed model with the geometry of containers and LLW drums, and an asymmetrical source distribution. The results of the detailed calculation approached the shape of the measured dose rate distribution graphs. The other is the simplified model that mixes source volume uniformly. The calculated values obtained with the simplified model are twice as large as those calculated with the detailed model. (author)

  10. Evaluation of likelihood functions on CPU and GPU devices

    International Nuclear Information System (INIS)

    We describe parallel implementations of an algorithm used to evaluate the likelihood function used in data analysis. The implementations run, respectively, on CPU and GPU, and on both devices cooperatively (hybrid). The CPU and GPU implementations are based on OpenMP and OpenCL, respectively. The hybrid implementation allows the application to run also on multi-GPU systems (not necessarily of the same type). The hybrid case uses a scheduler so that the workload needed for the evaluation of the function is split and balanced into corresponding sub-workloads to be executed in parallel on each device, i.e. CPU-GPU or multi-CPUs. We present scalability results when running on the CPU. Then we show the comparison of the performance of the GPU implementation on different hardware systems from different vendors, and the performance when running in the hybrid case. The tests are based on likelihood functions from real data analysis carried out in the high energy physics community.
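
    The splitting idea can be illustrated with a CPU-only sketch: the event sample is divided into sub-workloads, each worker returns a partial log-likelihood, and the parts are summed. The Gaussian model and all names are placeholders; the actual implementation in the paper uses OpenMP/OpenCL and a CPU-GPU scheduler.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def partial_loglik(args):
          chunk, mu, sigma = args
          # Gaussian log-likelihood of one chunk of events (placeholder model)
          return np.sum(-0.5 * ((chunk - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi)))

      def loglik(data, mu, sigma, n_workers=4):
          chunks = np.array_split(data, n_workers)
          with ProcessPoolExecutor(max_workers=n_workers) as pool:
              parts = pool.map(partial_loglik, [(c, mu, sigma) for c in chunks])
          return sum(parts)

      if __name__ == "__main__":
          data = np.random.default_rng(0).normal(1.0, 2.0, size=1_000_000)
          total = loglik(data, mu=1.0, sigma=2.0)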

  11. Modified Signed Log-Likelihood Ratio Test for Comparing the Correlation Coefficients of Two Independent Bivariate Normal Distributions

    OpenAIRE

    Kazemi, M. R.; Jafari, A A

    2016-01-01

    In this paper, we use the modified signed log-likelihood ratio test for the problem of testing the equality of correlation coefficients in two independent bivariate normal distributions. We compare this method with two other competing approaches, Fisher's Z-transform and the generalized test variable, using a Monte Carlo simulation. It indicates that the proposed method is better than the other approaches in terms of actual sizes and powers, especially when the sample sizes are une...
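
    For reference, the Fisher Z-transform comparison that the paper uses as one of its benchmarks can be sketched in a few lines; the modified signed log-likelihood ratio statistic itself is not reproduced here, and the inputs below are illustrative.

      import numpy as np
      from scipy.stats import norm

      def fisher_z_test(r1, n1, r2, n2):
          # test H0: rho1 == rho2 for two independent sample correlations
          z1, z2 = np.arctanh(r1), np.arctanh(r2)
          se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
          stat = (z1 - z2) / se
          return stat, 2.0 * norm.sf(abs(stat))

      stat, pvalue = fisher_z_test(0.62, 30, 0.45, 25)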

  12. A Maximum Likelihood Estimator based on First Differences for a Panel Data Tobit Model with Individual Specific Effects

    OpenAIRE

    A.S. Kalwij

    2000-01-01

    This paper proposes an alternative estimation procedure for a panel data Tobit model with individual specific effects based on taking first differences of the equation of interest. This helps to alleviate the sensitivity of the estimates to a specific parameterization of the individual specific effects, and some Monte Carlo evidence is provided in support of this. To allow for arbitrary serial correlation, estimation takes place in two steps: Maximum Likelihood is applied to each pair of consec...

  13. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
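
    The MLEM iteration itself is standard and is sketched below; in the study the system (projection) matrix A would be the one estimated by the validated Monte Carlo model of the coded mask, whereas here it is simply an input array.

      import numpy as np

      def mlem(projections, system_matrix, n_iter=50, eps=1e-12):
          A = np.asarray(system_matrix, dtype=float)   # shape: (n_detector_bins, n_voxels)
          y = np.asarray(projections, dtype=float)     # measured coded-aperture projections
          x = np.ones(A.shape[1])                      # uniform initial image
          sensitivity = A.sum(axis=0)                  # A^T 1
          for _ in range(n_iter):
              forward = A @ x                          # expected counts for current image
              ratio = y / np.maximum(forward, eps)
              x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
          return x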

  14. Groups, information theory, and Einstein's likelihood principle

    Science.gov (United States)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  15. Dishonestly increasing the likelihood of winning

    Directory of Open Access Journals (Sweden)

    Shaul Shalvi

    2012-05-01

    Full Text Available People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.

  16. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed ... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does...

  17. SPECIAL ISSUE DEVOTED TO MULTIPLE RADIATION SCATTERING IN RANDOM MEDIA: Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    Science.gov (United States)

    Bashkatov, A. N.; Genina, Elina A.; Kochubei, V. I.; Tuchin, Valerii V.

    2006-12-01

    Based on the digital image analysis and inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates.

  18. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
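
    As a concrete illustration of the kind of introductory problem mentioned above, a minimal Buffon's needle estimator of pi can be written in a few lines of Python. This sketch is not taken from the book; the function and parameter names are illustrative.

```python
import math
import random

def buffon_pi(n_throws=1_000_000, needle_len=1.0, line_gap=1.0, seed=0):
    """Estimate pi from Buffon's needle: P(cross) = 2*l / (pi*d) for l <= d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        y = rng.uniform(0.0, line_gap / 2.0)       # centre-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi / 2.0)    # acute angle between needle and lines
        if y <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle_len * n_throws / (line_gap * hits)

print(buffon_pi())   # approximately 3.14 for large n_throws
```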

  19. Cosmological Parameters from CMB Maps without Likelihood Approximation

    Science.gov (United States)

    Racine, B.; Jewell, J. B.; Eriksen, H. K.; Wehus, I. K.

    2016-03-01

    We propose an efficient Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating cosmological parameters from cosmic microwave background (CMB) data without the use of likelihood approximations. It builds on a previously developed Gibbs sampling framework that allows for exploration of the joint CMB sky signal and power spectrum posterior, $P(\boldsymbol{s}, C_\ell \mid \boldsymbol{d})$, and addresses a long-standing problem of efficient parameter estimation simultaneously in regimes of high and low signal-to-noise ratio. To achieve this, our new algorithm introduces a joint Markov chain move in which both the signal map and power spectrum are synchronously modified, by rescaling the map according to the proposed power spectrum before evaluating the Metropolis-Hastings accept probability. Such a move was already introduced by Jewell et al., who used it to explore low signal-to-noise posteriors. However, they also found that the same algorithm is inefficient in the high signal-to-noise regime, since a brute-force rescaling operation does not account for phase information. This problem is mitigated in the new algorithm by subtracting the Wiener filter mean field from the proposed map prior to rescaling, leaving high signal-to-noise information invariant in the joint step, and effectively only rescaling the low signal-to-noise component. To explore the full posterior, the new joint move is then interleaved with a standard conditional Gibbs move for the sky map. We apply our new algorithm to simplified simulations for which we can evaluate the exact posterior to study both its accuracy and its performance, and find good agreement with the exact posterior; marginal means agree to ≲0.006σ and standard deviations to better than ~3%. The Markov chain correlation length is of the same order of magnitude as those obtained by other standard samplers in the field.

  20. LikeDM: likelihood calculator of dark matter detection

    CERN Document Server

    Huang, Xiaoyuan; Yuan, Qiang

    2016-01-01

    With the large progress in searches for dark matter (DM) particles by indirect and direct methods, we develop a numerical tool that enables fast calculation of the likelihood of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), $\gamma$-rays from the Fermi space telescope, and the underground direct detection experiments. The purpose of this tool, LikeDM --- likelihood calculator of dark matter detection, is to bridge the particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi $\gamma$-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0), focusing on the constraints from charged cosmic rays and gamma rays; the direct detection part will be implemented in the next version. This manual de...
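
    The guiding principle of such a tool, that independent datasets contribute additively to the total log-likelihood, can be sketched generically as follows. This is an illustration only, not LikeDM's actual interface; the Gaussian error model and the function names are assumptions.

```python
import numpy as np

def gaussian_loglike(model, data, sigma):
    """Chi-square-type log-likelihood for one dataset with Gaussian errors."""
    r = (np.asarray(data, float) - np.asarray(model, float)) / np.asarray(sigma, float)
    return -0.5 * np.sum(r ** 2)

def total_loglike(datasets):
    """Independent datasets (e.g. cosmic-ray fluxes, gamma-ray spectra): log-likelihoods add."""
    return sum(gaussian_loglike(m, d, s) for m, d, s in datasets)

# toy usage with made-up numbers: two datasets as (model, data, sigma) triples
print(total_loglike([([1.0, 2.0], [1.1, 1.9], [0.2, 0.2]),
                     ([5.0], [4.6], [0.5])]))
```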

  1. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

    Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.

  2. The Multi-Mission Maximum Likelihood framework (3ML)

    CERN Document Server

    Vianello, Giacomo; Younk, Patrick; Tibaldo, Luigi; Burgess, James M; Ayala, Hugo; Harding, Patrick; Hui, Michelle; Omodei, Nicola; Zhou, Hao

    2015-01-01

    Astrophysical sources are now observed by many different instruments at different wavelengths, from radio to high-energy gamma-rays, with unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Each instrument has its own data format, software and analysis procedure, which are difficult to combine. It is, for example, very challenging to perform a broadband fit of the energy spectrum of the source. The Multi-Mission Maximum Likelihood framework (3ML) aims to solve this issue by providing a common framework which allows for a coherent modeling of sources using all the available data, independent of their origin. At the same time, thanks to its architecture based on plug-ins, 3ML uses the existing official software of each instrument for the corresponding data in a way which is transparent to the user. 3ML is based on the likelihood formalism, in which a model summarizing our knowledge about a particular region of the sky is convolved with the instrument...

  3. Use of Monte Carlo simulation for computational analysis of critical systems on IPPE's facility addressing needs of nuclear safety

    Energy Technology Data Exchange (ETDEWEB)

    Pavlova, Olga; Tsibulya, Anatoly [FSUE ' SSC RF-IPPE' , 249033, Bondarenko Square 1, Obninsk (Russian Federation)

    2008-07-01

    The BFS-1 critical facility was built at the Institute of Physics and Power Engineering (Obninsk, Russia) for full-scale modeling of fast-reactor cores, blankets, in-vessel shielding, and storage. Although BFS-1 is a fast-reactor assembly, it is very flexible and can easily be reconfigured to represent numerous other types of reactor designs. This paper describes specific problems in the calculation of neutron-physics characteristics of integral experiments performed on the BFS facility. The available integral experiments performed on different critical configurations of the BFS facility were analyzed. Calculations of criticality, central reaction rate ratios, and fission rate distributions were carried out with the MCNP5 Monte-Carlo code using different files of evaluated nuclear data. MCNP calculations with a multigroup library with 299 energy groups were also made for comparison with the pointwise library calculations. (authors)

  4. The use of Monte-Carlo simulation and order statistics for uncertainty analysis of a LBLOCA transient (LOFT-L2-5)

    International Nuclear Information System (INIS)

    Best estimate computer codes are increasingly used in the nuclear industry for accident management procedures and are planned to be used for licensing procedures. Contrary to conservative codes, which are supposed to give penalizing results, best estimate codes attempt to calculate accidental transients in a realistic way. It therefore becomes of prime importance, in particular for a technical organization such as IRSN in charge of safety assessment, to know the uncertainty on the results of such codes. Thus, CSNI sponsored a few years ago (published in 1998) the Uncertainty Methods Study (UMS) program on uncertainty methodologies used for a SBLOCA transient (LSTF-CL-18) and is now supporting the BEMUSE program for a LBLOCA transient (LOFT-L2-5). The large majority of BEMUSE participants (9 out of 10) use uncertainty methodologies based on probabilistic modelling, and all of them use Monte-Carlo simulations to propagate the uncertainties through their computer codes. All of the 'probabilistic participants' also intend to use order statistics to determine the sampling size of the Monte-Carlo simulation and to derive the uncertainty ranges associated with their computer calculations. The first aim of this paper is to recall the advantages and also the assumptions of the probabilistic modelling and more specifically of order statistics (such as Wilks' formula) in uncertainty methodologies. Indeed, Monte-Carlo methods provide flexible and extremely powerful techniques for solving many of the uncertainty propagation problems encountered in nuclear safety analysis. However, it is important to keep in mind that probabilistic methods are data intensive: they cannot produce robust results unless a considerable body of information has been collected. A main interest of the use of order statistics results is that they allow an unlimited number of uncertain parameters to be taken into account and, from a restricted number of code calculations, to provide statistical
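
    The reference to order statistics and Wilks' formula can be made concrete: the formula gives the smallest number of Monte Carlo code runs whose extreme values bound a chosen fraction of the output population at a chosen confidence level. A small sketch using the standard textbook expressions, not code from any BEMUSE participant:

```python
def wilks_sample_size(coverage=0.95, confidence=0.95, two_sided=False):
    """Smallest sample size N such that the extreme order statistics of N runs
    cover `coverage` of the output population with probability `confidence`."""
    n = 1
    while True:
        if two_sided:
            conf = 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1)
        else:
            conf = 1.0 - coverage**n
        if conf >= confidence:
            return n
        n += 1

print(wilks_sample_size())                 # 59  (one-sided 95%/95% tolerance limit)
print(wilks_sample_size(two_sided=True))   # 93  (two-sided 95%/95% tolerance interval)
```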

  5. Monte Carlo design of a system for the detection of explosive materials and analysis of the dose

    International Nuclear Information System (INIS)

    The problems associated with insecurity and terrorism have driven the design of systems for detecting nuclear materials, drugs and explosives that are installed at roads, ports and airports. Organic materials are composed of C, H, O and N; explosive materials are manufactured similarly and can be distinguished by the concentration of these elements. Their elemental composition, particularly the concentration of hydrogen and oxygen, allows distinguishing them from other organic substances. When these materials are irradiated with neutrons, (n, γ) nuclear reactions are produced, where the emitted photons are prompt gamma rays whose energy is characteristic of each element and whose abundance allows estimating their concentration. The aim of this study was to design, using Monte Carlo methods, a system with a neutron source, a gamma-ray detector and a moderator able to distinguish the presence of RDX and urea. Paraffin, light water, polyethylene and graphite were considered as moderators; HPGe and NaI(Tl) detectors were considered. The design that showed the best performance was the light-water moderator with the HPGe detector and a 241AmBe source. For this design, the values of the ambient dose equivalent around the system were calculated. (Author)

  6. Spent-fuel assay performance and Monte Carlo Analysis of the Rensselaer slowing-down-time spectrometer

    International Nuclear Information System (INIS)

    The slowing-down-time method for the nondestructive assay of light water reactor (LWR) spent fuel is under development at Rensselaer Polytechnic Institute. A series of assay measurements of an LWR fuel assembly replica were carried out at the Rensselaer lead slowing-down-time spectrometer facility by using 238U and 232Th threshold fission detectors and 235U and 239Pu probe chambers. An assay model relating the assay signal and the signals of the probe chambers to the unknown masses of the fissile isotopes in the fuel assembly was developed. The probe chamber data were used to provide individual fission counting spectra of 235U and 239Pu inside the fuel assembly and to simulate spent-fuel assay signals. The fissile isotopic contents of the fuel were determined to better than 1%. Monte Carlo analyses were performed to simulate the experimental measurements, determine certain parameters of the assay system, and investigate the effect of the fuel assembly and hydrogen impurities on the performance of the system. The broadened resolution of the system caused by the presence of the fuel was still found to be sufficient for the accurate and separate assay of the uranium and plutonium fissiles in spent fuel

  7. Analysis of the TRIGA Mark-II benchmark IEU-COMP-THERM-003 with Monte Carlo code MVP

    International Nuclear Information System (INIS)

    The benchmark experiments of the TRIGA Mark-II reactor in the ICSBEP handbook have been analyzed with the Monte Carlo code MVP using the cross section libraries based on JENDL-3.3, JENDL-3.2 and ENDF/B-VI.8. The MCNP calculations have been also performed with the ENDF/B-VI.6 library for comparison between the MVP and MCNP results. For both cores labeled 132 and 133, which have different core configurations, the ratio of the calculated to the experimental results (C/E) for keff obtained by the MVP code is 0.999 for JENDL-3.3, 1.003 for JENDL-3.2, and 0.998 for ENDF/B-VI.8. For the MCNP code, the C/E values are 0.998 for both Core 132 and 133. All the calculated results agree with the reference values within the experimental uncertainties. The results obtained by MVP with ENDF/B-VI.8 and MCNP with ENDF/B-VI.6 differ only by 0.02% for Core 132, and by 0.01% for Core 133. (author)

  8. Monte Carlo model for the analysis and development of III-V Tunnel-FETs and Impact Ionization-MOSFETs

    Science.gov (United States)

    Talbo, V.; Mateos, J.; González, T.; Lechaux, Y.; Wichmann, N.; Bollaert, S.; Vasallo, B. G.

    2015-10-01

    Impact-ionization metal-oxide-semiconductor FETs (I-MOSFETs) are in competition with tunnel FETs (TFETs) to achieve the best behaviour for low-power logic circuits. Concretely, III-V I-MOSFETs are being explored as promising devices due to their good reliability, since the impact ionization events happen away from the gate oxide, and their high cutoff frequency, due to the high electron mobility. To facilitate the design process from the physical point of view, a Monte Carlo (MC) model which includes both impact ionization and band-to-band tunneling is presented. Two ungated InGaAs and InAlAs/InGaAs 100 nm PIN diodes have been simulated. In both devices, the tunnel processes are more frequent than impact ionizations, so that they are found to be appropriate for TFET structures and not for I-MOSFETs. According to our simulations, other narrow bandgap candidates for the III-V heterostructure, such as InAs or GaSb, and/or PININ structures must be considered for a correct I-MOSFET design.

  9. Benchmark analysis of reactivity experiment in the TRIGA Mark 2 reactor using a continuous energy Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    The Musashi-TRIGA Mark 2 reactor provides a good set of experimental data (criticality, control rod worth, and fuel element worth distributions). In a previous paper, keff values for different fuel loading patterns, ranging from the minimum core to the full one, were provided, so those data would be a candidate for an ICSBEP evaluation. The evaluation of the control rod worth and fuel element worth distributions presented in this paper could be an excellent benchmark data set applicable to the validation of calculation techniques used in the field of modern research reactors. As a result of the simulation of the TRIGA-2 benchmark experiment, performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A, it was found that the MCNP-calculated values of control rod worth were consistent with the experimental data for both the rod-drop and period methods. For the fuel and graphite element worth distributions, the MCNP-calculated values agreed well with the measured ones, although the actual control rod positions had to be taken into account when calculating the reactivity of fuel elements positioned in the inner ring. (G.K.)

  10. Experimental and Monte-Carlo absolute efficiency calibration of HPGE γ-ray spectrometer for application in neutron activation analysis

    International Nuclear Information System (INIS)

    The High Purity Germanium (HPGe) detector is widely used to measure γ-rays from neutron-activated foils used for neutron spectrum measurements, owing to its good energy resolution and photopeak efficiency. To determine the neutron-induced activity in foils, it is very important to carry out an absolute calibration of the photo-peak efficiency over a wide range of γ-ray energies. Neutron-activated foils are considered as extended γ-ray sources, whereas the sources available for efficiency calibration are usually point sources. It is therefore difficult to determine the photo-peak efficiency for extended sources using these point sources. A method has been developed to address this problem. It combines experimental measurements with point sources and the development of an optimized model for the Monte-Carlo N-Particle code (MCNP) based on these measurements. This MCNP model can then be used to find the photo-peak efficiency for any kind of source at any energy. (author)

  11. DS86 neutron dose: Monte Carlo analysis for depth profile of 152Eu activity in a large stone sample.

    Science.gov (United States)

    Endo, S; Iwatani, K; Oka, T; Hoshi, M; Shizuma, K; Imanaka, T; Takada, J; Fujita, S; Hasai, H

    1999-06-01

    The depth profile of 152Eu activity induced in a large granite stone pillar by Hiroshima atomic bomb neutrons was calculated by a Monte Carlo N-Particle Transport Code (MCNP). The pillar was on the Motoyasu Bridge, located at a distance of 132 m (WSW) from the hypocenter. It was a square column with a horizontal sectional size of 82.5 cm x 82.5 cm and height of 179 cm. Twenty-one cells from the north to south surface at the central height of the column were specified for the calculation and 152Eu activities for each cell were calculated. The incident neutron spectrum was assumed to be the angular fluence data of the Dosimetry System 1986 (DS86). The angular dependence of the spectrum was taken into account by dividing the whole solid angle into twenty-six directions. The calculated depth profile of specific activity did not agree with the measured profile. A discrepancy was found in the absolute values at each depth with a mean multiplication factor of 0.58 and also in the shape of the relative profile. The results indicated that a reassessment of the neutron energy spectrum in DS86 is required for correct dose estimation. PMID:10494148

  12. Medium-range order in alkali metaphosphate glasses and melts investigated by reverse Monte Carlo simulations and diffraction analysis

    International Nuclear Information System (INIS)

    Reverse Monte Carlo simulations have been performed on the alkali metaphosphate glasses Na0.5Li0.5PO3 and LiPO3, based on structural experimental data obtained by neutron and x-ray diffraction at 300 K for both systems and versus temperature up to the melting point for the mixed composition. It appears that the contrast effect due to the negative scattering length of Li is not the only reason for the difference in the intensity of the prepeak observed in the two systems. The main structural difference lies in the intermediate-range order, while the short-range order is quite similar in both systems. Moreover, it is shown that the intensity increase of the prepeak in the Na0.5Li0.5PO3 structure factor is due to the partial structure factors of the PO4 tetrahedron, supporting the hypothesis of an ordering between several PO4 tetrahedra and voids with temperature
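
    The reverse Monte Carlo procedure itself is a simple Metropolis-style loop: perturb one atom, recompute the structure-related observable, and accept the move if the agreement with the experimental data does not get worse (or occasionally even if it does). Below is a toy one-dimensional sketch that fits a pair-distance histogram rather than the neutron and x-ray structure factors used in the study; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_histogram(x, box, bins):
    """Normalized pair-distance histogram with periodic boundaries (toy 1-D observable)."""
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, box - d)[np.triu_indices(len(x), k=1)]
    h, _ = np.histogram(d, bins=bins, range=(0.0, box / 2.0))
    return h / h.sum()

def rmc_fit(target, n=64, box=10.0, bins=25, steps=20000, sigma=0.02):
    """Toy reverse Monte Carlo: single-particle moves accepted when they reduce
    chi^2 against the target histogram (or with probability exp(-d_chi2/2))."""
    x = rng.uniform(0.0, box, n)
    chi2 = np.sum((pair_histogram(x, box, bins) - target) ** 2) / sigma**2
    for _ in range(steps):
        i = rng.integers(n)
        old = x[i]
        x[i] = (old + rng.normal(0.0, 0.3)) % box
        new_chi2 = np.sum((pair_histogram(x, box, bins) - target) ** 2) / sigma**2
        if new_chi2 <= chi2 or rng.random() < np.exp(-(new_chi2 - chi2) / 2.0):
            chi2 = new_chi2
        else:
            x[i] = old   # reject the move: restore the old position
    return x, chi2

# usage: target = pair_histogram(reference_positions, box=10.0, bins=25); rmc_fit(target)
```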

  13. A Monte Carlo Analysis of Weight Data from UF6 Cylinder Feed and Withdrawal Stations

    Energy Technology Data Exchange (ETDEWEB)

    Garner, James R [ORNL; Whitaker, J Michael [ORNL

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Center (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator's proprietary and sensitive information related to operations. A key question that must be resolved is: what frequency of recording data from the process F/W stations is necessary to achieve safeguards objectives? This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling frequencies to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

  14. Analysis of Osiris In-Core Surveillance Dosimetry for Gondole Steel Irradiation Program by Using TRIPOLI-4 Monte Carlo Code

    Science.gov (United States)

    Lee, Y. K.; Malouch, F.

    2009-08-01

    In order to assess the possibility of swelling of austenitic steels used for the core internals of pressurized water reactors (PWR), a multi-year irradiation program, called GONDOLE, is ongoing in the OSIRIS material testing reactor at the CEA-Saclay site. This experiment consists of the irradiation of several density specimens at high temperature (>350 °C). The first phase of the GONDOLE irradiation run was completed in January 2006 after six reactor cycles of twenty days, and the surveillance dosimetry results of the first phase were available by the end of 2006. The purpose of this paper is to present the neutron calculation methodology applied to the GONDOLE program using the continuous-energy Monte Carlo 3D transport code TRIPOLI-4. For the specimens of virgin materials and the dosimeters located at the core mid-plane, the calculation and measurement results of the first phase of the irradiation run will be presented. In addition, a prediction calculation of helium gas production in the virgin materials will be introduced.

  15. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 2. Assessment of MCNP Statistical Analysis of keff Eigenvalue Convergence with an Analytical Criticality Verification Test Set

    International Nuclear Information System (INIS)

    Monte Carlo simulations of nuclear criticality eigenvalue problems are often performed by general purpose radiation transport codes such as MCNP. MCNP performs detailed statistical analysis of the criticality calculation and provides feedback to the user with warning messages, tables, and graphs. The purpose of the analysis is to provide the user with sufficient information to assess spatial convergence of the eigenfunction and thus the validity of the criticality calculation. As a test of this statistical analysis package in MCNP, analytic criticality verification benchmark problems have been used for the first time to assess the performance of the criticality convergence tests in MCNP. The MCNP statistical analysis capability has been recently assessed using the 75 multigroup criticality verification analytic problem test set. MCNP was verified with these problems at the 10^-4 to 10^-5 statistical error level using 40 000 histories per cycle and 2000 active cycles. In all cases, the final boxed combined keff answer was given with the standard deviation and three confidence intervals that contained the analytic keff. To test the effectiveness of the statistical analysis checks in identifying poor eigenfunction convergence, ten problems from the test set were deliberately run incorrectly using 1000 histories per cycle, 200 active cycles, and 10 inactive cycles. Six problems with large dominance ratios were chosen from the test set because they do not achieve the normal spatial mode in the beginning of the calculation. To further stress the convergence tests, these problems were also started with an initial fission source point 1 cm from the boundary, thus increasing the likelihood of a poorly converged initial fission source distribution. The final combined keff confidence intervals for these deliberately ill-posed problems did not include the analytic keff value. In no case did a bad confidence interval go undetected. Warning messages were given signaling that the
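
    The confidence intervals referred to above are built from the spread of the active-cycle keff estimates. A simplified stand-in for that calculation is sketched below; the real code combines several keff estimators and applies further convergence checks, so this only illustrates the basic t-interval step.

```python
import numpy as np
from scipy import stats

def keff_confidence(cycle_keff, level=0.95):
    """Mean keff over active cycles with a t-based confidence interval
    (treating cycle estimates as approximately independent samples)."""
    k = np.asarray(cycle_keff, dtype=float)
    n = k.size
    mean = k.mean()
    std_err = k.std(ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(0.5 + level / 2.0, df=n - 1) * std_err
    return mean, std_err, (mean - half_width, mean + half_width)

# toy usage with made-up cycle estimates
print(keff_confidence([0.9981, 1.0003, 0.9994, 1.0012, 0.9990, 1.0001]))
```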

  16. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions.

    Science.gov (United States)

    Barrett, Harrison H; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  18. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  20. Component analysis of sodium void reactivity of step type FBR cores with group-wise Monte Carlo code 'GMVP'

    International Nuclear Information System (INIS)

    Reactivity components composing the sodium void reactivity in an FBR core are analyzed with the group-wise Monte Carlo code GMVP, which has been developed by JAEA. The typical way to analyze the reactivity components is to use the perturbation method based on diffusion calculations, but the diffusion approximation cannot be applied appropriately to some types of FBR cores containing large cavity regions. However, in order to identify an optimized FBR core with negative sodium void reactivity, we need to know the components of the sodium void reactivity of cores which have a small void reactivity, and such cores are sometimes accompanied by adjacent large cavity regions or gas plenum zones. In this study, we have employed GMVP to simulate the cavity region exactly in geometry and to evaluate the neutron behavior rigorously in terms of reactor physics. The cross-section library used is the JFS-3-J3.3 70-group constant set, which is compiled from the JENDL-3.3 library. The objective core is a 'step type' two-zone core, which has a lower inner-core height relative to the height of the outer core, and the upper axial blanket is eliminated to enhance the upward neutron leakage under voided conditions. The reactivity component due to neutron leakage is derived from the difference in the k-effective obtained from direct GMVP calculations between the intact and voided cores, and the non-leakage components are evaluated using the real and adjoint fluxes calculated with GMVP. The paper presents how the contributions of both components change when the core height is changed, along with the void reactivity of the cores. (author)

  1. Decision-aided maximum likelihood phase estimation with optimum block length in hybrid QPSK/16QAM coherent optical WDM systems

    Science.gov (United States)

    Zhang, Yong; Wang, Yulong

    2016-01-01

    We propose a general model to entirely describe XPM effects induced by 16QAM channels in hybrid QPSK/16QAM wavelength division multiplexed (WDM) systems. A power spectral density (PSD) formula is presented to predict the statistical properties of XPM effects at the end of dispersion management (DM) fiber links. We derive the analytical expression of the phase error variance for optimizing the block length of the QPSK channel coherent receiver with decision-aided (DA) maximum-likelihood (ML) phase estimation (PE). With our theoretical analysis, the optimum block length can be employed to improve the performance of the coherent receiver. Bit error rate (BER) performance in the QPSK channel is evaluated and compared through both theoretical derivation and Monte Carlo simulation. The results show that by using the DA-ML with optimum block length, the bit signal-to-noise ratio (SNR) improvement over DA-ML with fixed block lengths of 10, 20 and 40 at a BER of 10^-3 is 0.18 dB, 0.46 dB and 0.65 dB, respectively, when the in-line residual dispersion is 0 ps/nm.
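
    In its basic block form, the decision-aided ML phase estimate is the argument of the correlation between the received samples and the decided symbols within a block. A minimal sketch of that block estimator follows; it is generic and does not include the XPM/PSD model or the block-length optimization developed in the paper.

```python
import numpy as np

def da_ml_phase(received, decisions):
    """Decision-aided ML carrier-phase estimate over one block:
    phi_hat = arg( sum_k r_k * conj(d_k) )."""
    v = np.vdot(np.asarray(decisions), np.asarray(received))  # sum of conj(d_k) * r_k
    return np.angle(v)

# toy usage: QPSK symbols rotated by 0.1 rad plus a little noise
rng = np.random.default_rng(0)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 1000)))
r = syms * np.exp(1j * 0.1) + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(da_ml_phase(r, syms))   # close to 0.1
```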

  2. Energy dispersive X-ray fluorescence spectroscopy/Monte Carlo simulation approach for the non-destructive analysis of corrosion patina-bearing alloys in archaeological bronzes: The case of the bowl from the Fareleira 3 site (Vidigueira, South Portugal)

    International Nuclear Information System (INIS)

    Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts, both in terms of qualitative and quantitative elemental composition, because of its rapidity and non-destructiveness. In this study, EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina has been characterized by micro X-ray Diffractometry (μXRD) and Back-Scattered Scanning Electron Microscopy + Energy Dispersive Spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample. - Highlights: • EDXRF/Monte Carlo simulation is used to characterize an archaeological alloy. • EDXRF analysis was performed on cleaned and patina-coated areas of the artifact. • The EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered. • When the patina is too thick, X-rays from the substrate are unable to exit the sample

  3. Monte Carlo simulation for soot dynamics

    Directory of Open Access Journals (Sweden)

    Zhou Kun

    2012-01-01

    Full Text Available A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.

  4. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.

  5. Converging Stereotactic Radiotherapy Using Kilovoltage X-Rays: Experimental Irradiation of Normal Rabbit Lung and Dose-Volume Analysis With Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Purpose: To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. Methods and Materials: A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. Results: A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. Conclusions: A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.

  6. Fast and Accurate Identification of Cross-Linked Peptides for the Structural Analysis of Large Protein Complexes and Elucidation of Interaction Networks

    DEFF Research Database (Denmark)

    Rasmussen, Morten

    Fast and Accurate Identification of Cross-Linked Peptides for the structural analysis of large protein complexes and to elucidate interaction networks. Salman Tahir, Jimi-Carlo Bukowski-Wills, Morten Rasmussen, Juri Rappsilber. Wellcome Trust Centre for Cell Biology, Edinburgh, United Kingdom. Novel...

  7. Vibrato Monte Carlo and the calculation of greeks

    OpenAIRE

    Keegan, Sinead

    2008-01-01

    In computational finance Monte Carlo simulation can be used to calculate the correct prices of financial options, and to compute the values of the associated Greeks (the derivatives of the option price with respect to certain input parameters). The main methods used for the calculation of Greeks are finite difference, likelihood ratio, and pathwise sensitivity. Each of these has its limitations and in particular the pathwise sensitivity approach may not be used for an option...
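
    Two of the Greek-calculation methods named above, pathwise sensitivity and the likelihood ratio method, can be illustrated for the delta of a European call under Black-Scholes dynamics. This is a generic sketch with illustrative parameters, not the vibrato estimator developed in the thesis.

```python
import numpy as np

def mc_call_delta(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, n=500_000, seed=1):
    """Monte Carlo delta of a European call via the pathwise and likelihood-ratio methods."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)  # terminal prices
    disc = np.exp(-r * t)
    pathwise = disc * np.mean((st > k) * st / s0)
    likelihood_ratio = disc * np.mean(np.maximum(st - k, 0.0) * z / (s0 * sigma * np.sqrt(t)))
    return pathwise, likelihood_ratio

# both estimates approach the Black-Scholes delta (about 0.637 for these parameters)
print(mc_call_delta())
```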

  8. Approximate maximum likelihood estimation using data-cloning ABC

    OpenAIRE

    Picchini, Umberto; Anderson, Rachele

    2015-01-01

    A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called "data cloning" for maximum likelihood estimation. The accuracy of ABC methods relies on the use of a small threshold value for comparing simulations from the model and observed data. The proposed methodology shows how to use large threshold ...
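
    For context, the simplest ABC scheme underlying such approaches is plain rejection sampling: draw a parameter from the prior, simulate data, and keep the draw when a summary of the simulation falls within a tolerance of the observed summary. A minimal sketch follows; the summary statistic, tolerance and names are illustrative, and the paper's ABC-MCMC with data cloning is not shown.

```python
import numpy as np

def abc_rejection(observed_summary, simulate, prior_sample, n_draws=50_000, eps=0.05):
    """Plain ABC rejection: keep draws whose simulated summary lies within eps of the data."""
    kept = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed_summary) <= eps:
            kept.append(theta)
    return np.array(kept)

# toy usage: infer the mean of a N(theta, 1) population from an observed sample mean of 1.3
rng = np.random.default_rng(0)
post = abc_rejection(1.3,
                     simulate=lambda th: rng.normal(th, 1.0, 50).mean(),
                     prior_sample=lambda: rng.uniform(-5.0, 5.0))
print(post.mean())   # roughly 1.3
```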

  9. Likelihood ratios: Clinical application in day-to-day practice

    Directory of Open Access Journals (Sweden)

    Parikh Rajul

    2009-01-01

    Full Text Available In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
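
    The pretest-to-posttest chain described in the article (probability to odds, multiply by the likelihood ratio, convert back to probability) fits in a few lines; the numbers in the example are invented for illustration, not taken from the article.

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Pretest probability -> pretest odds -> posttest odds -> posttest probability."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g. a 30% pretest probability and a positive test with LR+ = 8
print(posttest_probability(0.30, 8.0))   # about 0.77
```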

  10. Likelihood-based inference with singular information matrix

    OpenAIRE

    Rotnitzky, Andrea; David R Cox; Bottai, Matteo; Robins, James

    2000-01-01

    We consider likelihood-based asymptotic inference for a p-dimensional parameter θ of an identifiable parametric model with singular information matrix of rank p-1 at θ=θ* and likelihood differentiable up to a specific order. We derive the asymptotic distribution of the likelihood ratio test statistics for the simple null hypothesis that θ=θ* and of the maximum likelihood estimator (MLE) of θ when θ=θ*. We show that there exists a reparametrization such that the MLE of the last p-1 components ...

  11. The Posterior Distribution of the Likelihood Ratio as a Measure of Evidence

    Science.gov (United States)

    Smith, I.; Ferrari, A.

    2011-03-01

    This paper deals with simple versus composite hypothesis testing under Bayesian and frequentist settings. The Posterior distribution of the Likelihood Ratio (PLR) concept is proposed in [1] for significance testing. The PLR is shown to be equal to 1 minus the p-value in a simple case. The PLR is used in [2] in order to calibrate p-values, Fractional Bayes Factors (FBF) and others. Dempster's equivalence result is slightly extended by adding a nuisance parameter to the test. On the other hand, in [3] the p-values and the posterior probability of the null hypothesis Pr(H0|x) (seen as a Bayesian measure of evidence against the null hypothesis) are shown to be irreconcilable. Actually, as emphasized in [4], Pr(H0|x) is a measure of accuracy of a test, not a measure of evidence in a formal sense, because it does not involve the likelihood ratio. The PLR may give such a measure of evidence and be related to a natural p-value. In this presentation, in a classical invariance framework, the PLR with inner threshold 1 will be shown to be equal to 1 minus a p-value where the test statistic is the likelihood, weighted by a term that accounts for a volume distortion effect. Other analytical properties of the PLR will be proved in more general settings. The minimum of its support is equal to the Generalized Likelihood Ratio if H0 is nested in H1, and its moments are directly related to the (F)BF for a proper prior. Its relation to credible domains is also studied. Practical issues will also be considered. The PLR can be implemented using a simple Monte Carlo Markov Chain and will be applied to extrasolar planet detection using direct imaging.

  12. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    Science.gov (United States)

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore, requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  13. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques

    International Nuclear Information System (INIS)

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ∼10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ∼6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore, requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle Transport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ∼20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  14. The fine-tuning cost of the likelihood in SUSY models

    International Nuclear Information System (INIS)

    In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters $\gamma_i = \{m_0, m_{1/2}, \mu_0, A_0, B_0, \ldots\}$ and the maximal likelihood $L$ to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio $L/\Delta$ of the usual likelihood $L$ and the traditional fine-tuning measure $\Delta$ of the EW scale. A similar result is obtained for the likelihood integrated over the set $\{\gamma_i\}$, which can be written as a surface integral of the ratio $L/\Delta$, with the surface in $\gamma_i$ space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio $L/\Delta$ or, equivalently, a small $\chi^2_{\rm new} = \chi^2_{\rm old} + 2\ln\Delta$. This shows the fine-tuning cost to the likelihood ($\chi^2_{\rm new}$) of the EW scale stability enforced by SUSY, which is ignored in data fits. A good $\chi^2_{\rm new}/{\rm d.o.f.} \approx 1$ thus demands that SUSY models have a fine-tuning amount $\Delta \ll \exp({\rm d.o.f.}/2)$, which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can rule out SUSY models without a further $\chi^2/{\rm d.o.f.}$ analysis. Numerical methods to fit the data can easily be adapted to account for this effect.

  15. Particle in cell/Monte Carlo collision analysis of the problem of identification of impurities in the gas by the plasma electron spectroscopy method

    Science.gov (United States)

    Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.

    2016-06-01

    The work deals with the Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, 1d3v PIC/MCC code for numerical simulation of glow discharge with nonlocal electron energy distribution function is developed. The elastic, excitation, and ionization collisions between electron-neutral pairs and isotropic scattering and charge exchange collisions between ion-neutral pairs and Penning ionizations are taken into account. Applicability of the numerical code is verified under the Radio-Frequency capacitively coupled discharge conditions. The efficiency of the code is increased by its parallelization using Open Message Passing Interface. As a demonstration of the PLES method, parallel PIC/MCC code is applied to the direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of formation of nonlocal EEDF and existing experimental data.

  16. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    ... goodness metric on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by the measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found to be responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
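
    The exact functional forms compared in the study are not given in this excerpt, so the sketch below only illustrates the general pattern of turning an uncertainty-weighted RMSE into a likelihood-like goodness score; all three forms shown are assumptions for illustration.

```python
import numpy as np

def rmse(sim, obs):
    return np.sqrt(np.mean((np.asarray(sim, float) - np.asarray(obs, float)) ** 2))

# Plausible goodness metrics of the kind compared above: each maps an
# uncertainty-weighted RMSE to a likelihood-like score (higher is better).
def exp_likelihood(sim, obs, unc):       return float(np.exp(-rmse(sim, obs) / unc))
def linear_likelihood(sim, obs, unc):    return max(0.0, 1.0 - rmse(sim, obs) / unc)
def quadratic_likelihood(sim, obs, unc): return max(0.0, 1.0 - (rmse(sim, obs) / unc) ** 2)

# toy usage with made-up simulated and observed values
obs, sim = [1.0, 2.0, 3.0], [1.1, 1.8, 3.2]
print(exp_likelihood(sim, obs, unc=0.5))
```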

  17. Monte Carlo Radiative Transfer

    CERN Document Server

    Whitney, Barbara A

    2011-01-01

    I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

  18. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  19. Planck intermediate results: XVI. Profile likelihoods for cosmological parameters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.;

    2014-01-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...

  20. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.