WorldWideScience

Sample records for carlo likelihood analysis

  1. Monte Carlo Simulations for Likelihood Analysis of the PEN experiment

    Science.gov (United States)

    Glaser, Charles; PEN Collaboration

    2017-01-01

    The PEN collaboration performed a precision measurement of the π+ -> e+ νe(γ) branching ratio with the goal of obtaining a relative uncertainty of 5 × 10^-4 or better at the Paul Scherrer Institute. A precision measurement of the branching ratio Γ(π -> e ν (γ)) / Γ(π -> μ ν (γ)) can be used to give mass bounds on "new", or non V-A, particles and interactions. This ratio also proves to be one of the most sensitive tests for lepton universality. The PEN detector consists of beam counters, an active target, a mini-time projection chamber, a multi-wire proportional chamber, a plastic scintillating hodoscope, and a CsI electromagnetic calorimeter. The Geant4 Monte Carlo simulation is used to construct ultra-realistic events by digitizing energies and times, creating synthetic target waveforms, and fully accounting for photo-electron statistics. We focus on the detailed detector response to specific decay and background processes in order to sharpen the discrimination between them in the data analysis. Work supported by NSF grants PHY-0970013, 1307328, and others.

  2. A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    1996-01-01

    We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the

  3. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ --> e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ --> e+ ν, π+ --> μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
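
    As an illustration of the per-event likelihood construction described above, the following minimal Python sketch fits process fractions by maximum likelihood under toy assumptions (a single observable and hypothetical Gaussian process PDFs); it is not the PEN collaboration's analysis code.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(0)

      # Hypothetical one-observable toy: each process gets its own energy PDF,
      # standing in for the Monte Carlo verified PDFs of energies, times, etc.
      process_means = np.array([70.0, 30.0, 45.0, 55.0, 20.0])   # pi->e nu, pi->mu nu, DIF, pile-up, hadronic
      true_frac = np.array([0.6, 0.3, 0.05, 0.03, 0.02])
      labels = rng.choice(5, size=5000, p=true_frac)
      energy = rng.normal(process_means[labels], 8.0)

      # pdfs[i, k] = density of event i under process k
      pdfs = norm.pdf(energy[:, None], loc=process_means[None, :], scale=8.0)

      def neg_log_lik(raw):
          w = np.exp(raw)
          frac = w / w.sum()                     # process fractions constrained to sum to 1
          return -np.sum(np.log(pdfs @ frac))    # per-event mixture likelihood

      res = minimize(neg_log_lik, x0=np.zeros(5), method="Nelder-Mead")
      frac_hat = np.exp(res.x) / np.exp(res.x).sum()
      print("fitted process fractions:", np.round(frac_hat, 3))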

  4. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric

  5. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum...... likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  6. Exact likelihood-free Markov chain Monte Carlo for elliptically contoured distributions.

    Science.gov (United States)

    Muchmore, Patrick; Marjoram, Paul

    2015-08-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible.
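
    The pseudo-marginal idea underlying this record can be sketched in a few lines: replace the exact likelihood in a Metropolis-Hastings chain with an unbiased Monte Carlo estimate and keep the estimate attached to the current state. The toy latent-variable model below is illustrative only and is not the authors' estimator for elliptically contoured distributions.

      import numpy as np

      rng = np.random.default_rng(1)
      true_mu = 2.0
      data = rng.normal(true_mu, np.sqrt(2.0), size=60)   # marginal model is N(mu, 2)

      def log_lik_hat(mu, n_mc=128):
          # Unbiased Monte Carlo estimate of the marginal likelihood of a toy
          # latent-variable model: x_i | z_i ~ N(z_i, 1), z_i ~ N(mu, 1).
          # Independent z-draws per datum keep the product of averages unbiased.
          z = rng.normal(mu, 1.0, size=(n_mc, data.size))
          dens = np.exp(-0.5 * (data[None, :] - z) ** 2) / np.sqrt(2 * np.pi)
          return np.sum(np.log(dens.mean(axis=0)))

      # Pseudo-marginal Metropolis-Hastings: accept/reject with the *estimated*
      # likelihood; the stationary distribution matches the exact-likelihood chain.
      n_iter, mu_cur = 4000, 0.0
      ll_cur = log_lik_hat(mu_cur)
      chain = []
      for _ in range(n_iter):
          mu_prop = mu_cur + rng.normal(0.0, 0.3)
          ll_prop = log_lik_hat(mu_prop)                # fresh estimate for the proposal
          if np.log(rng.uniform()) < ll_prop - ll_cur:  # flat prior for simplicity
              mu_cur, ll_cur = mu_prop, ll_prop         # keep the accepted estimate with the state
          chain.append(mu_cur)
      print("approximate posterior mean of mu:", round(float(np.mean(chain[1000:])), 2))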

  7. cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    CERN Document Server

    Ishida, E E O; Penna-Lima, M; Cisewski, J; de Souza, R S; Trindade, A M M; Cameron, E

    2015-01-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present cosmoabc, a Python ABC sampler featuring a Population Monte Carlo (PMC) variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled cosmoabc with the numcosmo library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function. cosmoabc is published under the GPLv3 license on PyPI and GitHub and documentation is availabl...
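
    A plain ABC rejection sketch conveys the core loop (the PMC variant in cosmoabc additionally adapts the tolerance and reweights particles by importance sampling); the Poisson simulator and the distance function below are placeholders, not the cosmoabc API.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulator(theta, n=200):
          # forward simulation of mock data; stands in for a cluster number-counts simulator
          return rng.poisson(theta, size=n)

      def distance(sim, obs):
          # user-defined distance between the synthetic and observed catalogues
          return abs(sim.mean() - obs.mean())

      obs = rng.poisson(4.2, size=200)          # "observed" catalogue
      eps, accepted = 0.1, []                   # tolerance and accepted parameters
      while len(accepted) < 500:
          theta = rng.uniform(0.1, 10.0)        # draw from the prior
          if distance(simulator(theta), obs) <= eps:
              accepted.append(theta)            # keep parameters whose mocks resemble the data
      print("approximate posterior mean of theta:", round(float(np.mean(accepted)), 2))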

  8. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum...... likelihood estimator is proved, and the asymptotic distribution of the maximum likelihood estimator is given. It is shown that the asymptotic distribution is either Gaussian, mixed Gaussian or, in some cases, even more complicated....

  9. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad-hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with focal mechanism forecasts, if both are based on likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...
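
    The information (likelihood) score being discussed can be illustrated with a crude discretised toy, in which the forecast and the null are simple probability vectors over orientation bins; the real double-couple null distribution is non-uniform, which this sketch does not capture.

      import numpy as np

      rng = np.random.default_rng(3)

      # Discretise orientation into bins: a crude stand-in for the 3-D double-couple
      # rotation space, whose true null distribution is not uniform.
      n_bins = 36
      null_pdf = np.full(n_bins, 1.0 / n_bins)                 # uniform null, for simplicity
      forecast_pdf = np.exp(-0.5 * ((np.arange(n_bins) - 18) / 4.0) ** 2)
      forecast_pdf /= forecast_pdf.sum()                       # forecast concentrated near bin 18

      # "Observed" mechanisms drawn from the forecast, i.e. the forecast is skilful
      observed = rng.choice(n_bins, size=200, p=forecast_pdf)

      # Information score: mean log2 probability gain of the forecast over the null
      info_score = np.mean(np.log2(forecast_pdf[observed] / null_pdf[observed]))
      print("information gain per event (bits):", round(float(info_score), 2))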

  10. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  11. Likelihood Analysis of Supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY]; Costa, J. C. [Imperial Coll., London]; Sakurai, K. [Warsaw U.]; Borsato, M. [Santiago de Compostela U.]; Buchmueller, O. [Imperial Coll., London]; Cavanaugh, R. [Illinois U., Chicago]; Chobanova, V. [Santiago de Compostela U.]; Citron, M. [Imperial Coll., London]; De Roeck, A. [Antwerp U.]; Dolan, M. J. [Melbourne U.]; Ellis, J. R. [King's Coll. London]; Flächer, H. [Bristol U.]; Heinemeyer, S. [Madrid, IFT]; Isidori, G. [Zurich U.]; Lucio, M. [Santiago de Compostela U.]; Martínez Santos, D. [Santiago de Compostela U.]; Olive, K. A. [Minnesota U., Theor. Phys. Inst.]; Richards, A. [Imperial Coll., London]; de Vries, K. J. [Imperial Coll., London]; Weiglein, G. [DESY]

    2016-10-31

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\tilde u_R}/{\tilde c_R} - \tilde{\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ...

  12. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K.A.; Richards, A.; de Vries, K.J.; Weiglein, G.

    2016-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  13. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  14. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

    estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty...

  15. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are upd

  16. Joint analysis of prevalence and incidence data using conditional likelihood.

    Science.gov (United States)

    Saarela, Olli; Kulathinal, Sangita; Karvanen, Juha

    2009-07-01

    Disease prevalence is the combined result of duration, disease incidence, case fatality, and other mortality. If information is available on all these factors, and on fixed covariates such as genotypes, prevalence information can be utilized in the estimation of the effects of the covariates on disease incidence. Study cohorts that are recruited as cross-sectional samples and subsequently followed up for disease events of interest produce both prevalence and incidence information. In this paper, we make use of both types of information using a likelihood, which is conditioned on survival until the cross section. In a simulation study making use of real cohort data, we compare the proposed conditional likelihood method to a standard analysis where prevalent cases are omitted and the likelihood expression is conditioned on healthy status at the cross section.
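
    A minimal sketch of the conditioning idea, under simplified assumptions (Weibull event times, no censoring or case fatality): each subject recruited at the cross-section contributes f(t_i)/S(a_i), i.e. a density conditioned on being event-free at the recruitment age a_i. It is not the authors' full prevalence-incidence likelihood.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)

      # Weibull event times; a subject recruited at cross-section age a_i is only
      # observed if still event-free at a_i, so T_i is drawn conditionally on T > a_i.
      k_true, lam_true = 1.7, 30.0
      a = rng.uniform(0.0, 25.0, size=800)
      u = rng.uniform(size=800)
      t = lam_true * ((a / lam_true) ** k_true - np.log(u)) ** (1.0 / k_true)

      def neg_cond_loglik(par):
          k, lam = np.exp(par)                                 # keep parameters positive
          log_f = np.log(k) - k * np.log(lam) + (k - 1) * np.log(t) - (t / lam) ** k
          log_S_entry = -(a / lam) ** k                        # log S(a_i)
          return -np.sum(log_f - log_S_entry)                  # condition on survival to a_i

      res = minimize(neg_cond_loglik, x0=np.log([1.0, 20.0]), method="Nelder-Mead")
      print("estimated Weibull shape and scale:", np.round(np.exp(res.x), 2))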

  17. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  18. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    Science.gov (United States)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which are in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
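
    The GLUE mechanics described here can be sketched with a toy model in place of the groundwater flow model: sample parameters from prior ranges, score each run with an informal likelihood measure against the observations, keep the behavioural runs, and weight their predictions. Everything below is an illustrative stand-in for the study's model and data.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy stand-in for the groundwater model: y = a*x + b, with heads replaced by
      # noisy observations of a straight line.
      x = np.linspace(0.0, 10.0, 25)
      obs = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)

      def likelihood_measure(sim):
          # Nash-Sutcliffe-type goodness of fit, used as an informal GLUE likelihood
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Monte Carlo sampling of parameter sets from broad prior ranges
      a = rng.uniform(0.0, 4.0, size=20000)
      b = rng.uniform(-5.0, 5.0, size=20000)
      L = np.array([likelihood_measure(ai * x + bi) for ai, bi in zip(a, b)])

      # Retain "behavioural" parameter sets and weight them by the likelihood measure
      keep = L > 0.7
      w = L[keep] / L[keep].sum()
      pred = a[keep, None] * x + b[keep, None]

      print("weighted mean prediction at x=10:", round(float(np.sum(w * pred[:, -1])), 2))
      # simple percentile spread; a full GLUE analysis would weight these bounds too
      print("5-95%% spread at x=10: [%.2f, %.2f]"
            % (np.percentile(pred[:, -1], 5), np.percentile(pred[:, -1], 95)))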

  19. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  20. CUSUM control charts based on likelihood ratio for preliminary analysis

    Institute of Scientific and Technical Information of China (English)

    Yi DAI; Zhao-jun WANG; Chang-liang ZOU

    2007-01-01

    For detecting and estimating a shift in the mean, the deviation, or both in a preliminary analysis, the most popular statistical process control (SPC) tool is the control chart based on the likelihood ratio test (LRT). Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ2(2) when the sample sizes n, n1, and n2 are all large, with n1 = 2, 3, ..., n - 2 and n2 = n - n1, so it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for the expectation and the variance of the limit distribution are obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also carries substantial information, and a cumulative sum (CUSUM) control chart can exploit it. We therefore propose two CUSUM control charts based on the likelihood ratio statistics for preliminary analysis of individual observations: one focuses on detecting shifts in location in the historical data, and the other is more general, detecting a shift in the location, the scale, or both. Simulation results show that the two proposed control charts are superior to their competitors, not only in detecting sustained shifts but also in detecting several other out-of-control situations considered in this paper.
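
    For readers unfamiliar with the chart type, a standard one-sided tabular CUSUM for an upward mean shift is sketched below; it is a generic illustration, not the likelihood-ratio-based statistics slr(n1, n) proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      # Individual observations with a sustained upward mean shift after t = 60
      x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.0, 1.0, 40)])

      # One-sided tabular CUSUM for an upward shift in the mean
      k, h = 0.5, 5.0              # reference value and decision interval (in sigma units)
      c_plus, alarms = 0.0, []
      for t, xt in enumerate(x, start=1):
          c_plus = max(0.0, c_plus + xt - k)   # accumulate evidence above the reference
          if c_plus > h:
              alarms.append(t)                 # chart signals an out-of-control condition
      print("first alarm at observation:", alarms[0] if alarms else None)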

  2. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    CERN Document Server

    Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J

    2014-01-01

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from $^{210}$Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  3. Likelihood Analysis of the Minimal AMSB Model

    CERN Document Server

    Bagnaschi, E.; Sakurai, K.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; Costa, J.C.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Luo, F.; Martínez Santos, D.; Olive, K.A.; Richards, A.; Weiglein, G.

    We perform a likelihood analysis of the minimal Anomaly-Mediated Supersymmetry Breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that a wino-like or a Higgsino-like neutralino LSP, $m_{\tilde \chi^0_{1}}$, may provide the cold dark matter (DM) with similar likelihood. The upper limit on the DM density from Planck and other experiments enforces $m_{\tilde \chi^0_{1}} \lesssim 3~TeV$ after the inclusion of Sommerfeld enhancement in its annihilations. If most of the cold DM density is provided by the $\tilde \chi^0_1$, the measured value of the Higgs mass favours a limited range of $\tan \beta \sim 5$ (or for $\mu > 0$, $\tan \beta \sim 45$) but the scalar mass $m_0$ is poorly constrained. In the wino-LSP case, $m_{3/2}$ is constrained to about $900~TeV$ and ${m_{\tilde \chi^0_{1}}}$ to $2.9\pm0.1~TeV$, whereas in the Higgsino-LSP case $m_{3/2}$ has just a lower limit $\gtrsim 650~TeV$ ($\gtrsim 480~TeV$) and $m_{\tilde \chi^0_{1}}$ is constrained to $1.12~(1.13)\pm0.02...

  4. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  5. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Science.gov (United States)

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  6. Maximum-likelihood analysis of the COBE angular correlation function

    Science.gov (United States)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q_rms-PS and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q_rms-PS = (15.7 ± 2.6) exp(0.46(1 - n)) μK for fixed n. Our result is slightly smaller than and has a smaller statistical uncertainty than the 1992 estimate of Smoot et al.

  7. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    We propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these

  8. Conditional likelihood inference in a case- cohort design: an application to haplotype analysis.

    Science.gov (United States)

    Saarela, Olli; Kulathinal, Sangita

    2007-01-01

    Under the setting of a case-cohort design, covariate values are ascertained for a smaller subgroup of the original study cohort, which typically is a representative sample from a population. Individuals with a specific event outcome are selected to the second stage study group as cases and an additional subsample is selected to act as a control group. We carry out analysis of such a design using conditional likelihood where the likelihood expression is conditioned on the ascertainment to the second stage study group. Such a likelihood expression involves the probability of ascertainment, which needs to be expressed in terms of the model parameters. We present examples of conditional likelihoods for models for categorical response and time-to-event response. We show that the conditional likelihood inference leads to valid estimation of population parameters. Our application considers joint estimation of haplotype-event association parameters and population haplotype frequencies based on SNP genotype data collected under a case-cohort design.

  9. Likelihood analysis of the Local Group acceleration revisited

    CERN Document Server

    Ciecielag, P

    2004-01-01

    We reexamine likelihood analyses of the Local Group (LG) acceleration, paying particular attention to nonlinear effects. Under the approximation that the joint distribution of the LG acceleration and velocity is Gaussian, two quantities describing nonlinear effects enter these analyses. The first one is the coherence function, i.e. the cross-correlation coefficient of the Fourier modes of gravity and velocity fields. The second one is the ratio of velocity power spectrum to gravity power spectrum. To date, in all analyses of the LG acceleration the second quantity was not accounted for. Extending our previous work, we study both the coherence function and the ratio of the power spectra. With the aid of numerical simulations we obtain expressions for the two as functions of wavevector and σ_8. Adopting WMAP's best determination of σ_8, we estimate the most likely value of the parameter β and its errors. As the observed values of the LG velocity and gravity, we adopt respectively a CMB-based estim...

  10. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  11. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can be assumed neither Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢^0 law. This paper deals with amplitude data, so the 𝒢_A^0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments, and on order statistics) of the parameters of the 𝒢_A^0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  12. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  13. Vectorized Monte Carlo methods for reactor lattice analysis

    Science.gov (United States)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  14. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    Science.gov (United States)

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response, are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters, illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
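
    The key device, marginalising the likelihood over missing class membership, can be sketched with a toy capture model in which only detection probability differs by sex; a real SCR likelihood would additionally model activity centres and distance-dependent detection. All parameter values below are assumptions for illustration.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import binom

      rng = np.random.default_rng(7)

      # Toy capture data: K sampling occasions with sex-specific detection probability
      K, n = 10, 300
      sex = rng.binomial(1, 0.4, size=n)                  # 1 = male with probability 0.4
      p_sex = np.where(sex == 1, 0.35, 0.15)
      y = rng.binomial(K, p_sex)                          # detections per individual
      known = rng.uniform(size=n) < 0.4                   # sex recorded for only ~40%

      def neg_loglik(par):
          p_f, p_m, psi = 1 / (1 + np.exp(-par))          # probabilities via logit transform
          lf, lm = binom.pmf(y, K, p_f), binom.pmf(y, K, p_m)
          ll = np.where(known & (sex == 0), np.log((1 - psi) * lf), 0.0)
          ll += np.where(known & (sex == 1), np.log(psi * lm), 0.0)
          ll += np.where(~known, np.log((1 - psi) * lf + psi * lm), 0.0)  # marginalise missing sex
          return -np.sum(ll)

      res = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
      p_f, p_m, psi = 1 / (1 + np.exp(-res.x))
      print("p_female, p_male, sex ratio:", np.round([p_f, p_m, psi], 2))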

  15. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations

    Science.gov (United States)

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…

  16. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations

    NARCIS (Netherlands)

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influ

  17. A gradient Markov chain Monte Carlo algorithm for computing multivariate maximum likelihood estimates and posterior distributions: mixture dose-response assessment.

    Science.gov (United States)

    Li, Ruochen; Englehardt, James D; Li, Xiaoguang

    2012-02-01

    Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided.
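
    The two-stage scheme can be sketched generically: a gradient-based optimiser finds the posterior mode, which then initialises a Metropolis chain so that little burn-in is needed. The Gaussian toy model below is an assumption standing in for the multivariate dose-response distributions of the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(8)
      data = rng.normal(3.0, 2.0, size=100)

      def neg_log_post(par):
          mu, log_sigma = par
          sigma = np.exp(log_sigma)
          # Gaussian log-likelihood with a flat prior, so the posterior mode equals the MLE
          return 0.5 * np.sum((data - mu) ** 2) / sigma ** 2 + data.size * log_sigma

      # Stage 1: gradient-based search for the posterior mode (numerical gradients here)
      mode = minimize(neg_log_post, x0=np.array([0.0, 0.0]), method="BFGS").x

      # Stage 2: random-walk Metropolis initialised at the mode, replacing a long burn-in
      cur, ll_cur, chain = mode.copy(), -neg_log_post(mode), []
      for _ in range(5000):
          prop = cur + rng.normal(0.0, 0.05, size=2)
          ll_prop = -neg_log_post(prop)
          if np.log(rng.uniform()) < ll_prop - ll_cur:
              cur, ll_cur = prop, ll_prop
          chain.append(cur.copy())
      chain = np.array(chain)
      print("posterior means (mu, sigma):",
            np.round([chain[:, 0].mean(), np.exp(chain[:, 1]).mean()], 2))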

  18. A penalized likelihood approach for bivariate conditional normal models for dynamic co-expression analysis.

    Science.gov (United States)

    Chen, Jun; Xie, Jichun; Li, Hongzhe

    2011-03-01

    Gene co-expressions have been widely used in the analysis of microarray gene expression data. However, the co-expression patterns between two genes can be mediated by cellular states, as reflected by expression of other genes, single nucleotide polymorphisms, and activity of protein kinases. In this article, we introduce a bivariate conditional normal model for identifying the variables that can mediate the co-expression patterns between two genes. Based on this model, we introduce a likelihood ratio (LR) test and a penalized likelihood procedure for identifying the mediators that affect gene co-expression patterns. We propose an efficient computational algorithm based on iterative reweighted least squares and cyclic coordinate descent and have shown that when the tuning parameter in the penalized likelihood is appropriately selected, such a procedure has the oracle property in selecting the variables. We present simulation results to compare with existing methods and show that the LR-based approach can perform similarly or better than the existing method of liquid association and the penalized likelihood procedure can be quite effective in selecting the mediators. We apply the proposed method to yeast gene expression data in order to identify the kinases or single nucleotide polymorphisms that mediate the co-expression patterns between genes.

  19. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    Science.gov (United States)

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariable yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.

  20. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  1. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    Science.gov (United States)

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.

  2. The R Package metaLik for Likelihood Inference in Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Annamaria Guolo

    2012-08-01

    Meta-analysis is a statistical method for combining information from different studies about the same issue of interest. Meta-analysis is widely used in medical research and has more recently attracted growing interest in the social sciences as well. Typical applications involve a small number of studies, thus making ordinary inferential methods based on first-order asymptotics unreliable. More accurate results can be obtained by exploiting the theory of higher-order asymptotics. This paper describes the metaLik package, which provides an R implementation of higher-order likelihood methods in meta-analysis. The extension to meta-regression is included. Two real data examples are used to illustrate the capabilities of the package.

  3. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    Science.gov (United States)

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience.
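
    The mechanics of summing per-study log-likelihood functions can be sketched with normal-approximation likelihoods; the made-up estimates and the 1/8 likelihood-ratio support interval below are illustrative stand-ins for the paper's 'intrinsic' confidence interval.

      import numpy as np

      # Per-study effect estimates and standard errors (made-up numbers, purely to
      # illustrate the mechanics of summing log-likelihood functions across studies).
      est = np.array([0.25, 0.10, 0.40, 0.18])
      se = np.array([0.12, 0.20, 0.15, 0.10])

      theta = np.linspace(-0.5, 1.0, 3001)
      # Normal-approximation log-likelihood of each study, summed across studies
      log_lik = -0.5 * ((theta[:, None] - est[None, :]) / se[None, :]) ** 2
      total = log_lik.sum(axis=1)

      theta_hat = theta[np.argmax(total)]
      # Support interval where the combined likelihood ratio stays above 1/8
      inside = theta[total >= total.max() - np.log(8)]
      print("combined estimate: %.3f, 1/8 LR interval: [%.3f, %.3f]"
            % (theta_hat, inside.min(), inside.max()))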

  4. Likelihood-based association analysis for nuclear families and unrelated subjects with missing genotype data.

    Science.gov (United States)

    Dudbridge, Frank

    2008-01-01

    Missing data occur in genetic association studies for several reasons including missing family members and uncertain haplotype phase. Maximum likelihood is a commonly used approach to accommodate missing data, but it can be difficult to apply to family-based association studies, because of possible loss of robustness to confounding by population stratification. Here a novel likelihood for nuclear families is proposed, in which distinct sets of association parameters are used to model the parental genotypes and the offspring genotypes. This approach is robust to population structure when the data are complete, and has only minor loss of robustness when there are missing data. It also allows a novel conditioning step that gives valid analysis for multiple offspring in the presence of linkage. Unrelated subjects are included by regarding them as the children of two missing parents. Simulations and theory indicate similar operating characteristics to TRANSMIT, but with no bias with missing data in the presence of linkage. In comparison with FBAT and PCPH, the proposed model is slightly less robust to population structure but has greater power to detect strong effects. In comparison to APL and MITDT, the model is more robust to stratification and can accommodate sibships of any size. The methods are implemented for binary and continuous traits in software, UNPHASED, available from the author.

  5. Benefits of maximum likelihood estimators for fracture attribute analysis: Implications for permeability and up-scaling

    Science.gov (United States)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2017-02-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimate bulk permeability and therefore fluid flow, especially for rocks with low primary porosity where most of the flow takes place within fractures. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA), which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using Maximum Likelihood Estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representative statistic for both fracture lengths and apertures in the study area. By applying Maximum Likelihood Estimators on outcrop fracture data, we generate fracture network models with the same statistical attributes to the ones observed on outcrop, from which we can achieve more robust predictions of bulk permeability.
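
    For the log-normal case favoured by the study, the maximum likelihood estimators are available in closed form from the log-transformed lengths; the sketch below uses synthetic lengths (an assumption, not the field data) and cross-checks the hand calculation against scipy's fit.

      import numpy as np
      from scipy.stats import lognorm

      rng = np.random.default_rng(9)

      # Synthetic fracture lengths in metres, standing in for the outcrop measurements
      lengths = rng.lognormal(mean=-0.5, sigma=0.9, size=400)

      # Closed-form maximum likelihood estimators for the log-normal parameters
      mu_hat = np.log(lengths).mean()
      sigma_hat = np.log(lengths).std(ddof=0)          # MLE uses the 1/n variance

      # Equivalent fit via scipy (shape = sigma, scale = exp(mu), location fixed at 0)
      shape, loc, scale = lognorm.fit(lengths, floc=0)
      print("by hand: mu=%.3f sigma=%.3f" % (mu_hat, sigma_hat))
      print("scipy:   mu=%.3f sigma=%.3f" % (np.log(scale), shape))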

  6. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
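
    The classifier-based approximation alluded to in the abstract can be sketched with the likelihood-ratio trick: train a classifier on samples simulated at two parameter points and convert its output probability into an approximate likelihood ratio. The one-dimensional Gaussian toy and the logistic-regression choice below are assumptions for illustration only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(10)

      # Simulated events at two parameter points ("theta_0" vs "theta_1")
      x0 = rng.normal(0.0, 1.0, size=(5000, 1))
      x1 = rng.normal(1.0, 1.0, size=(5000, 1))
      X = np.vstack([x0, x1])
      y = np.concatenate([np.zeros(5000), np.ones(5000)])

      # With equal class sizes, s/(1-s) approximates p(x|theta_1)/p(x|theta_0)
      clf = LogisticRegression().fit(X, y)
      x_test = np.array([[0.5]])
      s = clf.predict_proba(x_test)[0, 1]
      lr_approx = s / (1.0 - s)

      # For this Gaussian toy the exact ratio is available for comparison
      lr_exact = np.exp(-0.5 * ((x_test - 1.0) ** 2 - x_test ** 2)).item()
      print("approximate LR: %.2f, exact LR: %.2f" % (lr_approx, lr_exact))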

  7. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
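
    For the simplest case mentioned above, a confidence region for a univariate mean, the empirical likelihood ratio can be profiled by solving for a single Lagrange multiplier; the sketch below does this on synthetic skewed data and is a minimal illustration, not the book's software.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import chi2

      rng = np.random.default_rng(11)
      x = rng.exponential(2.0, size=80)            # data from a skewed distribution

      def neg2_log_elr(mu):
          # Empirical likelihood ratio for the mean: weights w_i = 1/(n(1 + lam*(x_i - mu)))
          # with lam chosen so that the weighted mean equals mu.
          d = x - mu
          lo = (1.0 / len(x) - 1.0) / d.max() + 1e-10   # bracket keeps all 1 + lam*d_i > 0
          hi = (1.0 / len(x) - 1.0) / d.min() - 1e-10
          lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
          return 2.0 * np.sum(np.log(1.0 + lam * d))

      # 95% EL confidence region: all mu with -2 log R(mu) below the chi-square cutoff
      grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 401)
      inside = [m for m in grid if neg2_log_elr(m) <= chi2.ppf(0.95, df=1)]
      print("95%% EL interval for the mean: [%.2f, %.2f]" % (min(inside), max(inside)))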

  8. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    CERN Document Server

    Kreutz, Clemens; Timmer, Jens

    2011-01-01

    Mechanistic dynamic models of biochemical networks such as Ordinary Differential Equations (ODEs) contain unknown parameters like the reaction rate constants and the initial concentrations of the compounds. The large number of parameters as well as their nonlinear impact on the model responses hamper the determination of confidence regions for parameter estimates. At the same time, classical approaches translating the uncertainty of the parameters into confidence intervals for model predictions are hardly feasible. In this article it is shown that a so-called prediction profile likelihood yields reliable confidence intervals for model predictions, despite arbitrarily complex and high-dimensional shapes of the confidence regions for the estimated parameters. Prediction confidence intervals of the dynamic states allow a data-based observability analysis. The approach renders the issue of sampling a high-dimensional parameter space into evaluating one-dimensional prediction spaces. The method is also applicable ...

  9. A Monte Carlo template-based analysis for very high definition imaging atmospheric Cherenkov telescopes as applied to the VERITAS telescope array

    CERN Document Server


    2015-01-01

    We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by a maximisation of a likelihood function. Maximisation of the likelihood as a function of shower fit parameters is performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower compared to the second-moment analysis of the camera image. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.

  10. Likelihood-Based Association Analysis for Nuclear Families and Unrelated Subjects with Missing Genotype Data

    OpenAIRE

    Dudbridge, Frank

    2008-01-01

    Missing data occur in genetic association studies for several reasons including missing family members and uncertain haplotype phase. Maximum likelihood is a commonly used approach to accommodate missing data, but it can be difficult to apply to family-based association studies, because of possible loss of robustness to confounding by population stratification. Here a novel likelihood for nuclear families is proposed, in which distinct sets of association parameters are used to model the pare...

  11. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou...

  12. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    Science.gov (United States)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
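
    A much-simplified stand-in for the model selection step described above: inter-photon times are treated as an independent exponential mixture (rather than a full Markov-modulated Poisson process), fitted by EM for several numbers of states, and compared by BIC. All data and settings are synthetic and illustrative only.

```python
# Much-simplified sketch of BIC model selection on inter-photon waiting times:
# an independent exponential mixture stands in for a Markov-modulated Poisson
# process. Fit K = 1..4 components by EM and pick the model with the lowest BIC.
import numpy as np

def fit_exp_mixture(x, K, n_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(K, 1.0 / K)                         # mixing weights
    rate = rng.uniform(0.5, 1.5, K) / x.mean()      # initial decay rates
    for _ in range(n_iter):
        dens = w * rate * np.exp(-np.outer(x, rate))        # (n, K) component densities
        resp = dens / dens.sum(axis=1, keepdims=True)       # E-step responsibilities
        w = resp.mean(axis=0)                               # M-step
        rate = resp.sum(axis=0) / (resp * x[:, None]).sum(axis=0)
    loglik = np.log((w * rate * np.exp(-np.outer(x, rate))).sum(axis=1)).sum()
    return loglik, 2 * K - 1                        # K rates + (K - 1) free weights

rng = np.random.default_rng(3)
n = 2000                                            # roughly "2 x 10^3 photons"
state = rng.choice([0, 1], size=n, p=[0.6, 0.4])
x = rng.exponential(scale=np.where(state == 0, 1.0, 0.1))   # two well-separated levels

for K in (1, 2, 3, 4):
    loglik, k_free = fit_exp_mixture(x, K)
    bic = k_free * np.log(n) - 2.0 * loglik
    print(f"K = {K}:  log-likelihood = {loglik:10.2f}   BIC = {bic:10.2f}")
```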

  13. Gray matter atrophy in narcolepsy: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Weng, Hsu-Huei; Chen, Chih-Feng; Tsai, Yuan-Hsiung; Wu, Chih-Ying; Lee, Meng; Lin, Yu-Ching; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh

    2015-12-01

    The authors reviewed the literature on the use of voxel-based morphometry (VBM) in narcolepsy magnetic resonance imaging (MRI) studies via a meta-analysis of neuroimaging to identify concordant and specific structural deficits in patients with narcolepsy as compared with healthy subjects. We used PubMed to retrieve articles published between January 2000 and March 2014. The authors included all VBM research on narcolepsy and compared the findings of the studies by using gray matter volume (GMV) or gray matter concentration (GMC) to index differences in gray matter. Stereotactic data were extracted from 8 VBM studies of 149 narcoleptic patients and 162 control subjects. We applied the activation likelihood estimation (ALE) technique and found significant regional gray matter reduction in the bilateral hypothalamus, thalamus, globus pallidus, extending to nucleus accumbens (NAcc) and anterior cingulate cortex (ACC), left mid orbital and rectal gyri (BAs 10 and 11), right inferior frontal gyrus (BA 47), and the right superior temporal gyrus (BA 41) in patients with narcolepsy. The significant gray matter deficits in narcoleptic patients occurred in the bilateral hypothalamus and frontotemporal regions, which may be related to the emotional processing abnormalities and orexin/hypocretin pathway common among populations of patients with narcolepsy.

  14. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy for the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
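
    For readers unfamiliar with GLUE itself, the sketch below shows the bare framework the paper builds on: plain Monte Carlo parameter sampling, a Nash-Sutcliffe likelihood measure, a behavioral threshold, and likelihood-weighted prediction bounds. The toy linear-reservoir model and all numbers are assumptions for illustration; it does not implement the ɛ-NSGAII sampler or the Xinanjiang model.

```python
# Compact GLUE sketch (plain Monte Carlo sampling, not the paper's e-NSGAII):
# sample parameters, score each set with a likelihood measure (Nash-Sutcliffe),
# keep the "behavioral" sets above a threshold, and form weighted prediction bounds.
import numpy as np

def linear_reservoir(k, rain):
    """Toy rainfall-runoff model: storage update s <- s + rain - k*s, outflow q = k*s."""
    s, q = 0.0, []
    for r in rain:
        s += r - k * s
        q.append(k * s)
    return np.array(q)

rng = np.random.default_rng(4)
rain = rng.gamma(2.0, 1.0, size=100)
q_obs = linear_reservoir(0.3, rain) + rng.normal(0.0, 0.1, size=100)  # synthetic observations

k_samples = rng.uniform(0.05, 0.95, size=5000)          # 1. Monte Carlo parameter sampling
sims = np.array([linear_reservoir(k, rain) for k in k_samples])

# 2. Likelihood measure: Nash-Sutcliffe efficiency of each parameter set
nse = 1.0 - np.sum((sims - q_obs) ** 2, axis=1) / np.sum((q_obs - q_obs.mean()) ** 2)

# 3. Behavioral sets and likelihood weights
behavioral = nse > 0.7
weights = nse[behavioral] / nse[behavioral].sum()
print(f"{behavioral.sum()} behavioral parameter sets out of {len(k_samples)}")

# 4. Likelihood-weighted 5-95% prediction bounds at each time step
beh_sims = sims[behavioral]
lo, hi = [], []
for t in range(len(rain)):
    order = np.argsort(beh_sims[:, t])
    cdf = np.cumsum(weights[order])
    lo.append(beh_sims[order, t][np.searchsorted(cdf, 0.05)])
    hi.append(beh_sims[order, t][np.searchsorted(cdf, 0.95)])
print("mean width of the 90% uncertainty band:", np.mean(np.array(hi) - np.array(lo)))
```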

  15. The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2010-01-01

    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.

  16. Altered sensorimotor activation patterns in idiopathic dystonia-an activation likelihood estimation meta-analysis of functional brain imaging studies

    DEFF Research Database (Denmark)

    Løkkegaard, Annemette; Herz, Damian M; Haagensen, Brian N;

    2016-01-01

    Further, study size was usually small, including different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity. Hum Brain Mapp 37:547-557, 2016. © 2015 Wiley Periodicals, Inc.

  17. Eigenvalue analysis using a full-core Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.; Zino, J.F. (Westinghouse Savannah River Co., Aiken, SC (United States))

    1992-01-01

    The reactor physics codes used at the Savannah River Site (SRS) to predict reactor behavior have been continually benchmarked against experimental and operational data. A particular benchmark variable is the observed initial critical control rod position. Historically, there has been some difficulty predicting this position because of the difficulties inherent in using computer codes to model experimental or operational data. The Monte Carlo method is applied in this paper to study the initial critical control rod positions for the SRS K Reactor. A three-dimensional, full-core MCNP model of the reactor was developed for this analysis.

  18. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Thomas P. DeRamus

    2015-01-01

    Full Text Available Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD, and wider age-range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD, and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD may entail altered cortical anatomy, and neurodevelopmental adaptations.

  19. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders.

    Science.gov (United States)

    DeRamus, Thomas P; Kana, Rajesh K

    2015-01-01

    Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD, and wider age-range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD, and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal-temporal and occipital regions, and thickening in frontal cortices in older adults with ASD may entail altered cortical anatomy, and neurodevelopmental adaptations.

  20. Criticality accident detector coverage analysis using the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Zino, J.F.; Okafor, K.C.

    1993-12-31

    As a result of the need for a more accurate computational methodology, the Los Alamos-developed Monte Carlo code MCNP is used to show the implementation of a more advanced and accurate methodology in criticality accident detector analysis. This paper will detail the application of MCNP for the analysis of the areas of coverage of a criticality accident alarm detector located inside a concrete storage vault at the Savannah River Site. The paper will discuss: (1) the generation of fixed-source representations of various criticality fission sources (for spherical geometries); (2) the normalization of these sources to the "minimum criticality of concern" as defined by ANS 8.3; (3) the optimization process used to determine which source produces the lowest total detector response for a given set of conditions; and (4) the use of this minimum source for the analysis of the areas of coverage of the criticality accident alarm detector.

  1. Neural networks involved in adolescent reward processing: An activation likelihood estimation meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Silverman, Merav H; Jedd, Kelly; Luciana, Monica

    2015-11-15

    Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: (1) confirm the network of brain regions involved in adolescents' reward processing, (2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and (3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence.

  2. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility falls short in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method can feasibly be applied to the analysis of stability failure risk for gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value.

  3. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    Science.gov (United States)

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  4. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  5. Conditional likelihood methods for haplotype-based association analysis using matched case-control data.

    Science.gov (United States)

    Chen, Jinbo; Rodriguez, Carmen

    2007-12-01

    Genetic epidemiologists routinely assess disease susceptibility in relation to haplotypes, that is, combinations of alleles on a single chromosome. We study statistical methods for inferring haplotype-related disease risk using single nucleotide polymorphism (SNP) genotype data from matched case-control studies, where controls are individually matched to cases on some selected factors. Assuming a logistic regression model for haplotype-disease association, we propose two conditional likelihood approaches that address the issue that haplotypes cannot be inferred with certainty from SNP genotype data (phase ambiguity). One approach is based on the likelihood of disease status conditioned on the total number of cases, genotypes, and other covariates within each matching stratum, and the other is based on the joint likelihood of disease status and genotypes conditioned only on the total number of cases and other covariates. The joint-likelihood approach is generally more efficient, particularly for assessing haplotype-environment interactions. Simulation studies demonstrated that the first approach was more robust to model assumptions on the diplotype distribution conditioned on environmental risk variables and matching factors in the control population. We applied the two methods to analyze a matched case-control study of prostate cancer.

  6. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Mohamed, A.

    1998-07-10

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result over that if conventional source-sampling methods are used. However, this gain in reliability is substantially less than that observed in the model-problem results.

  7. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    Oblique incidence reflectometry has developed into an effective, noncontact, and noninvasive measurement technology for the quantification of both the reduced scattering and absorption coefficients of a sample. The optical properties are deduced by analyzing only the shape of the reflectance...... profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...... properties in which system demands vary to be able to detect subtle changes in the structure of the medium, translated as measured optical properties. Effects of variation in anisotropy are discussed and results presented. Finally, experimental data of milk products with different fat content are considered...

  8. Empirical Likelihood Analysis of Longitudinal Data Involving Within-subject Correlation

    Institute of Scientific and Technical Information of China (English)

    Shuang HU; Lu LIN

    2012-01-01

    In this paper we use profile empirical likelihood to construct confidence regions for regression coefficients in a partially linear model with longitudinal data. The main contribution is that the within-subject correlation is considered to improve estimation efficiency. We suppose a semi-parametric structure for the covariances of observation errors in each subject and employ both the first order and the second order moment conditions of the observation errors to construct the estimating equations. Although there are nonparametric estimators, the empirical log-likelihood ratio statistic still tends in distribution to a standard χ²_p variable after the nuisance parameters are profiled away. A data simulation is also conducted.

  9. Recursive Pathways to Marginal Likelihood Estimation with Prior-Sensitivity Analysis

    CERN Document Server

    Cameron, Ewan

    2013-01-01

    We investigate the utility to contemporary Bayesian studies of recursive, Gauss-Seidel-type pathways to marginal likelihood estimation characterized by reverse logistic regression and the density of states. Through a pair of illustrative, numerical examples (including mixture modeling of the well-known 'galaxy dataset') we highlight both the remarkable diversity of bridging schemes amenable to recursive normalization and the notable efficiency of the resulting pseudo-mixture densities for gauging prior-sensitivity in the model selection context. Our key theoretical contributions show the connection between the nested sampling identity and the density of states. Further, we introduce a novel heuristic ('thermodynamic integration via importance sampling') for qualifying the role of the bridging sequence in marginal likelihood estimation. An efficient pseudo-mixture density scheme for harnessing the information content of otherwise discarded draws in ellipse-based nested sampling is also introduced.

  10. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL⁻³) to O(TOL⁻² log(TOL⁻¹)²) for a mean square error of O(TOL²). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL⁻⁴). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL⁻³) for the adaptive single level algorithm to essentially O(TOL⁻² log(TOL⁻¹)²) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
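
    A minimal, non-adaptive MLMC estimator with uniform time steps (the baseline that the adaptive algorithms above improve on) can be written in a few lines for E[X_T] of geometric Brownian motion. The coupling of coarse and fine Euler-Maruyama paths is the essential ingredient; using the same number of paths on every level is a simplification, since a production MLMC chooses the per-level sample sizes from estimated variances.

```python
# Minimal non-adaptive MLMC sketch with uniform time steps: estimate E[X_T] for
# geometric Brownian motion dX = mu*X dt + sigma*X dW with coupled coarse/fine
# Euler-Maruyama paths on each level. A fixed number of paths per level is used
# for brevity; a production MLMC would size each level from estimated variances.
import numpy as np

mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0
rng = np.random.default_rng(5)

def level_estimator(level, n_paths, M=2):
    """Monte Carlo mean of P_fine - P_coarse on one level, with P = X_T."""
    n_fine = M ** level
    dt_f = T / n_fine
    Xf = np.full(n_paths, X0)
    Xc = np.full(n_paths, X0)
    dW_sum = np.zeros(n_paths)
    for step in range(n_fine):
        dW = rng.normal(0.0, np.sqrt(dt_f), n_paths)
        Xf += mu * Xf * dt_f + sigma * Xf * dW
        dW_sum += dW
        if level > 0 and (step + 1) % M == 0:       # coarse path uses the summed increments
            Xc += mu * Xc * (M * dt_f) + sigma * Xc * dW_sum
            dW_sum[:] = 0.0
    return Xf.mean() if level == 0 else (Xf - Xc).mean()

L, n_paths = 5, 20000
estimate = sum(level_estimator(l, n_paths) for l in range(L + 1))
print("MLMC estimate of E[X_T]:", estimate)
print("exact value:            ", X0 * np.exp(mu * T))
```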

  11. Monte Carlo analysis of radiative transport in oceanographic lidar measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale

    2001-07-01

    The analysis of oceanographic lidar system measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: the Rayleigh elastic scattering, produced by atoms and molecules with small dimensions with respect to the laser emission wavelength (i.e. water molecules), the Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols), the Raman inelastic scattering, typical of water, the absorption of water, inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols, the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of the radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability of the receiver to have a contribution from photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The retrieved semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is

  12. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    Science.gov (United States)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to advantages over conventional ones. However the performance of compressive sensing (CS)-based estimation methods decreases when true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. Especially under the condition of a large array, we search for an approximately convex range around the true DOAs to guarantee that the DML function is convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function with a large array and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.

  13. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    Directory of Open Access Journals (Sweden)

    Aslıhan Kıymalıoğlu

    2014-12-01

    Full Text Available The Elaboration Likelihood Model (ELM), which supports the existence of two routes to persuasion (central and peripheral), has been one of the major models of persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered to be contributory for this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept most frequently used in the elaboration process was involvement, and that argument quality and endorser credibility were the factors most often employed in measuring their effect on the dependent variables. The review provides valuable insights as it presents a holistic view of the model and the variables used in the model.

  14. Regression analysis based on conditional likelihood approach under semi-competing risks data.

    Science.gov (United States)

    Hsieh, Jin-Jian; Huang, Yu-Ting

    2012-07-01

    Medical studies often involve semi-competing risks data, which consist of two types of events, namely a terminal event and a non-terminal event. Because the non-terminal event may be dependently censored by the terminal event, it is not possible to make inference on the non-terminal event without extra assumptions. Therefore, this study assumes that the dependence structure of the non-terminal event and the terminal event follows a copula model, and lets the marginal regression models of the non-terminal event and the terminal event both follow time-varying effect models. This study uses a conditional likelihood approach to estimate the time-varying coefficient of the non-terminal event, and proves the large sample properties of the proposed estimator. Simulation studies show that the proposed estimator performs well. This study also uses the proposed method to analyze data from the AIDS Clinical Trial Group study ACTG 320.

  15. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  16. Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis

    Directory of Open Access Journals (Sweden)

    Hyung Jin Shim

    2015-01-01

    Full Text Available The α-k iteration method, which searches for the fundamental-mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for Monte Carlo (MC) alpha-static calculations of supercritical systems. However, the α-k iteration method for deep subcritical system analysis suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to an abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method named the α iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculation stability is achieved by controlling the number of time source neutrons, which are generated in proportion to α divided by the neutron speed in MC neutron transport simulations. The effectiveness of the α iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparisons with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of the thorium-loaded accelerator-driven system.

  17. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    Science.gov (United States)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to check its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperation, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the phase of research and development, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope specific activity content in waste drums by SGS measurement, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmark of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum

  18. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    Energy Technology Data Exchange (ETDEWEB)

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  19. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Full Text Available Abstract Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, such as Maximum Likelihood (ML), and compared the results using a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.

  20. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    Science.gov (United States)

    Kudrolli, Haris A.

    2001-04-01

    A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates
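
    The ML-EM update at the heart of such reconstructions is compact enough to show on a toy dense system matrix. In the SS3D approach described above the system response is never stored explicitly but sampled by Monte Carlo; the snippet below is only a small-scale illustration of the update itself, with an assumed random system matrix.

```python
# Tiny dense-matrix illustration of the ML-EM update used in emission tomography:
#   lambda <- lambda / (A^T 1) * A^T ( y / (A lambda) )
# In the SS3D approach described above, A is never stored explicitly; its action
# is sampled by Monte Carlo instead. Everything here is a small synthetic stand-in.
import numpy as np

rng = np.random.default_rng(6)
n_pixels, n_lors = 64, 256

A = rng.uniform(0.0, 1.0, size=(n_lors, n_pixels))      # toy system response matrix
A /= A.sum(axis=0, keepdims=True)                       # normalise detection probability
lam_true = rng.gamma(2.0, 10.0, size=n_pixels)          # "true" activity image
y = rng.poisson(A @ lam_true)                           # measured counts per line of response

lam = np.ones(n_pixels)                                 # flat initial estimate
sens = A.sum(axis=0)                                    # sensitivity image A^T 1
for _ in range(200):
    expected = A @ lam
    lam *= (A.T @ (y / np.maximum(expected, 1e-12))) / sens

print("relative RMS error of the reconstruction:",
      np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true))
```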

  1. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    Science.gov (United States)

    Xu, Maoqi; Chen, Liang

    2016-10-21

    The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex disease.

  2. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the population is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
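
    The composite-likelihood idea itself, maximizing a product of low-dimensional likelihood pieces instead of the full joint likelihood, can be shown on a generic toy problem. The equicorrelated Gaussian example below is an assumption for illustration and has nothing to do with the demographic model of the paper.

```python
# Generic toy illustration of composite likelihood (not the paper's demographic
# model): estimate the common correlation rho of an equicorrelated Gaussian by
# maximising the sum of all pairwise bivariate log-likelihoods.
import numpy as np
from itertools import combinations
from scipy import optimize, stats

rng = np.random.default_rng(7)
d, n, rho_true = 6, 400, 0.4
cov = np.full((d, d), rho_true) + (1.0 - rho_true) * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), cov, size=n)
pairs = list(combinations(range(d), 2))

def neg_pairwise_cl(rho):
    cov2 = np.array([[1.0, rho], [rho, 1.0]])
    total = 0.0
    for i, j in pairs:                      # product of bivariate marginal likelihoods
        total += stats.multivariate_normal.logpdf(
            X[:, [i, j]], mean=[0.0, 0.0], cov=cov2).sum()
    return -total

res = optimize.minimize_scalar(neg_pairwise_cl, bounds=(-0.95, 0.95), method="bounded")
print("pairwise composite-likelihood estimate of rho:", round(res.x, 3))
```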

  3. Accuracy Analysis of Assembly Success Rate with Monte Carlo Simulations

    Institute of Scientific and Technical Information of China (English)

    仲昕; 杨汝清; 周兵

    2003-01-01

    Monte Carlo simulation was applied to Assembly Success Rate (ASR) analyses. The ASR of two peg-in-hole robot assemblies was used as an example, taking component part sizes, manufacturing tolerances and robot repeatability into account. A statistical expression was proposed and deduced in this paper, which offers an alternative method of estimating the accuracy of the ASR without having to repeat the simulations. This statistical method also helps to choose a suitable sample size if error reduction is desired. Monte Carlo simulation results demonstrated the feasibility of the method.
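
    A minimal sketch of this kind of Monte Carlo assembly-success-rate estimate for a peg-in-hole fit, with assumed (hypothetical) tolerance and repeatability values: the binomial standard error sqrt(p(1-p)/N) stands in for the accuracy expression derived in the paper, which is not reproduced here.

```python
# Minimal Monte Carlo assembly-success-rate sketch for a peg-in-hole fit with
# assumed tolerances: sample part sizes and the robot positioning error, and
# count the trials in which the peg clears the hole. The binomial standard error
# sqrt(p*(1-p)/N) stands in for the paper's accuracy expression.
import numpy as np

rng = np.random.default_rng(8)
N = 100_000

peg_d  = rng.normal(9.95, 0.02, N)           # peg diameter (mm), manufacturing tolerance
hole_d = rng.normal(10.00, 0.02, N)          # hole diameter (mm)
offset = np.abs(rng.normal(0.0, 0.015, N))   # robot lateral repeatability (mm)

success = (hole_d - peg_d) / 2.0 > offset    # insertion succeeds if clearance > offset
p_hat = success.mean()
std_err = np.sqrt(p_hat * (1.0 - p_hat) / N)
print(f"estimated ASR = {p_hat:.4f} +/- {std_err:.4f}")

# Sample size needed for a target standard error, without rerunning the simulation
target = 0.001
print("N required for SE =", target, ":", int(np.ceil(p_hat * (1.0 - p_hat) / target ** 2)))
```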

  4. Likelihood analysis of species occurrence probability from presence-only data for modelling species distributions

    Science.gov (United States)

    Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.

    2012-01-01

    1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package, which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities

  5. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-02-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
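
    A generic illustration of the time-domain approach described above, Gaussian maximum likelihood with an explicitly constructed and Cholesky-factored data covariance: a simple white-plus-AR(1) error model stands in for the white-plus-power-law models of the paper, whose covariance construction is more involved.

```python
# Generic illustration of time-domain MLE for a rate with temporally correlated
# errors: build the data covariance C, Cholesky-factor it, and profile out the
# trend parameters by generalised least squares. A white + AR(1) error model
# stands in for the white + power-law models discussed above.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(9)
n = 300
t = np.arange(n, dtype=float)
lags = np.abs(np.subtract.outer(t, t))

# Synthetic data: intercept + rate*t + AR(1) noise + white noise
phi_true, s_ar, s_w, rate_true = 0.8, 1.0, 0.5, 0.02
ar = np.zeros(n)
for i in range(1, n):
    ar[i] = phi_true * ar[i - 1] + rng.normal(0.0, s_ar)
y = 3.0 + rate_true * t + ar + rng.normal(0.0, s_w, n)
G = np.column_stack([np.ones(n), t])             # design matrix: intercept and rate

def profile(params):
    """Negative log-likelihood with the trend parameters profiled out by GLS."""
    log_s_w, log_s_ar, z = params
    phi = np.tanh(z)                             # keeps |phi| < 1
    if 1.0 - phi ** 2 < 1e-6:
        return np.inf, None
    C = (np.exp(2 * log_s_ar) / (1 - phi ** 2)) * phi ** lags \
        + np.exp(2 * log_s_w) * np.eye(n)
    Lc = np.linalg.cholesky(C)
    Gw, yw = np.linalg.solve(Lc, G), np.linalg.solve(Lc, y)
    beta, *_ = np.linalg.lstsq(Gw, yw, rcond=None)
    r = yw - Gw @ beta
    return 0.5 * (r @ r) + np.log(np.diag(Lc)).sum(), beta

res = optimize.minimize(lambda p: profile(p)[0], x0=[0.0, 0.0, 0.5], method="Nelder-Mead")
beta = profile(res.x)[1]
print("estimated rate:", round(beta[1], 4), "  true rate:", rate_true)
```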

  6. Algebraic Monte Carlo precedure reduces statistical analysis time and cost factors

    Science.gov (United States)

    Africano, R. C.; Logsdon, T. S.

    1967-01-01

    Algebraic Monte Carlo procedure statistically analyzes performance parameters in large, complex systems. The individual effects of input variables can be isolated and individual input statistics can be changed without having to repeat the entire analysis.

  7. Volume and Direction of the Atlantic Slave Trade, 1650-1870: Estimates by Markov Chain Monte Carlo Analysis

    Directory of Open Access Journals (Sweden)

    Patrick Manning

    2016-03-01

    Full Text Available This article presents methods and results in the application of the Markov Chain Monte Carlo (MCMC) analysis to a problem in missing data. The data used here are from The Atlantic Slave Trade Database (TASTD), 2010 version, available online. Of the currently known 35,000 slaving voyages, original data on the size of the cargo of captives exist for some 25 percent of voyage embarkations in Africa and for about 50 percent of arrivals in the Americas. Previous efforts to estimate the missing data (and project the total number of captives who made the transatlantic migration) have proceeded through eclectic projections of maximum likelihood estimates of captives per voyage, without error margins. This paper creates new estimates of total migrant flow through two methods: one is a formally frequentist set of multiple methods, and the other is through Markov Chain Monte Carlo methodology. Comparison of the three methods, all based on the same raw data, shows that the results of the two new methods are fairly close to one another and they yield total flows of migrant captives more than 20 percent higher than the previous estimates. Quantitative results, presented in simplified graphs and tables within the text and in detailed spreadsheets available online, provide a new estimate of the volume of African embarkations and American arrivals in the transatlantic slave trade for the period from 1650 to 1870, by decade, for eleven African regions of embarkation and seven American and European regions of arrival.

  8. Fast inference in generalized linear models via expected log-likelihoods.

    Science.gov (United States)

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
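
    A toy version of the expected log-likelihood idea for a canonical Poisson GLM with Gaussian covariates, where the expectation of the cumulant term has the closed form exp(mu'w + w'Sigma w/2). The data, dimensions and optimizer below are assumptions for illustration, not the authors' code.

```python
# Toy version of the expected log-likelihood idea for a canonical Poisson GLM
# with Gaussian covariates, where E[exp(x'w)] = exp(mu'w + w'Sigma w / 2) in
# closed form. Data, dimensions and optimiser are assumptions for illustration.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(10)
n, d = 50_000, 8
mu_x = np.zeros(d)
Sigma_x = 0.5 * np.eye(d) + 0.05            # covariate covariance (known by design)
X = rng.multivariate_normal(mu_x, Sigma_x, size=n)
w_true = rng.normal(0.0, 0.2, d)
y = rng.poisson(np.exp(X @ w_true))

xty = X.T @ y                               # sufficient statistic, computed once

def neg_exact_ll(w):
    # exact Poisson log-likelihood (up to a constant): y'Xw - sum_i exp(x_i'w)
    return -(xty @ w - np.exp(X @ w).sum())

def neg_expected_ll(w):
    # replaces the sum over samples by n * E_x[exp(x'w)]
    return -(xty @ w - n * np.exp(mu_x @ w + 0.5 * w @ Sigma_x @ w))

w_exact = optimize.minimize(neg_exact_ll, np.zeros(d), method="BFGS").x
w_el = optimize.minimize(neg_expected_ll, np.zeros(d), method="BFGS").x
print("max |w_exact - w_true|:", np.abs(w_exact - w_true).max())
print("max |w_EL    - w_true|:", np.abs(w_el - w_true).max())
```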

  9. Improved analysis of bias in Monte Carlo criticality safety

    Science.gov (United States)

    Haley, Thomas C.

    2000-08-01

    Criticality safety, the prevention of nuclear chain reactions, depends on Monte Carlo computer codes for most commercial applications. One major shortcoming of these codes is the limited accuracy of the atomic and nuclear data files they depend on. In order to apply a code and its data files to a given criticality safety problem, the code must first be benchmarked against similar problems for which the answer is known. The difference between a code prediction and the known solution is termed the "bias" of the code. Traditional calculations of the bias for application to commercial criticality problems are generally full of assumptions and lead to large uncertainties which must be conservatively factored into the bias as statistical tolerances. Recent trends in storing commercial nuclear fuel---narrowed regulatory margins of safety, degradation of neutron absorbers, the desire to use higher enrichment fuel, etc.---push the envelope of criticality safety. They make it desirable to minimize uncertainty in the bias to accommodate these changes, and they make it vital to understand what assumptions are safe to make under what conditions. A set of improved procedures is proposed for (1) developing multivariate regression bias models, and (2) applying multivariate regression bias models. These improved procedures lead to more accurate estimates of the bias and much smaller uncertainties about this estimate, while also generally providing more conservative results. The drawback is that the procedures are not trivial and are highly labor intensive to implement. The payback in savings in margin to criticality and conservatism for calculations near regulatory and safety limits may be worth this cost. To develop these procedures, a bias model using the statistical technique of weighted least squares multivariate regression is developed in detail. Problems that can occur from a weak statistical analysis are highlighted, and a solid statistical method for developing the bias

  10. Design optimization of the proximity focusing RICH with dual aerogel radiator using a maximum-likelihood analysis of Cherenkov rings

    Science.gov (United States)

    Pestotnik, R.; Križan, P.; Korpar, S.; Iijima, T.

    2008-09-01

    The use of a sequence of aerogel radiators with different refractive indices in a proximity focusing Cherenkov ring imaging detector has been shown to improve the resolution of the Cherenkov angle. In order to obtain further information on the capabilities of such a detector, a maximum-likelihood analysis has been performed on simulated data, with the simulation being appropriate for the upgraded Belle detector. The results show that by using a sequence of two aerogel layers with different refractive indices, the K/π separation efficiency is improved in the kinematic region above 3 GeV/ c. In the low momentum region, the focusing configuration (with n1 and n2 chosen such that the Cherenkov rings from different aerogel layers at 4 GeV/ c overlap) shows a better performance than the defocusing one (where the two Cherenkov rings are well separated).
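
    A minimal sketch of the per-track test underlying such a study is given below. It assumes Gaussian per-photon Cherenkov-angle errors and uses illustrative refractive indices, resolution and photon counts, not the Belle design values.

```python
import numpy as np

# Hedged sketch of a per-track likelihood-ratio test: photons from two aerogel
# layers (indices n1, n2) are compared against the Cherenkov angles expected
# for the pion and kaon hypotheses.

M_PI, M_K = 0.1396, 0.4937          # GeV/c^2
SIGMA = 0.014                        # per-photon angle resolution (rad), assumed

def cherenkov_angle(p, m, n):
    """Expected Cherenkov angle (rad) for momentum p (GeV/c), mass m, index n.
    Assumes the particle is above threshold (n * beta > 1)."""
    beta = p / np.hypot(p, m)
    return np.arccos(1.0 / (n * beta))

def log_likelihood(theta_meas, layer_index, p, m):
    """Sum of Gaussian photon log-likelihoods for one mass hypothesis."""
    expected = np.array([cherenkov_angle(p, m, n) for n in layer_index])
    return float(np.sum(-0.5 * ((theta_meas - expected) / SIGMA) ** 2))

# toy track at 4 GeV/c with photons from layers n1 = 1.045 and n2 = 1.055
p = 4.0
layer_index = np.array([1.045] * 6 + [1.055] * 6)
rng = np.random.default_rng(1)
theta_meas = np.array([cherenkov_angle(p, M_PI, n) for n in layer_index])
theta_meas = theta_meas + rng.normal(0.0, SIGMA, size=theta_meas.size)

llr = log_likelihood(theta_meas, layer_index, p, M_PI) - \
      log_likelihood(theta_meas, layer_index, p, M_K)
print("identified as", "pion" if llr > 0 else "kaon")
```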

  11. A Clustered Multiclass Likelihood-Ratio Ensemble Method for Family-Based Association Analysis Accounting for Phenotypic Heterogeneity.

    Science.gov (United States)

    Wen, Yalu; Lu, Qing

    2016-09-01

    Although compelling evidence suggests that the genetic etiology of complex diseases could be heterogeneous in subphenotype groups, little attention has been paid to phenotypic heterogeneity in genetic association analysis of complex diseases. Simply ignoring phenotypic heterogeneity in association analysis could result in attenuated estimates of genetic effects and low power of association tests if subphenotypes with similar clinical manifestations have heterogeneous underlying genetic etiologies. To facilitate the family-based association analysis allowing for phenotypic heterogeneity, we propose a clustered multiclass likelihood-ratio ensemble (CMLRE) method. The proposed method provides an alternative way to model the complex relationship between disease outcomes and genetic variants. It allows for heterogeneous genetic causes of disease subphenotypes and can be applied to various pedigree structures. Through simulations, we found CMLRE outperformed the commonly adopted strategies in a variety of underlying disease scenarios. We further applied CMLRE to a family-based dataset from the International Consortium to Identify Genes and Interactions Controlling Oral Clefts (ICOC) to investigate the genetic variants and interactions predisposing to subphenotypes of oral clefts. The analysis suggested that two subphenotypes, nonsyndromic cleft lip without palate (CL) and cleft lip with palate (CLP), shared similar genetic etiologies, while cleft palate only (CP) had its own genetic mechanism. The analysis further revealed that rs10863790 (IRF6), rs7017252 (8q24), and rs7078160 (VAX1) were jointly associated with CL/CLP, while rs7969932 (TBK1), rs227731 (17q22), and rs2141765 (TBK1) jointly contributed to CP.

  12. Analysis of multiple exposures in the case-crossover design via sparse conditional likelihood.

    Science.gov (United States)

    Avalos, Marta; Grandvalet, Yves; Adroher, Nuria Duran; Orriols, Ludivine; Lagarde, Emmanuel

    2012-09-20

    We adapt the least absolute shrinkage and selection operator (lasso) and other sparse methods (elastic net and bootstrapped versions of lasso) to the conditional logistic regression model and provide a full R implementation. These variable selection procedures are applied in the context of case-crossover studies. We study the performances of conventional and sparse modelling strategies by simulations, then empirically compare results of these methods on the analysis of the association between exposure to medicinal drugs and the risk of causing an injurious road traffic crash in elderly drivers. Controlling the false discovery rate of lasso-type methods is still problematic, but this problem is also present in conventional methods. The sparse methods have the ability to provide a global analysis of dependencies, and we conclude that some of the variants compared here are valuable tools in the context of case-crossover studies with a large number of variables.
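
    As a rough, hedged sketch of the underlying objective (not the authors' R implementation, which builds on conditional logistic regression with lasso, elastic-net and bootstrap variants), the example below minimises an L1-penalised conditional log-likelihood for 1:M matched case-crossover strata with a plain proximal-gradient (ISTA) loop on simulated exposures.

```python
import numpy as np
from scipy.special import logsumexp, softmax

# Hedged sketch: L1-penalised conditional logistic (case-crossover) likelihood
# minimised by proximal gradient.  Each stratum holds one case period (row 0)
# and M control periods; the data below are simulated and purely illustrative.

def averaged_nll_grad(beta, strata):
    """Average negative conditional log-likelihood and its gradient."""
    nll, grad = 0.0, np.zeros_like(beta)
    for X in strata:                      # row 0 of X is the case period
        eta = X @ beta
        nll += logsumexp(eta) - eta[0]
        grad += X.T @ softmax(eta) - X[0]
    return nll / len(strata), grad / len(strata)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_clogit(strata, lam, step=0.02, n_iter=3000):
    beta = np.zeros(strata[0].shape[1])
    for _ in range(n_iter):
        _, grad = averaged_nll_grad(beta, strata)
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# toy data: 200 strata, 1 case + 3 controls, 10 exposures, 2 truly associated
rng = np.random.default_rng(0)
beta_true = np.array([1.0, -0.8] + [0.0] * 8)
strata = []
for _ in range(200):
    X = rng.normal(size=(4, beta_true.size))
    case = rng.choice(4, p=softmax(X @ beta_true))   # which period is the case
    strata.append(np.vstack([X[case], np.delete(X, case, axis=0)]))

print(np.round(lasso_clogit(strata, lam=0.1), 2))    # sparse estimate of beta
```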

  13. Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.

    Science.gov (United States)

    Kurkela, Kyle A; Dennis, Nancy A

    2016-01-29

    Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe.

  14. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    Science.gov (United States)

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.
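
    The two competing tests can be sketched side by side in a deliberately simplified setting. The example below treats noisy individual clearance values as stand-ins for EBEs and uses ordinary normal likelihoods; the real comparison involves nonlinear mixed-effects PK models, which are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hedged, much-simplified sketch of the two covariate tests compared above.

rng = np.random.default_rng(42)
n = 80
weight = rng.normal(70, 12, n)                       # covariate
cl_true = 5.0 * (weight / 70) ** 0.75                # allometric covariate effect
cl_ebe = cl_true * np.exp(rng.normal(0, 0.2, n))     # "EBE"-like noisy estimates

log_cl, log_wt = np.log(cl_ebe), np.log(weight / 70)

# (1) EBE-based test: regress log(EBE) on the covariate and test the slope
slope, intercept, r, p_ebe, se = stats.linregress(log_wt, log_cl)

# (2) Likelihood-ratio test between nested normal models of log(CL)
def profile_loglik(resid):
    sigma = resid.std(ddof=0)                        # ML estimate of sigma
    return np.sum(stats.norm.logpdf(resid, scale=sigma))

resid_full = log_cl - (intercept + slope * log_wt)
resid_reduced = log_cl - log_cl.mean()
lrt = 2 * (profile_loglik(resid_full) - profile_loglik(resid_reduced))
p_lrt = stats.chi2.sf(lrt, df=1)

print(f"EBE regression p = {p_ebe:.3g},  LRT p = {p_lrt:.3g}")
```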

  15. Complex DNA mixture analysis in a forensic context: evaluating the probative value using a likelihood ratio model.

    Science.gov (United States)

    Haned, Hinda; Benschop, Corina C G; Gill, Peter D; Sijen, Titia

    2015-05-01

    The interpretation of mixed DNA profiles obtained from low template DNA samples has proven to be a particularly difficult task in forensic casework. Newly developed likelihood ratio (LR) models that account for PCR-related stochastic effects, such as allelic drop-out, drop-in and stutters, have enabled the analysis of complex cases that would otherwise have been reported as inconclusive. In such samples, there are uncertainties about the number of contributors, and the correct sets of propositions to consider. Using experimental samples, where the genotypes of the donors are known, we evaluated the feasibility and the relevance of the interpretation of high order mixtures, of three, four and five donors. The relative risks of analyzing high order mixtures of three, four, and five donors, were established by comparison of a 'gold standard' LR, to the LR that would be obtained in casework. The 'gold standard' LR is the ideal LR: since the genotypes and number of contributors are known, it follows that the parameters needed to compute the LR can be determined per contributor. The 'casework LR' was calculated as used in standard practice, where unknown donors are assumed; the parameters were estimated from the available data. Both LRs were calculated using the basic standard model, also termed the drop-out/drop-in model, implemented in the LRmix module of the R package Forensim. We show how our results furthered the understanding of the relevance of analyzing high order mixtures in a forensic context. Limitations are highlighted, and it is illustrated how our study serves as a guide to implement likelihood ratio interpretation of complex DNA profiles in forensic casework.

  16. Gauge Potts model with generalized action: A Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fanchiotti, H.; Canal, C.A.G.; Sciutto, S.J.

    1985-08-15

    Results of a Monte Carlo calculation on the q-state gauge Potts model in d dimensions with a generalized action involving planar 1 x 1, plaquette, and 2 x 1, fenetre, loop interactions are reported. For d = 3 and q = 2, first- and second-order phase transitions are detected. The phase diagram for q = 3 presents only first-order phase transitions. For d = 2, a comparison with analytical results is made. Here also, the behavior of the numerical simulation in the vicinity of a second-order transition is analyzed.

  17. Monte Carlo Criticality Methods and Analysis Capabilities in SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Goluoglu, Sedat [ORNL; Petrie Jr, Lester M [ORNL; Dunn, Michael E [ORNL; Hollenbach, Daniel F [ORNL; Rearden, Bradley T [ORNL

    2011-01-01

    This paper describes the Monte Carlo codes KENO V.a and KENO-VI in SCALE that are primarily used to calculate multiplication factors and flux distributions of fissile systems. Both codes allow explicit geometric representation of the target systems and are used internationally for safety analyses involving fissile materials. KENO V.a has limiting geometric rules such as no intersections and no rotations. These limitations make KENO V.a execute very efficiently and run very fast. On the other hand, KENO-VI allows very complex geometric modeling. Both KENO codes can utilize either continuous-energy or multigroup cross-section data and have been thoroughly verified and validated with ENDF libraries through ENDF/B-VII.0, which has been first distributed with SCALE 6. Development of the Monte Carlo solution technique and solution methodology as applied in both KENO codes is explained in this paper. Available options and proper application of the options and techniques are also discussed. Finally, performance of the codes is demonstrated using published benchmark problems.

  18. Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y

    2015-06-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments-prediction error-is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies.

  19. Efficient strategies for genome scanning using maximum-likelihood affected-sib-pair analysis

    Energy Technology Data Exchange (ETDEWEB)

    Holmans, P.; Craddock, N. [Univ. of Wales College of Medicine, Cardiff (United Kingdom)

    1997-03-01

    Detection of linkage with a systematic genome scan in nuclear families including an affected sibling pair is an important initial step on the path to cloning susceptibility genes for complex genetic disorders, and it is desirable to optimize the efficiency of such studies. The aim is to maximize power while simultaneously minimizing the total number of genotypings and probability of type I error. One approach to increase efficiency, which has been investigated by other workers, is grid tightening: a sample is initially typed using a coarse grid of markers, and promising results are followed up by use of a finer grid. Another approach, not previously considered in detail in the context of an affected-sib-pair genome scan for linkage, is sample splitting: a portion of the sample is typed in the screening stage, and promising results are followed up in the whole sample. In the current study, we have used computer simulation to investigate the relative efficiency of two-stage strategies involving combinations of both grid tightening and sample splitting and found that the optimal strategy incorporates both approaches. In general, typing half the sample of affected pairs with a coarse grid of markers in the screening stage is an efficient strategy under a variety of conditions. If Hardy-Weinberg equilibrium holds, it is most efficient not to type parents in the screening stage. If Hardy-Weinberg equilibrium does not hold (e.g., because of stratification) failure to type parents in the first stage increases the amount of genotyping required, although the overall probability of type I error is not greatly increased, provided the parents are used in the final analysis. 23 refs., 4 figs., 5 tabs.

  20. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...

  1. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...

  2. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    Science.gov (United States)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space monte carlo technique is a non-linear uncertainty analyses technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space monte carlo is cumbersome, requiring the use of multiple commandline utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space monte carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space monte carlo by taking advantage of object oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space monte carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space monte carlo for a synthetic groundwater model with hundreds of estimable parameters.
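
    The core null-space projection step can be sketched with plain numpy, without reproducing the pyNSMC/pyEMU class and method names (which are not documented here and are therefore not assumed).

```python
import numpy as np

# Hedged sketch of the null-space Monte Carlo projection step.

rng = np.random.default_rng(0)
n_obs, n_par = 40, 300                   # ill-posed: far more parameters than data
J = rng.normal(size=(n_obs, n_par))      # Jacobian of observations w.r.t. parameters
par_cal = rng.normal(size=n_par)         # calibrated parameter vector

# SVD of the Jacobian: right singular vectors beyond the (truncated) rank span
# the null space, i.e. parameter combinations the observations cannot constrain.
_, sing_vals, Vt = np.linalg.svd(J, full_matrices=True)
n_sol = int(np.sum(sing_vals > 1e-8 * sing_vals[0]))   # here the full rank; in
V_null = Vt[n_sol:].T                                   # practice often truncated lower

def null_space_realisation():
    """Draw random parameters, keep only their null-space component, and add it
    to the calibrated values so the (linearised) fit to the data is preserved."""
    delta = rng.normal(size=n_par) - par_cal
    return par_cal + V_null @ (V_null.T @ delta)

ensemble = np.array([null_space_realisation() for _ in range(200)])
print(np.abs(J @ (ensemble[0] - par_cal)).max())   # ~0: simulated obs unchanged
```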

  3. Cluster Analysis as a Method of Recovering Types of Intraindividual Growth Trajectories: A Monte Carlo Study.

    Science.gov (United States)

    Dumenci, Levent; Windle, Michael

    2001-01-01

    Used Monte Carlo methods to evaluate the adequacy of cluster analysis to recover group membership based on simulated latent growth curve (LCG) models. Cluster analysis failed to recover growth subtypes adequately when the difference between growth curves was shape only. Discusses circumstances under which it was more successful. (SLD)

  4. Recent Developments in Real and Harmonic Analysis In Honor of Carlos Segovia

    CERN Document Server

    Cabrelli, Carlos A

    2008-01-01

    Featuring a collection of invited chapters dedicated to Carlos Segovia, this volume examines the developments in real and harmonic analysis. It includes topics such as: Vector-valued singular integral equations; Harmonic analysis related to Hermite expansions; Gas flow in porous media; and, Global well-posedness of the KPI Equation

  5. Monte Carlo simulations to advance characterisation of landmines by pulsed fast/thermal neutron analysis

    NARCIS (Netherlands)

    Maucec, M.; Rigollet, C.

    2004-01-01

    The performance of a detection system based on the pulsed fast/thermal neutron analysis technique was assessed using Monte Carlo simulations. The aim was to develop and implement simulation methods, to support and advance the data analysis techniques of the characteristic gamma-ray spectra, potentia

  6. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    Science.gov (United States)

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  7. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    is implemented using Markov Chain Monte Carlo methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference is compared...

  8. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    Science.gov (United States)

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  9. Functional brain networks in healthy subjects under acupuncture stimulation: An EEG study based on nonlinear synchronization likelihood analysis

    Science.gov (United States)

    Yu, Haitao; Liu, Jing; Cai, Lihui; Wang, Jiang; Cao, Yibin; Hao, Chongqing

    2017-02-01

    Electroencephalogram (EEG) signal evoked by acupuncture stimulation at "Zusanli" acupoint is analyzed to investigate the modulatory effect of manual acupuncture on the functional brain activity. Power spectral density of EEG signal is first calculated based on the autoregressive Burg method. It is shown that the EEG power is significantly increased during and after acupuncture in delta and theta bands, but decreased in alpha band. Furthermore, synchronization likelihood is used to estimate the nonlinear correlation between each pairwise EEG signals. By applying a threshold to resulting synchronization matrices, functional networks for each band are reconstructed and further quantitatively analyzed to study the impact of acupuncture on network structure. Graph theoretical analysis demonstrates that the functional connectivity of the brain undergoes obvious change under different conditions: pre-acupuncture, acupuncture, and post-acupuncture. The minimum path length is largely decreased and the clustering coefficient keeps increasing during and after acupuncture in delta and theta bands. It is indicated that acupuncture can significantly modulate the functional activity of the brain, and facilitate the information transmission within different brain areas. The obtained results may facilitate our understanding of the long-lasting effect of acupuncture on the brain function.
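
    The graph-theoretical step can be sketched as follows; the synchronization-likelihood matrix below is random and purely illustrative, and the threshold is arbitrary.

```python
import numpy as np
import networkx as nx

# Hedged sketch: threshold a synchronization-likelihood matrix and compute the
# clustering coefficient and characteristic path length of the resulting network.

rng = np.random.default_rng(3)
n_channels = 16
S = rng.uniform(0.0, 1.0, size=(n_channels, n_channels))
S = (S + S.T) / 2                       # synchronization likelihood is symmetric
np.fill_diagonal(S, 0.0)

def network_metrics(sync_matrix, threshold):
    adj = (sync_matrix >= threshold).astype(int)
    G = nx.from_numpy_array(adj)
    clustering = nx.average_clustering(G)
    # characteristic path length is defined on the largest connected component
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    path_length = nx.average_shortest_path_length(giant)
    return clustering, path_length

c, L = network_metrics(S, threshold=0.6)
print(f"clustering coefficient = {c:.2f}, characteristic path length = {L:.2f}")
```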

  10. Is there a critical lesion site for unilateral spatial neglect? A meta-analysis using activation likelihood estimation.

    Directory of Open Access Journals (Sweden)

    Pascal eMolenberghs

    2012-04-01

    Full Text Available The critical lesion site responsible for the syndrome of unilateral spatial neglect has been debated for more than a decade. Here we performed an activation likelihood estimation (ALE to provide for the first time an objective quantitative index of the consistency of lesion sites across anatomical group studies of spatial neglect. The analysis revealed several distinct regions in which damage has consistently been associated with spatial neglect symptoms. Lesioned clusters were located in several cortical and subcortical regions of the right hemisphere, including the middle and superior temporal gyrus, inferior parietal lobule, intraparietal sulcus, precuneus, middle occipital gyrus, caudate nucleus and posterior insula, as well as in the white matter pathway corresponding to the posterior part of the superior longitudinal fasciculus. Further analyses suggested that separate lesion sites are associated with impairments in different behavioural tests, such as line bisection and target cancellation. Similarly, specific subcomponents of the heterogeneous neglect syndrome, such as extinction and allocentric and personal neglect, are associated with distinct lesion sites. Future progress in delineating the neuropathological correlates of spatial neglect will depend upon the development of more refined measures of perceptual and cognitive functions than those currently available in the clinical setting.

  11. Accuracy Analysis for 6-DOF PKM with Sobol Sequence Based Quasi Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Jianguang Li; Jian Ding; Lijie Guo; Yingxue Yao; Zhaohong Yi; Huaijing Jing; Honggen Fang

    2015-01-01

    To improve the precision of pose error analysis for a 6-DOF parallel kinematic mechanism (PKM) during assembly quality control, a Sobol-sequence-based quasi Monte Carlo (QMC) method is introduced and implemented for pose accuracy analysis of the PKM in this paper. Owing to the regularity and uniformity of its samples in high dimensions, the Sobol-sequence-based QMC method outperforms the traditional Monte Carlo method, with up to 98.59% and 98.25% improvement in the computational precision of the pose error statistics. A PKM tolerance design system integrating this method is then developed, and with it the pose error distributions of the PKM within a prescribed workspace are obtained and analyzed.
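
    A hedged sketch of Sobol-sequence quasi Monte Carlo tolerance propagation is shown below; the forward model is a toy stand-in for the real PKM kinematics, and the tolerance value is illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Hedged sketch: propagate strut-length tolerances through a surrogate pose
# model using a scrambled Sobol (quasi Monte Carlo) sample.

def pose_error(strut_errors):
    """Toy surrogate: pose error as a nonlinear combination of six strut errors."""
    x = strut_errors
    return np.abs(x @ np.array([1.2, -0.8, 0.5, 1.0, -1.1, 0.7])) \
        + 0.1 * np.sum(x**2, axis=1)

tol = 0.05                                         # +/- strut length tolerance (mm)
sampler = qmc.Sobol(d=6, scramble=True, seed=0)
u = sampler.random_base2(m=12)                     # 2**12 low-discrepancy points
strut_errors = qmc.scale(u, [-tol] * 6, [tol] * 6)  # map [0,1)^6 to the tolerance box

errors = pose_error(strut_errors)
print(f"mean pose error  = {errors.mean():.4f} mm")
print(f"99th percentile  = {np.percentile(errors, 99):.4f} mm")
```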

  12. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non- orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  13. Scientific opinion on a quantitative pathway analysis of the likelihood ofTilletia indica M. introduction into EU with importation of US wheat

    DEFF Research Database (Denmark)

    Baker, R.; Candresse, T.; Dormannsné Simon, E.

    2010-01-01

    The European Commission requested EFSA to provide a scientific opinion on the USDA APHIS quantitative pathway analysis on likelihood of Karnal bunt introduction with importation of US wheat for grain into EU and desert durum wheat for grain into Italy. EFSA was also requested to indicate whether ...

  14. Estimating dynamic equilibrium economies: linear versus nonlinear likelihood

    OpenAIRE

    2004-01-01

    This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter proposed by Fernández-Villaverde and Rubio-Ramírez (2004) and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. The authors report two main results...

  15. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    Institute of Scientific and Technical Information of China (English)

    XI Jia-mi; YANG Geng-she

    2008-01-01

    The advantages of the improved Monte-Carlo method and the feasibility of the proposed approach for reliability analysis of tunnel surrounding rock stability are discussed. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method treats the related parameters as random variables and therefore accounts for the correlation among parameters. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific approach to discriminating and checking surrounding rock stability.

  16. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model

    OpenAIRE

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. Stability failure risk ratio described jointly by probability and possibility has deficiency in characterization of influence of fuzzy factors and representation of the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied into stability failure risk analysis of gravity dam. Stability of gravity dam is viewed as a hybrid event co...

  17. Maximum likelihood estimator of operational modal analysis for linear time-varying structures in time-frequency domain

    Science.gov (United States)

    Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li

    2014-05-01

    This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, an orthogonal polynomial and z-domain mapping hybrid basis function is presented, which has advantageous numerical conditioning and makes it convenient to calculate the modal parameters. A series of numerical examples evaluates and illustrates the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments further validates the proposed estimator.

  18. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    Science.gov (United States)

    Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are the high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as of derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
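
    The sampling machinery itself is generic and can be sketched compactly. The example below is a plain random-walk Metropolis-Hastings sampler with a toy Gaussian likelihood standing in for the joint X-ray/SZ likelihood; it illustrates how derived quantities inherit their probability distribution from the chain for free.

```python
import numpy as np

# Hedged, generic Metropolis-Hastings sketch; the toy likelihood and the
# "derived" quantity below are illustrative, not the cluster model.

rng = np.random.default_rng(7)
data = rng.normal(2.0, 0.5, size=50)               # toy observations

def log_posterior(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    if not (0.1 < sigma < 10):                     # flat prior with bounds
        return -np.inf
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma))

theta = np.array([0.0, 0.0])
logp = log_posterior(theta)
chain = []
for _ in range(20_000):
    proposal = theta + rng.normal(0, 0.1, size=2)  # random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
        theta, logp = proposal, logp_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                     # discard burn-in
derived = chain[:, 0] * np.exp(chain[:, 1])        # any derived quantity comes free
print(chain.mean(axis=0), np.percentile(derived, [16, 84]))
```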

  19. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...

  20. Estimating nonlinear dynamic equilibrium economies: a likelihood approach

    OpenAIRE

    2004-01-01

    This paper presents a framework to undertake likelihood-based inference in nonlinear dynamic equilibrium economies. The authors develop a sequential Monte Carlo algorithm that delivers an estimate of the likelihood function of the model using simulation methods. This likelihood can be used for parameter estimation and for model comparison. The algorithm can deal both with nonlinearities of the economy and with the presence of non-normal shocks. The authors show consistency of the estimate and...

  1. Errors associated with metabolic control analysis. Application Of Monte-Carlo simulation of experimental data.

    Science.gov (United States)

    Ainscow, E K; Brand, M D

    1998-09-21

    The errors associated with experimental application of metabolic control analysis are difficult to assess. In this paper, we give examples where Monte-Carlo simulations of published experimental data are used in error analysis. Data was simulated according to the mean and error obtained from experimental measurements and the simulated data was used to calculate control coefficients. Repeating the simulation 500 times allowed an estimate to be made of the error implicit in the calculated control coefficients. In the first example, state 4 respiration of isolated mitochondria, Monte-Carlo simulations based on the system elasticities were performed. The simulations gave error estimates similar to the values reported within the original paper and those derived from a sensitivity analysis of the elasticities. This demonstrated the validity of the method. In the second example, state 3 respiration of isolated mitochondria, Monte-Carlo simulations were based on measurements of intermediates and fluxes. A key feature of this simulation was that the distribution of the simulated control coefficients did not follow a normal distribution, despite simulation of the original data being based on normal distributions. Consequently, the error calculated using simulation was greater and more realistic than the error calculated directly by averaging the original results. The Monte-Carlo simulations are also demonstrated to be useful in experimental design. The individual data points that should be repeated in order to reduce the error in the control coefficients can be highlighted.
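
    The basic recipe is easy to sketch. The example below simulates 500 data sets from two illustrative elasticities (the values are not from the paper) and reads the uncertainty of a derived control coefficient off the resulting spread, which need not be normal.

```python
import numpy as np

# Hedged sketch of Monte-Carlo error propagation into a control coefficient.
# Elasticity means and standard errors are illustrative only.

rng = np.random.default_rng(0)
n_sim = 500                               # number of simulated data sets

eps_supply = rng.normal(-0.8, 0.10, n_sim)   # elasticity of the supply block
eps_demand = rng.normal( 0.3, 0.05, n_sim)   # elasticity of the demand block

# flux control coefficient of the supply block in a two-block system
C_supply = eps_demand / (eps_demand - eps_supply)

lo, med, hi = np.percentile(C_supply, [2.5, 50, 97.5])
print(f"C_supply = {med:.3f}  (95% interval {lo:.3f} to {hi:.3f})")
```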

  2. A maximum likelihood QTL analysis reveals common genome regions controlling resistance to Salmonella colonization and carrier-state

    Directory of Open Access Journals (Sweden)

    Thanh-Son Tran

    2012-05-01

    Full Text Available Background: The serovars Enteritidis and Typhimurium of the Gram-negative bacterium Salmonella enterica are significant causes of human food poisoning. Fowl carrying these bacteria often show no clinical disease, with detection only established post-mortem. Increased resistance to the carrier state in commercial poultry could be a way to improve food safety by reducing the spread of these bacteria in poultry flocks. Previous studies identified QTLs for both resistance to carrier state and resistance to Salmonella colonization in the same White Leghorn inbred lines. Until now, none of the QTLs identified was common to the two types of resistance. All these analyses were performed using the F2 inbred or backcross option of the QTLExpress software based on linear regression. In the present study, QTL analysis was achieved using Maximum Likelihood with QTLMap software, in order to test the effect of the QTL analysis method on QTL detection. We analyzed the same phenotypic and genotypic data as those used in previous studies, which were collected on 378 animals genotyped with 480 genome-wide SNP markers. To enrich these data, we added eleven SNP markers located within QTLs controlling resistance to colonization and we looked for potential candidate genes co-localizing with QTLs. Results: In our case the QTL analysis method had an important impact on QTL detection. We were able to identify new genomic regions controlling resistance to carrier-state, in particular by testing the existence of two segregating QTLs. But some of the previously identified QTLs were not confirmed. Interestingly, two QTLs were detected on chromosomes 2 and 3, close to the locations of the major QTLs controlling resistance to colonization and to candidate genes involved in the immune response identified in other, independent studies. Conclusions: Due to the lack of stability of the QTLs detected, we suggest that interesting regions for further studies are those that were

  3. Comparative Criticality Analysis of Two Monte Carlo Codes on a Centrifugal Atomizer: MCNP5 and SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H-S; Jang, M-S; Kim, S-R [NESS, Daejeon (Korea, Republic of); Park, J-M; Kim, K-N [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems as a main analysis code. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analyses for licensing evaluation and is widely used around the world. We performed a validation test of MCNP5 and a comparative analysis of the Monte Carlo codes MCNP5 and SCALE in terms of the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and performing an uncertainty analysis.

  4. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at lo...

  5. Mission Command Analysis Using Monte Carlo Tree Search

    Science.gov (United States)

    2013-06-14

    TRAC-WSMR, White Sands Missile Range, NM, 28 September 2011. Reported lesson learned: when implementing MCTS in partially observable games, one must be able to produce a fully-specified state based on available information. Cited references include the COMBATXXI online documentation, Training and Doctrine Command Analysis Center—White Sands Missile Range (TRAC-WSMR).

  6. A Combined Maximum-likelihood Analysis of the High-energy Astrophysical Neutrino Flux Measured with IceCube

    Science.gov (United States)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Classen, L.; Coenders, S.; Cowen, D. F.; Cruz Silva, A. H.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dumm, J. P.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fahey, S.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Goodman, J. A.; Góra, D.; Grant, D.; Gretskov, P.; Groh, J. C.; Gross, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hellwig, D.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jero, K.; Jurkovic, M.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Kohnen, G.; Kolanoski, H.; Konietz, R.; Koob, A.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Middlemas, E.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Pütz, J.; Quinnan, M.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schimp, M.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Seckel, D.; Seunarine, S.; Shanidze, R.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stanisha, N. A.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Strahler, E. A.; Ström, R.; Strotjohann, N. 
L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Tosi, D.; Tselengidou, M.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Whitehorn, N.; Wichary, C.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.; IceCube Collaboration

    2015-08-01

    Evidence for an extraterrestrial flux of high-energy neutrinos has now been found in multiple searches with the IceCube detector. The first solid evidence was provided by a search for neutrino events with deposited energies ≳ 30 TeV and interaction vertices inside the instrumented volume. Recent analyses suggest that the extraterrestrial flux extends to lower energies and is also visible with throughgoing, νμ-induced tracks from the Northern Hemisphere. Here, we combine the results from six different IceCube searches for astrophysical neutrinos in a maximum-likelihood analysis. The combined event sample features high-statistics samples of shower-like and track-like events. The data are fit in up to three observables: energy, zenith angle, and event topology. Assuming the astrophysical neutrino flux to be isotropic and to consist of equal flavors at Earth, the all-flavor spectrum with neutrino energies between 25 TeV and 2.8 PeV is well described by an unbroken power law with best-fit spectral index -2.50 ± 0.09 and a flux at 100 TeV of (6.7 +1.1/-1.2) × 10^-18 GeV^-1 s^-1 sr^-1 cm^-2. Under the same assumptions, an unbroken power law with index -2 is disfavored with a significance of 3.8σ (p = 0.0066%) with respect to the best fit. This significance is reduced to 2.1σ (p = 1.7%) if instead we compare the best fit to a spectrum with index -2 that has an exponential cut-off at high energies. Allowing the electron-neutrino flux to deviate from the other two flavors, we find a νe fraction of 0.18 ± 0.11 at Earth. The sole production of electron neutrinos, which would be characteristic of neutron-decay-dominated sources, is rejected with a significance of 3.6σ (p = 0.014%).

  7. SAFETY ANALYSIS AND RISK ASSESSMENT FOR BRIDGES HEALTH MONITORING WITH MONTE CARLO METHODS

    OpenAIRE

    2016-01-01

    With the increasing requirements for building safety over the past few decades, health monitoring and risk assessment of structures have become more and more important. Especially since traffic loads are growing heavier, risk assessment for bridges is essential. In this paper we take advantage of Monte Carlo methods to analyze the safety of bridges and to monitor the risk of destructive failure. One main goal of health monitoring is to reduce the risk of unexpected damage to man-made structures

  8. Further analysis of multilevel Monte Carlo methods for elliptic PDEs with random coefficients

    OpenAIRE

    Teckentrup, A. L.; Scheichl, R.; Giles, M. B.; Ullmann, E

    2012-01-01

    We consider the application of multilevel Monte Carlo methods to elliptic PDEs with random coefficients. We focus on models of the random coefficient that lack uniform ellipticity and boundedness with respect to the random parameter, and that only have limited spatial regularity. We extend the finite element error analysis for this type of equation, carried out recently by Charrier, Scheichl and Teckentrup, to more difficult problems, posed on non--smooth domains and with discontinuities in t...

  9. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    Energy Technology Data Exchange (ETDEWEB)

    Fensin, Michael L [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Sandoval, Nathan P [Los Alamos National Laboratory

    2009-01-01

    assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling time dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield an adequate spent fuel analysis strategy knowledge that will help the down-select process for other reactor types.

  10. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    SUN ShuGuang; ZHANG XinSheng

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some conditions when the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.

  11. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    Science.gov (United States)

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event in the diagram, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top-event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the developed FTA model in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
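
    The quantitative core of such an analysis can be sketched briefly. The fault-tree logic and basic-event probabilities below are illustrative only and do not describe the plant studied here; the sketch shows how a Monte Carlo estimate of the top-event probability compares with the minimal-cut-set approximation.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimation of a top-event probability for a small,
# invented fault tree with independent basic events.

rng = np.random.default_rng(1)
n_trials = 200_000

# basic event probabilities (per inspection period, illustrative)
p = {"operator_error": 0.05, "aeration_failure": 0.02,
     "clarifier_damage": 0.01, "design_deficiency": 0.005}

def top_event(sample):
    """Effluent BOD violation: (operator error AND aeration failure)
    OR clarifier damage OR design deficiency (illustrative logic)."""
    return (sample["operator_error"] & sample["aeration_failure"]) \
        | sample["clarifier_damage"] | sample["design_deficiency"]

sample = {name: rng.random(n_trials) < prob for name, prob in p.items()}
occurrences = top_event(sample)
print(f"estimated top-event probability: {occurrences.mean():.4f}")

# minimal-cut-set (rare-event) approximation for comparison
p_mcs = p["operator_error"] * p["aeration_failure"] \
      + p["clarifier_damage"] + p["design_deficiency"]
print(f"cut-set approximation:           {p_mcs:.4f}")
```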

  12. Number of iterations needed in Monte Carlo Simulation using reliability analysis for tunnel supports

    Directory of Open Access Journals (Sweden)

    E. Bukaçi

    2016-06-01

    Full Text Available There are many methods in geotechnical engineering which could take advantage of Monte Carlo simulation to establish the probability of failure, since closed-form solutions are almost impossible to use in most cases. The problem that arises with using Monte Carlo simulation is the number of iterations needed for a particular simulation. This article will show why it is important to calculate the number of iterations needed for Monte Carlo simulation used in reliability analysis for tunnel supports using the convergence-confinement method. The number of iterations needed will be calculated with two methods. In the first method, the analyst has to accept a distribution function for the performance function. The other method suggested by this article is to calculate the number of iterations based on the convergence of the factor the analyst is interested in. Reliability analysis will be performed for the diversion tunnel in Rrëshen, Albania, using both methods mentioned, and the results will be compared
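
    The convergence-based stopping rule suggested here can be sketched as follows; the limit-state function and its distributions are illustrative, not the Rrëshen tunnel model.

```python
import numpy as np

# Hedged sketch: keep adding Monte Carlo iterations until the running estimate
# of the probability of failure stabilises within a chosen relative tolerance.

rng = np.random.default_rng(0)

def performance_function(n):
    """Illustrative limit state g = capacity - demand for a tunnel support."""
    capacity = rng.lognormal(mean=1.0, sigma=0.15, size=n)
    demand = rng.normal(loc=2.0, scale=0.4, size=n)
    return capacity - demand

def required_iterations(block=10_000, rel_tol=0.01, checks=5, max_iter=5_000_000):
    failures, total, stable, prev = 0, 0, 0, None
    while total < max_iter:
        g = performance_function(block)
        failures += int(np.sum(g < 0))
        total += block
        pf = failures / total
        if prev is not None and prev > 0 and abs(pf - prev) / prev < rel_tol:
            stable += 1                  # require several consecutive stable checks
        else:
            stable = 0
        if stable >= checks:
            return total, pf
        prev = pf
    return total, failures / total

n_iter, pf = required_iterations()
print(f"converged after {n_iter} iterations, Pf = {pf:.4f}")
```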

  13. Canonical Least-Squares Monte Carlo Valuation of American Options: Convergence and Empirical Pricing Analysis

    Directory of Open Access Journals (Sweden)

    Xisheng Yu

    2014-01-01

    Full Text Available The paper by Liu (2010) introduces a method termed the canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine the convergence properties. Then, a comparative analysis is empirically conducted using a large sample of the S&P 100 Index (OEX) puts and IBM puts. The results on the convergence show that choosing the shifted Legendre polynomials with four regressors is more appropriate considering the pricing accuracy and the computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of binomial tree and finite difference with historical volatilities.
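
    For orientation, the least-squares Monte Carlo backbone (without the entropy-calibrated, martingale-constrained measure that makes the method "canonical") can be sketched as a plain Longstaff-Schwartz pricer with four shifted-Legendre regressors; all market parameters below are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

# Hedged sketch: plain least-squares Monte Carlo (Longstaff-Schwartz) pricing of
# an American put, regressing on four shifted Legendre polynomials.  Paths are
# ordinary risk-neutral Black-Scholes paths, not the CLM canonical measure.

def shifted_legendre_basis(x, x_max, degree=3):
    """Evaluate shifted Legendre polynomials P_0..P_degree on [0, x_max]."""
    z = 2.0 * np.clip(x / x_max, 0.0, 1.0) - 1.0       # map to [-1, 1]
    return np.column_stack([legendre.legval(z, np.eye(degree + 1)[k])
                            for k in range(degree + 1)])

def lsm_american_put(s0=100., K=100., r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((n_paths, n_steps))        # simulate GBM paths
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    S = np.exp(log_s)

    cashflow = np.maximum(K - S[:, -1], 0.0)           # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):
        cashflow *= disc
        itm = K - S[:, t] > 0                          # regress on in-the-money paths
        if not np.any(itm):
            continue
        X = shifted_legendre_basis(S[itm, t], x_max=2 * K)
        coeff, *_ = np.linalg.lstsq(X, cashflow[itm], rcond=None)
        continuation = X @ coeff
        exercise = K - S[itm, t]
        cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])
    return disc * cashflow.mean()

print(f"LSM American put price = {lsm_american_put():.3f}")
```

    Roughly speaking, the full CLM method would re-weight the simulated paths by the maximum-entropy, martingale-constrained risk-neutral probabilities before the regression step; that re-weighting is omitted here.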

  14. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    Science.gov (United States)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte-Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte-Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
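
    A minimal sketch of the general approach described here, propagating lognormally distributed reaction-rate uncertainties through a model and summarizing the output spread as multiplicative 1-sigma factors, is given below. The surrogate "model" and the uncertainty factors are invented stand-ins, not the stratospheric model or rate data of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nominal rate constants and multiplicative 1-sigma uncertainty factors.
nominal_rates = np.array([3.0e-11, 8.0e-12, 1.2e-10, 5.0e-12])
uncertainty_factors = np.array([1.3, 1.5, 1.2, 2.0])   # lognormal spread of each rate

def ozone_perturbation(k):
    """Toy surrogate for the model response: a nonlinear function of the rates,
    standing in for a full stratospheric-chemistry calculation."""
    return (k[0] * k[3]) / (k[1] + k[2])

n_runs = 2000   # same order as the 2000 Monte Carlo cases in the study
samples = np.empty(n_runs)
for i in range(n_runs):
    k = nominal_rates * uncertainty_factors ** rng.standard_normal(nominal_rates.size)
    samples[i] = ozone_perturbation(k)

nominal = ozone_perturbation(nominal_rates)
hi = np.percentile(samples, 84.1) / nominal   # 1-sigma factor on the high side
lo = nominal / np.percentile(samples, 15.9)   # 1-sigma factor on the low side
print(f"1-sigma multiplicative uncertainty: x{hi:.2f} (high), x{lo:.2f} (low)")
```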

  15. Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)

    2014-05-15

    Identification and demining of landmines are a very important issue for the safety of the people and the economic development. To solve the issue, several methods have been proposed in the past. In Korea, National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as a part of the complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To consider the soil effect, average soil composition is analyzed and applied to the calculation. This results has been used to determine the specification of the landmine detector.

  16. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    Science.gov (United States)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  17. Monte Carlo Analysis as a Trajectory Design Driver for the Transiting Exoplanet Survey Satellite (TESS) Mission

    Science.gov (United States)

    Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  18. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    Science.gov (United States)

    Chiappo, A.; Cohen-Tanugi, J.; Conrad, J.; Strigari, L. E.; Anderson, B.; Sánchez-Conde, M. A.

    2017-04-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter (DM) annihilation signals. The J-factors of DM-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kinematic data sets. We discuss improvements, extensions, and future applications of this technique.
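
    The profile-likelihood construction itself is generic; the sketch below illustrates it on a toy Gaussian model in which a nuisance parameter is profiled out and a 68% confidence interval is read off where twice the log-likelihood rises by one unit above its minimum. It is not the dSph kinematic likelihood used in the paper, and the data and parameter names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(7)

# Toy data: mu is the "parameter of interest" and sigma the nuisance parameter,
# loosely standing in for the J-factor and a velocity-dispersion nuisance.
data = rng.normal(loc=5.0, scale=2.0, size=40)

def neg_log_like(mu, sigma):
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

def profile_nll(mu):
    """Minimize the negative log-likelihood over the nuisance parameter for fixed mu."""
    res = minimize_scalar(lambda s: neg_log_like(mu, s), bounds=(1e-3, 20.0), method="bounded")
    return res.fun

mu_grid = np.linspace(3.0, 7.0, 201)
nll = np.array([profile_nll(m) for m in mu_grid])
mu_hat = mu_grid[np.argmin(nll)]

# 68% confidence interval: region where -2 * Delta(ln L) <= 1.
inside = 2.0 * (nll - nll.min()) <= 1.0
lo, hi = mu_grid[inside][0], mu_grid[inside][-1]
print(f"MLE = {mu_hat:.2f}, 68% CI = [{lo:.2f}, {hi:.2f}]")
```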

  19. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    CERN Document Server

    Chiappo, A; Conrad, J; Strigari, L E; Anderson, B; Sanchez-Conde, M A

    2016-01-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter annihilation signals. The J-factors of dark matter-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kinematic data sets.

  20. Monte-Carlo Analysis of the Flavour Changing Neutral Current b → sγ at BaBar

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D. [Imperial College, London (United Kingdom)

    2001-09-01

    The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b→sγ. The analysis develops techniques that could be applied to real data, to discriminate between signal and background events in order to make a measurement of the branching ratio of this rare decay using the BaBar detector. Also included in this thesis is a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the Electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.

  1. First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation

    Science.gov (United States)

    Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; Hirai, M.; Kumano, S.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-12-01

    We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive e+e- annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of the fragmentation functions obtained with the new IMC methodology compared with those from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  2. First Monte Carlo analysis of fragmentation functions from single-inclusive $e^+ e^-$ annihilation

    CERN Document Server

    Sato, N; Melnitchouk, W; Hirai, M; Kumano, S; Accardi, A

    2016-01-01

    We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of the fragmentation functions obtained with the new IMC methodology compared with those from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  3. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  4. Full-Band Monte Carlo Analysis of Hot-Carrier Light Emission in GaAs

    Science.gov (United States)

    Ferretti, I.; Abramo, A.; Brunetti, R.; Jacobini, C.

    1997-11-01

    A computational analysis of light emission from hot carriers in GaAs due to direct intraband conduction-conduction (c-c) transitions is presented. The emission rates have been evaluated by means of a Full-Band Monte-Carlo simulator (FBMC). Results have been obtained for the emission rate as a function of the photon energy, for the emitted and absorbed light polarization along and perpendicular to the electric field direction. Comparison has been made with available experimental data in MESFETs.

  5. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝd at p≤d+1 sites, d≥1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p=2 to p=3 sites in ℝ2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  6. A Nuclear Ribosomal DNA Phylogeny of Acer Inferred with Maximum Likelihood, Splits Graphs, and Motif Analysis of 606 Sequences

    Directory of Open Access Journals (Sweden)

    Guido W. Grimm

    2006-01-01

    Full Text Available The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~ 600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong’s (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly.

  7. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    Science.gov (United States)

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
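
    A hedged sketch of the simulation logic follows: generate random series with and without a mid-series shift, apply two simple run-chart tests (longest run on one side of the median and number of median crossings, with illustrative thresholds rather than any of the published rule sets), and convert the hit rates into likelihood ratios.

```python
import numpy as np

rng = np.random.default_rng(3)

def longest_run_and_crossings(y):
    """Longest run of points on one side of the median, and the number of median crossings."""
    side = np.sign(y - np.median(y))
    side = side[side != 0]                     # points exactly on the median are ignored
    crossings = int(np.count_nonzero(np.diff(side) != 0))
    longest = current = 1
    for a, b in zip(side[:-1], side[1:]):
        current = current + 1 if a == b else 1
        longest = max(longest, current)
    return longest, crossings

def signal_rate(n_series=2000, n_points=24, shift=0.0):
    """Fraction of simulated series flagged by the (illustrative) run chart rules."""
    hits = 0
    for _ in range(n_series):
        y = rng.standard_normal(n_points)
        y[n_points // 2:] += shift             # process shift halfway through the series
        longest, crossings = longest_run_and_crossings(y)
        if longest >= 8 or crossings <= 7:     # assumed thresholds for 24 data points
            hits += 1
    return hits / n_series

fpr = signal_rate(shift=0.0)                   # false signals on purely random series
tpr = signal_rate(shift=1.0)                   # signals when a 1-SD shift is present
lr_pos = tpr / max(fpr, 1e-6)
lr_neg = (1.0 - tpr) / max(1.0 - fpr, 1e-6)
print(f"sensitivity={tpr:.2f}  specificity={1 - fpr:.2f}  LR+={lr_pos:.1f}  LR-={lr_neg:.2f}")
```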

  8. A maximum likelihood estimator for bedrock fracture transmissivities and its application to the analysis and design of borehole hydraulic tests

    Science.gov (United States)

    West, Anthony C. F.; Novakowski, Kent S.; Gazor, Saeed

    2006-06-01

    We propose a new method to estimate the transmissivities of bedrock fractures from transmissivities measured in intervals of fixed length along a borehole. We define the scale of a fracture set by the inverse of the density of the Poisson point process assumed to represent their locations along the borehole wall, and we assume a lognormal distribution for their transmissivities. The parameters of the latter distribution are estimated by maximizing the likelihood of a left-censored subset of the data where the degree of censorship depends on the scale of the considered fracture set. We applied the method to sets of interval transmissivities simulated by summing random fracture transmissivities drawn from a specified population. We found the estimated distributions compared well to the transmissivity distributions of similarly scaled subsets of the most transmissive fractures from among the specified population. Estimation accuracy was most sensitive to the variance in the transmissivities of the fracture population. Using the proposed method, we estimated the transmissivities of fractures at increasing scale from hydraulic test data collected at a fixed scale in Smithville, Ontario, Canada. This is an important advancement since the resultant curves of transmissivity parameters versus fracture set scale would only previously have been obtainable from hydraulic tests conducted with increasing test interval length and with degrading equipment precision. Finally, on the basis of the properties of the proposed method, we propose guidelines for the design of fixed interval length hydraulic testing programs that require minimal prior knowledge of the rock.
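
    The core statistical ingredient, maximizing a lognormal likelihood in which part of the sample is left-censored, can be sketched as follows on synthetic data; the parameter values, units, and censoring threshold are assumptions for illustration only and not the Smithville data or the full estimator of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(11)

def neg_log_like(params, x_obs, n_cens, c):
    """Lognormal likelihood with n_cens observations known only to lie below c."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                   # keep sigma positive
    ll_obs = norm.logpdf(np.log(x_obs), loc=mu, scale=sigma) - np.log(x_obs)
    ll_cens = n_cens * norm.logcdf((np.log(c) - mu) / sigma)
    return -(ll_obs.sum() + ll_cens)

# Synthetic "interval transmissivities" with made-up lognormal parameters.
mu_true, sigma_true = -16.0, 2.0
data = rng.lognormal(mean=mu_true, sigma=sigma_true, size=300)
c = np.exp(mu_true - 0.5)                       # assumed censoring (detection) threshold
x_obs = data[data >= c]
n_cens = int((data < c).sum())

res = minimize(neg_log_like, x0=[-10.0, 0.0], args=(x_obs, n_cens, c), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"censored fraction = {n_cens / len(data):.2f}")
print(f"estimated mu = {mu_hat:.2f} (true {mu_true}), sigma = {sigma_hat:.2f} (true {sigma_true})")
```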

  9. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.

  10. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    Science.gov (United States)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (sigma) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1-sigma model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1 sigma but less than 2 sigma, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation problems.
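
    A minimal sketch of the sampling strategy, generating input parameter sets by Latin Hypercube Sampling and propagating them through a stand-in model, is shown below. It assumes SciPy's scipy.stats.qmc module is available and uses invented uncertainty factors and a toy response function in place of the 2D model integration.

```python
import numpy as np
from scipy.stats import qmc, norm

rng = np.random.default_rng(5)

n_params, n_runs = 6, 419            # 419 runs as in the study; 6 stand-in parameters
sampler = qmc.LatinHypercube(d=n_params, seed=rng)
unit_samples = sampler.random(n=n_runs)              # stratified points in the unit hypercube

# Map the unit hypercube to lognormal rate-constant multipliers using assumed
# 1-sigma uncertainty factors (illustrative values, not the model's actual inputs).
sigmas_log = np.log([1.2, 1.3, 1.5, 1.25, 2.0, 1.4])
multipliers = np.exp(norm.ppf(unit_samples) * sigmas_log)

def model_trend(m):
    """Placeholder for the 26-year 2D-model integration: returns a scalar 'trend'."""
    return -3.0 * m[0] * m[4] / (1.0 + 0.5 * m[1] + 0.2 * m[2] * m[3] * m[5])

trends = np.apply_along_axis(model_trend, 1, multipliers)
print(f"ensemble trend = {trends.mean():.3f} +/- {trends.std(ddof=1):.3f} (1 sigma)")
```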

  11. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    CERN Document Server

    Hoffmann, Max J; Matera, Sebastian

    2016-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrat...

  12. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was applied to a saltwater intrusion model. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the true predictive value. The NSMC-based uncertainty estimate was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method.

  13. Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Brunetti, Antonio, E-mail: brunetti@uniss.it [Polcoming Department, University of Sassari (Italy); Golosio, Bruno [Polcoming Department, University of Sassari (Italy); Schoonjans, Tom; Oliva, Piernicola [Chemical and Pharmaceutical Department, University of Sassari (Italy)

    2015-06-01

    The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayered structures, such as bronze with a patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples.

  14. ANALYSIS OF INNOVATIVE ACTIVITY OF METALLURGICAL COMPANIES USING MONTE-CARLO MATHEMATICAL MODELING METHOD

    Directory of Open Access Journals (Sweden)

    Shchekoturova S. D.

    2015-04-01

    Full Text Available The article presents an analysis of the innovative activity of four Russian metallurgical enterprises: "Ruspolimet", JSC "Ural Smithy", JSC "Stupino Metallurgical Company", and JSC "VSMPO", via mathematical modeling using the Monte Carlo method. The innovative activity of the Russian metallurgical companies was assessed over a five-year period. The current innovative activity was assessed by calculating an integral index of innovative activity. The calculation was based on six indicators analyzed from 2007 to 2011: the proportion of staff employed in R & D; the level of development of new technology; the degree of development of new products; the share of material resources for R & D; the degree of protection of enterprise intellectual property; and the share of investment in innovative projects. On the basis of these data the integral indicator of the innovative activity of the metallurgical companies was calculated by the well-known method of weighting coefficients. The comparative analysis of the integral indicators of the innovative activity of the considered companies made it possible to rank their levels of innovative activity and to characterize the current state of their business. Based on the Monte Carlo method, a variation interval of the integral indicator was obtained, and detailed recommendations for choosing a strategy of innovative development of the metallurgical enterprises were given as well.

  15. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
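
    The study's implementation is the R package bmem; the sketch below reproduces only the basic idea in Python under simplifying assumptions: an outer Monte Carlo loop generates data from a simple X -> M -> Y mediation model, an inner percentile bootstrap tests the indirect effect, and power is the fraction of replications in which the bootstrap interval excludes zero. Effect sizes and replication counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2024)

def gen_data(n, a=0.39, b=0.39, c=0.0):
    """Simple mediation model X -> M -> Y with assumed medium effect sizes."""
    x = rng.standard_normal(n)
    m = a * x + rng.standard_normal(n)
    y = c * x + b * m + rng.standard_normal(n)
    return x, m, y

def indirect_effect(x, m, y):
    """a*b from two OLS regressions (M on X, and Y on X and M)."""
    a_hat = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a_hat * beta[2]

def bootstrap_significant(x, m, y, n_boot=300, alpha=0.05):
    """True if the percentile bootstrap CI of the indirect effect excludes zero."""
    n = len(x)
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        est[b] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return not (lo <= 0.0 <= hi)

def power(n, n_rep=200):
    hits = sum(bootstrap_significant(*gen_data(n)) for _ in range(n_rep))
    return hits / n_rep

for n in (50, 100, 200):
    print(f"n = {n:3d}  estimated power = {power(n):.2f}")
```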

  16. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time.

  17. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to the Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The PGA hazard curve has been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the crustal fault has the greatest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 – 0.8465 g, with an uncertainty between 0.0847 – 0.2389 g and a COV between 17.7% – 29.8%.

  18. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Science.gov (United States)

    Pratama, Cecep; Meilano, Irwan; Nugraha, Andri Dian

    2015-04-01

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to the Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The PGA hazard curve has been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the crustal fault has the greatest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 - 0.8465 g, with an uncertainty between 0.0847 - 0.2389 g and a COV between 17.7% - 29.8%.

  19. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  20. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  1. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  2. Converting Boundary Representation Solid Models to Half-Space Representation Models for Monte Carlo Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Davis JE, Eddy MJ, Sutton TM, Altomari TJ

    2007-03-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.

  3. System Level Numerical Analysis of a Monte Carlo Simulation of the E. Coli Chemotaxis

    CERN Document Server

    Siettos, Constantinos I

    2010-01-01

    Over the past few years it has been demonstrated that "coarse timesteppers" establish a link between traditional numerical analysis and microscopic/stochastic simulation. The underlying assumption of the associated lift-run-restrict-estimate procedure is that macroscopic models exist and close in terms of a few governing moments of microscopically evolving distributions, but they are unavailable in closed form. This leads to a system identification based computational approach that sidesteps the necessity of deriving explicit closures. Two-level codes are constructed; the outer code performs macroscopic, continuum level numerical tasks, while the inner code estimates -through appropriately initialized bursts of microscopic simulation- the quantities required for continuum numerics. Such quantities include residuals, time derivatives, and the action of coarse slow Jacobians. We demonstrate how these coarse timesteppers can be applied to perform equation-free computations of a kinetic Monte Carlo simulation of E. coli chemotaxis.

  4. Ligand-receptor binding kinetics in surface plasmon resonance cells: A Monte Carlo analysis

    CERN Document Server

    Carroll, Jacob; Forsten-Williams, Kimberly; Täuber, Uwe C

    2016-01-01

    Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of s...

  5. Analysis of Far-Field Radiation from Apertures Using Monte Carlo Integration Technique

    Directory of Open Access Journals (Sweden)

    Mohammad Mehdi Fakharian

    2014-12-01

    Full Text Available An integration technique based on the use of Monte Carlo Integration (MCI) is proposed for the analysis of the electromagnetic radiation from apertures. The technique that can be applied to the calculation of aperture antenna radiation patterns is the equivalence principle followed by physical optics, which is then used to compute the far-field radiation patterns. However, this technique is often complex mathematically, because it requires integration over the closed surface. This paper presents an extremely simple formulation to calculate the far fields from some types of aperture radiators by using the MCI technique. The accuracy and effectiveness of this technique are demonstrated in three cases of radiation from apertures, and the results are compared with solutions obtained using FE simulation and Gaussian quadrature rules.
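
    A minimal example of the underlying idea, estimating the aperture radiation integral by Monte Carlo and checking it against the analytic pattern of a uniform rectangular aperture, is sketched below; the aperture dimensions and sample counts are arbitrary choices for illustration and the formulation is a scalar simplification, not the paper's full equivalence-principle treatment.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical uniform rectangular aperture (dimensions in wavelengths).
lam = 1.0
a, b = 4.0 * lam, 3.0 * lam
k = 2.0 * np.pi / lam
n_samples = 20_000

theta = np.deg2rad(np.linspace(-90.0, 90.0, 181))
u = np.sin(theta)                              # observation in the xz principal plane

# Monte Carlo estimate of the radiation integral F(u) = integral over A of exp(j*k*x*u) dA.
# For this cut the phase depends only on x, so the y-integration contributes the factor b.
x = rng.uniform(-a / 2.0, a / 2.0, n_samples)
F_mc = np.array([a * b * np.mean(np.exp(1j * k * x * ui)) for ui in u])

# Analytic far-field pattern of the uniform aperture (np.sinc is sin(pi z)/(pi z)).
F_exact = a * b * np.sinc(a * u / lam)

err = np.max(np.abs(F_mc - F_exact)) / (a * b)
print(f"maximum normalized error of the Monte Carlo pattern: {err:.3e}")
```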

  6. Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyun Jin; Kim, Jong Kyu; Lee, Sang Nam; Kang, Yong Heack [Korea Institute of Energy Research, Daejeon (Korea, Republic of)

    2011-10-15

    An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method with excellent flexibility and expandability; this method considers both solar limb darkening and the surface slope error of reflectors, thereby analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace in Korea Institute of Energy Research (KIER) show good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 sun. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators.

  7. Quantum Monte Carlo for Noncovalent Interactions: Analysis of Protocols and Simplified Scheme Attaining Benchmark Accuracy

    CERN Document Server

    Dubecký, Matúš; Jurečka, Petr; Mitas, Lubos; Hobza, Pavel; Otyepka, Michal

    2014-01-01

    Reliable theoretical predictions of noncovalent interaction energies, which are important e.g. in drug-design and hydrogen-storage applications, belong to longstanding challenges of contemporary quantum chemistry. In this respect, the fixed-node diffusion Monte Carlo (FN-DMC) is a promising alternative to the commonly used ``gold standard'' coupled-cluster CCSD(T)/CBS method for its benchmark accuracy and favourable scaling, in contrast to other correlated wave function approaches. This work is focused on the analysis of protocols and possible tradeoffs for FN-DMC estimations of noncovalent interaction energies and proposes a significantly more efficient yet accurate computational protocol using simplified explicit correlation terms. Its performance is illustrated on a number of weakly bound complexes, including water dimer, benzene/hydrogen, T-shape benzene dimer and stacked adenine-thymine DNA base pair complex. The proposed protocol achieves excellent agreement ($\\sim$0.2 kcal/mol) with respect to the reli...

  8. Quasi-Monte Carlo Simulation-Based SFEM for Slope Reliability Analysis

    Institute of Scientific and Technical Information of China (English)

    Yu Yuzhen; Xie Liquan; Zhang Bingyin

    2005-01-01

    Considering the stochastic spatial variation of geotechnical parameters over the slope, a Stochastic Finite Element Method (SFEM) is established based on the combination of the Shear Strength Reduction (SSR) concept and quasi-Monte Carlo simulation. The shear strength reduction FEM is superior to the slice method based on the limit equilibrium theory in many ways, so it will be more powerful for assessing the reliability of global slope stability when combined with probability theory. To illustrate the performance of the proposed method, it is applied to an example of a simple slope. The simulation results show that the proposed method is effective for performing the reliability analysis of global slope stability without presupposing a potential slip surface.
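
    The sketch below illustrates the quasi-Monte Carlo ingredient only: a Sobol sequence (scipy.stats.qmc) versus plain Monte Carlo on an explicit toy limit-state function that stands in for the shear-strength-reduction finite element model. All input distributions and the limit-state form are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, norm

def limit_state(u):
    """Toy performance function standing in for the SSR finite-element model:
    g = resistance - load, with inputs mapped from the unit hypercube."""
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    cohesion = np.exp(norm.ppf(u[:, 0]) * 0.2 + np.log(60.0))   # lognormal resistance term, kPa
    friction = norm.ppf(u[:, 1]) * 2.0 + 20.0                   # friction angle, degrees
    load = norm.ppf(u[:, 2]) * 10.0 + 60.0                      # driving load, kPa
    resistance = cohesion + 0.8 * load * np.tan(np.deg2rad(friction))
    return resistance - load

n = 2**13                                       # power of two recommended for Sobol points
sobol = qmc.Sobol(d=3, scramble=True, seed=1)
pf_qmc = np.mean(limit_state(sobol.random(n)) < 0.0)

rng = np.random.default_rng(1)
pf_mc = np.mean(limit_state(rng.random((n, 3))) < 0.0)

print(f"P(failure): quasi-Monte Carlo = {pf_qmc:.4f}, plain Monte Carlo = {pf_mc:.4f}")
```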

  9. Outlier detection in near-infrared spectroscopic analysis by using Monte Carlo cross-validation

    Institute of Scientific and Technical Information of China (English)

    LIU ZhiChao; CAI WenSheng; SHAO XueGuang

    2008-01-01

    An outlier detection method is proposed for near-infrared spectral analysis. The underlying philosophy of the method is that, in random test (Monte Carlo) cross-validation, the probability of outliers presenting in good models with smaller prediction residual error sum of squares (PRESS) or in bad models with larger PRESS should be obviously different from normal samples. The method builds a large number of PLS models by using random test cross-validation at first, then the models are sorted by the PRESS, and at last the outliers are recognized according to the accumulative probability of each sample in the sorted models. For validation of the proposed method, four data sets, including three published data sets and a large data set of tobacco lamina, were investigated. The proposed method was proved to be highly efficient and veracious compared with the conventional leave-one-out (LOO) cross validation method.
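
    A simplified sketch of the Monte Carlo cross-validation screening idea is given below, using synthetic data with planted y-outliers and scikit-learn's PLS regression: many models are built on random calibration subsets, sorted by PRESS, and samples whose calibration-set membership differs sharply between the best and worst models are flagged. The data, thresholds, and scoring rule are illustrative choices, not the published procedure verbatim.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)

# Synthetic "spectra": 200 samples x 50 variables driven by 3 latent factors, plus 5 planted outliers.
n, p = 200, 50
latent = rng.standard_normal((n, 3))
X = latent @ rng.standard_normal((3, p)) + 0.05 * rng.standard_normal((n, p))
y = latent @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.standard_normal(n)
outliers = rng.choice(n, size=5, replace=False)
y[outliers] += 3.0                                   # corrupt the reference values

n_models, cal_frac = 1000, 0.7
press = np.empty(n_models)
membership = np.zeros((n_models, n), dtype=bool)     # which samples were in each calibration set

for i in range(n_models):
    cal = rng.choice(n, size=int(cal_frac * n), replace=False)
    val = np.setdiff1d(np.arange(n), cal)
    pls = PLSRegression(n_components=3).fit(X[cal], y[cal])
    resid = y[val] - pls.predict(X[val]).ravel()
    press[i] = np.sum(resid**2)
    membership[i, cal] = True

# Compare calibration-set membership rates between the lowest-PRESS ("good")
# and highest-PRESS ("bad") quarters of the models.
order = np.argsort(press)
good, bad = order[: n_models // 4], order[-(n_models // 4):]
score = np.abs(membership[good].mean(axis=0) - membership[bad].mean(axis=0))
flagged = np.argsort(score)[-5:]
print("planted outliers :", np.sort(outliers))
print("flagged samples  :", np.sort(flagged))
```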

  10. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    Science.gov (United States)

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several magnitudes larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time.

  11. Outlier detection in near-infrared spectroscopic analysis by using Monte Carlo cross-validation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An outlier detection method is proposed for near-infrared spectral analysis. The underlying philosophy of the method is that, in random test (Monte Carlo) cross-validation, the probability of outliers presenting in good models with smaller prediction residual error sum of squares (PRESS) or in bad models with larger PRESS should be obviously different from normal samples. The method builds a large number of PLS models by using random test cross-validation at first, then the models are sorted by the PRESS, and at last the outliers are recognized according to the accumulative probability of each sample in the sorted models. For validation of the proposed method, four data sets, including three published data sets and a large data set of tobacco lamina, were investigated. The proposed method was proved to be highly efficient and veracious compared with the conventional leave-one-out (LOO) cross-validation method.

  12. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr [Korea Advanced Institute of Science and Technology 291 Daehak-ro, Yuseong-gu, Daejeon, Korea 305-701 (Korea, Republic of)

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  13. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    Science.gov (United States)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-01

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  14. Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.

    Science.gov (United States)

    Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H

    2016-11-01

    This study focuses on the characterization of the technical and economic feasibility of an enclosed photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis characterizes and breaks down the capital investment and operating costs and the production cost per unit of algal diesel. The economic modelling shows a total cost of production of algal raw oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effects of co-product credits and their impact on the economic performance of the algal-to-biofuel system are discussed. The Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profits of the analyzed model. Different markets for the allocation of co-products show significant shifts in the economic viability of the algal biofuel system.

  15. Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory

    CERN Document Server

    Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O

    2014-01-01

    We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.

  16. Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis

    Science.gov (United States)

    Karnesis, N.; Nofrarias, M.; Sopuerta, C. F.; Lobo, A.

    2012-06-01

    The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on-board LPF will consist of an exhaustive suite of experiments and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to have an understanding of every instrument on-board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is the Markov Chain Monte Carlo. A series of experiments are going to take place during flight operations and each experiment is going to provide us with essential information for the next in the sequence. Therefore, it is a priority to optimize and improve our tools available for data analysis during the mission. Using a Bayesian framework analysis allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal, which is no other than creating a robust and reliable tool for parameter estimation during the LPF mission.
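
    A generic Metropolis-Hastings sketch of the kind of MCMC parameter estimation described here is shown below; it fits the amplitude and noise level of a toy sinusoidal measurement model with weakly informative Gaussian priors. It is not the LTP analysis pipeline, and all model choices are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic measurements y = A*sin(w*t) + noise; parameters to estimate: A and sigma.
t = np.linspace(0.0, 10.0, 500)
A_true, sigma_true, w = 2.0, 0.5, 1.3
y = A_true * np.sin(w * t) + rng.normal(0.0, sigma_true, t.size)

def log_posterior(params):
    """Gaussian likelihood plus weakly informative Gaussian priors (assumed)."""
    A, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - A * np.sin(w * t)
    log_like = -0.5 * np.sum(resid**2) / sigma**2 - t.size * np.log(sigma)
    log_prior = -0.5 * (A / 10.0) ** 2 - 0.5 * (log_sigma / 5.0) ** 2
    return log_like + log_prior

n_steps, step = 20_000, np.array([0.05, 0.05])
chain = np.empty((n_steps, 2))
current = np.array([1.0, 0.0])
current_lp = log_posterior(current)
accepted = 0
for i in range(n_steps):
    proposal = current + step * rng.standard_normal(2)
    proposal_lp = log_posterior(proposal)
    if np.log(rng.random()) < proposal_lp - current_lp:   # Metropolis acceptance rule
        current, current_lp = proposal, proposal_lp
        accepted += 1
    chain[i] = current

burn = chain[n_steps // 2:]                               # discard the first half as burn-in
print(f"acceptance rate = {accepted / n_steps:.2f}")
print(f"A = {burn[:, 0].mean():.3f} +/- {burn[:, 0].std():.3f} (true {A_true})")
print(f"sigma = {np.exp(burn[:, 1]).mean():.3f} (true {sigma_true})")
```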

  17. Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.

    Science.gov (United States)

    de Pasquale, F; Del Gratta, C; Romani, G L

    2008-08-01

    In this work an Empirical Markov Chain Monte Carlo Bayesian approach to analyse fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis both for the image and noise model. Here, the noise autocorrelation is taken into account by adopting an AutoRegressive model of order one and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes and the hemodynamic response function parameters. These are estimated at each voxel from samples of the Posterior Distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low SNR data while important spatial structures in the data can be preserved. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostic of the adopted algorithm are presented. To validate the proposed approach a comparison of the results with those from a standard GLM analysis, spatial filtering techniques and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of the Bayes Factors and the analysis of the residuals. The proposed approach applied to Blocked Design and Event Related datasets produced reliable maps of activation.

  18. Statistical Modification Analysis of Helical Planetary Gears based on Response Surface Method and Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun; GUO Fan

    2015-01-01

    The tooth modification technique is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in tooth modification amounts on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
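
    A hedged sketch of the combined methodology follows: fit a quadratic response surface to a (here synthetic) DTE-fluctuation response over two modification amounts, then propagate normally distributed modification errors through the fitted surrogate by Monte Carlo. The stand-in response function and error levels are invented; the skewness of the output illustrates why the response need not remain normally distributed even for normal inputs.

```python
import numpy as np

rng = np.random.default_rng(8)

def dte_fluctuation(m1, m2):
    """Stand-in for the dynamic-model output: DTE fluctuation vs. two modification amounts (um)."""
    return 5.0 + 0.002 * (m1 - 20.0) ** 2 + 0.003 * (m2 - 15.0) ** 2 + 0.001 * m1 * m2

# Design points for the response surface (full factorial over assumed design ranges).
m1_pts, m2_pts = np.meshgrid(np.linspace(0.0, 40.0, 9), np.linspace(0.0, 30.0, 9))
m1_pts, m2_pts = m1_pts.ravel(), m2_pts.ravel()
response = dte_fluctuation(m1_pts, m2_pts)

# Quadratic response surface: y = b0 + b1*m1 + b2*m2 + b3*m1^2 + b4*m2^2 + b5*m1*m2.
design = np.column_stack([np.ones_like(m1_pts), m1_pts, m2_pts,
                          m1_pts**2, m2_pts**2, m1_pts * m2_pts])
coef, *_ = np.linalg.lstsq(design, response, rcond=None)

# Monte Carlo: nominal modification amounts with normally distributed manufacturing errors.
n_mc = 100_000
m1 = rng.normal(20.0, 3.0, n_mc)
m2 = rng.normal(15.0, 3.0, n_mc)
X = np.column_stack([np.ones(n_mc), m1, m2, m1**2, m2**2, m1 * m2])
dte = X @ coef

skew = np.mean(((dte - dte.mean()) / dte.std()) ** 3)
print(f"DTE fluctuation: mean={dte.mean():.3f}, std={dte.std():.3f}, skewness={skew:.2f}")
```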

  19. Core-scale solute transport model selection using Monte Carlo analysis

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.

    2013-06-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (3H) and sodium-22 (22Na ), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.

  20. Neural signatures of social conformity: A coordinate-based activation likelihood estimation meta-analysis of functional brain imaging studies.

    Science.gov (United States)

    Wu, Haiyan; Luo, Yi; Feng, Chunliang

    2016-12-01

    People often align their behaviors with group opinions, known as social conformity. Many neuroscience studies have explored the neuropsychological mechanisms underlying social conformity. Here we employed a coordinate-based meta-analysis on neuroimaging studies of social conformity with the purpose of revealing the convergence of the underlying neural architecture. We identified a convergence of reported activation foci in regions associated with normative decision-making, including ventral striatum (VS), dorsal posterior medial frontal cortex (dorsal pMFC), and anterior insula (AI). Specifically, consistent deactivation of VS and activation of dorsal pMFC and AI are identified when people's responses deviate from group opinions. In addition, the deviation-related responses in dorsal pMFC predict people's conforming behavioral adjustments. These findings are consistent with current models in which disagreement with others might evoke "error" signals, cognitive imbalance, and/or aversive feelings, which are plausibly detected in these brain regions as control signals to facilitate subsequent conforming behaviors. Finally, group opinions result in altered neural correlates of valuation, manifested as stronger VS responses to stimuli endorsed by others than to stimuli disliked by others.

  1. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some mild conditions. When the background driving Lévy process is of type A or B, we show that the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of proposed estimators.

  2. Maximum Likelihood TDOA-FDOA Estimator Using Markov Chain Monte Carlo Sampling

    Institute of Scientific and Technical Information of China (English)

    赵拥军; 赵勇胜; 赵闯

    2016-01-01

    This paper investigates the joint estimation of Time Difference Of Arrival (TDOA) and Frequency Difference Of Arrival (FDOA) in a passive location system, where the true value of the reference signal is unknown. A novel Maximum Likelihood (ML) estimator of TDOA and FDOA is constructed, and a Markov Chain Monte Carlo (MCMC) method is applied to find the global maximum of the likelihood function by generating realizations of TDOA and FDOA and taking their sample means as the estimates. Unlike the Cross Ambiguity Function (CAF) algorithm or the Expectation Maximization (EM) algorithm, the proposed algorithm can also estimate TDOA and FDOA values that are not integer multiples of the sampling interval, and it has no dependence on the initial estimate. The Cramer-Rao Lower Bound (CRLB) is also derived. Simulation results show that the proposed algorithm outperforms the CAF and EM algorithms for different SNR conditions, with higher accuracy and lower computational complexity.
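
    The idea of sampling a likelihood surface with MCMC and taking the sample mean as the estimate can be illustrated on a much simpler problem than the paper's TDOA-FDOA model. The sketch below estimates a single continuous time delay of a known pulse shape with a random-walk Metropolis sampler; the pulse shape, noise level, and sampler settings are all made up.

    ```python
    # Random-walk Metropolis over a continuous delay; estimate = posterior sample mean,
    # so the result is not restricted to integer multiples of the sampling interval.
    import numpy as np

    rng = np.random.default_rng(1)

    fs = 100.0                       # sampling rate (assumed)
    t = np.arange(0, 2, 1 / fs)
    pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 0.05) ** 2)   # known reference shape

    tau_true, sigma = 0.7137, 0.2    # true (non-integer-sample) delay and noise std
    y = pulse(tau_true) + rng.normal(0, sigma, t.size)

    def log_like(tau):
        r = y - pulse(tau)
        return -0.5 * np.sum(r * r) / sigma**2

    tau, ll = 1.0, log_like(1.0)
    samples = []
    for i in range(20000):
        prop = tau + rng.normal(0, 0.02)
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            tau, ll = prop, ll_prop
        if i > 2000:                 # discard burn-in
            samples.append(tau)

    print(f"true delay {tau_true:.4f} s, MCMC estimate {np.mean(samples):.4f} s")
    ```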

  3. A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.

    Science.gov (United States)

    McCandless, Lawrence C; Gustafson, Paul

    2017-04-06

    Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
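
    For orientation, here is a minimal MCSA-style sketch for a single unmeasured binary confounder, using the standard external-adjustment bias factor for a risk ratio. The observed estimate, its standard error, and all prior distributions are invented; this is not the data example analysed in the paper, and it illustrates only the "sample bias parameters from the prior" step that the paper contrasts with BSA.

    ```python
    # Monte Carlo sensitivity analysis (MCSA) sketch with made-up inputs.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000

    rr_obs = 1.8            # hypothetical observed (confounded) risk ratio
    se_log = 0.15           # hypothetical standard error of log(rr_obs)

    # Priors on the bias parameters (assumed, not from the paper):
    rr_cd = np.exp(rng.normal(np.log(2.0), 0.3, n))   # confounder-outcome risk ratio
    p1 = rng.beta(4, 6, n)                            # P(confounder | exposed)
    p0 = rng.beta(2, 8, n)                            # P(confounder | unexposed)

    bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)

    # Combine systematic (sampled) and random (normal) error on the log scale.
    log_rr_adj = np.log(rr_obs) - np.log(bias) + rng.normal(0, se_log, n)
    rr_adj = np.exp(log_rr_adj)

    lo, med, hi = np.percentile(rr_adj, [2.5, 50, 97.5])
    print(f"bias-adjusted RR: {med:.2f} (95% simulation interval {lo:.2f}-{hi:.2f})")
    ```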

  4. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    Science.gov (United States)

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [1977, Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, bias was also found in the estimation of the highest genotypic mean when the ascertainment threshold was higher than or close to the true value of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci.

  5. Correlation Between Brain Activation Changes and Cognitive Improvement Following Cognitive Remediation Therapy in Schizophrenia: An Activation Likelihood Estimation Meta-analysis

    Institute of Scientific and Technical Information of China (English)

    Yan-Yan Wei; Ji-Jun Wang; Chao Yan; Zi-Qiang Li; Xiao Pan; Yi Cui; Tong Su

    2016-01-01

    Background: Several studies using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have indicated that cognitive remediation therapy (CRT) might improve cognitive function by changing brain activations in patients with schizophrenia. However, the results were not consistent regarding which brain areas changed across different studies. The present activation likelihood estimation (ALE) meta-analysis was conducted to investigate whether cognitive function change was accompanied by brain activation changes, and where the areas most related to these changes were located in schizophrenia patients after CRT. Analyses of whole-brain studies and whole-brain + region of interest (ROI) studies were compared to explore the effect of the different methodologies on the results. Methods: A computerized systematic search was conducted to collect fMRI and PET studies on brain activation changes in schizophrenia patients from pre- to post-CRT. Nine studies using fMRI techniques were included in the meta-analysis. GingerALE 2.3.1 was used to perform the meta-analysis across these imaging studies. Results: The main areas with increased brain activation were in the frontal and parietal lobes, including the left medial frontal gyrus, left inferior frontal gyrus, right middle frontal gyrus, right postcentral gyrus, and inferior parietal lobule, in patients after CRT; no decreased brain activation was found. Although similar increased-activation brain areas were identified in the ALE with or without ROI studies, the analysis including ROI studies had a higher ALE value. Conclusions: The current findings suggest that CRT might improve the cognition of schizophrenia patients by increasing activations of the frontal and parietal lobes. In addition, including ROI studies in the ALE meta-analysis might provide more evidence to confirm the results.

  6. Penalized Splines for Smooth Representation of High-dimensional Monte Carlo Datasets

    CERN Document Server

    Whitehorn, Nathan; Lafebre, Sven

    2013-01-01

    Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
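
    A minimal P-spline sketch of the technique the abstract describes: a 1-D "Monte Carlo" histogram is smoothed with a B-spline basis and a second-order difference penalty. The sample, knot layout, and smoothing parameter are arbitrary choices for illustration, not the authors' implementation.

    ```python
    # Penalized B-spline (P-spline) smoothing of a Monte Carlo histogram.
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(3)

    # Toy "Monte Carlo" sample and its histogram.
    x = rng.normal(0.0, 1.0, 20_000)
    counts, edges = np.histogram(x, bins=60, range=(-4, 4))
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Cubic B-spline basis on equally spaced knots spanning the histogram range.
    k = 3
    n_basis = 25
    inner = np.linspace(-4, 4, n_basis - k + 1)
    knots = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]
    B = BSpline(knots, np.eye(n_basis), k)(centers)      # design matrix (n_bins, n_basis)

    # Second-order difference penalty and penalized normal equations.
    D = np.diff(np.eye(n_basis), n=2, axis=0)
    lam = 10.0                                           # smoothing parameter (assumed)
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ counts)

    smooth = BSpline(knots, coef, k)
    print("smoothed value at x=0:", smooth(0.0), " raw bin count near 0:", counts[30])
    ```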

  7. Augmented Likelihood Image Reconstruction.

    Science.gov (United States)

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, temporarily appearing artifacts are reduced with a bilateral filter, and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
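
    To illustrate only the constraint-handling idea, here is a generic augmented-Lagrangian (method of multipliers) sketch for an equality-constrained least-squares toy problem, where a few unknowns are fixed to known values in the spirit of a known-attenuation implant region. It is not the authors' transmission-CT log-likelihood algorithm, and all matrices are random stand-ins.

    ```python
    # Augmented Lagrangian for: minimize ||A x - b||^2  subject to  C x = d.
    import numpy as np

    rng = np.random.default_rng(4)

    n = 20
    A = rng.normal(size=(40, n))
    x_true = rng.normal(size=n)
    b = A @ x_true + 0.05 * rng.normal(size=40)

    # "Prior knowledge": the first three components are known exactly
    # (analogous to known attenuation values inside a metal implant).
    C = np.zeros((3, n)); C[0, 0] = C[1, 1] = C[2, 2] = 1.0
    d = x_true[:3]

    mu, lam = 1.0, np.zeros(3)
    x = np.zeros(n)
    for it in range(50):
        # x-update: minimize ||Ax-b||^2 + lam^T(Cx-d) + (mu/2)||Cx-d||^2 (closed form)
        H = 2 * A.T @ A + mu * C.T @ C
        g = 2 * A.T @ b - C.T @ lam + mu * C.T @ d
        x = np.linalg.solve(H, g)
        # multiplier and penalty updates
        r = C @ x - d
        lam = lam + mu * r
        mu = min(mu * 1.5, 1e6)

    print("constraint residual:", np.linalg.norm(C @ x - d))
    print("error vs truth     :", np.linalg.norm(x - x_true))
    ```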

  8. Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis

    Science.gov (United States)

    Peres, David Johnny; Cancelliere, Antonino

    2015-04-01

    Rainfall thresholds for landslide triggering are useful in early warning systems to be implemented in prone areas. Direct statistical analysis of historical records of rainfall and landslide data presents different shortcomings, typically due to incompleteness of landslide historical archives, imprecise knowledge of the triggering instants, unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented. Such an approach contributes to overcoming some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework consists of the combination of a stochastic rainfall model with a hydrological and a slope-stability model. Specifically, 1000-year-long hourly synthetic rainfall and related slope-stability factor-of-safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input, physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used. An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
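
    The ROC-based threshold selection step can be illustrated on synthetic data. The sketch below draws made-up event intensities for triggering and non-triggering events and picks the intensity threshold maximizing Youden's J = TPR - FPR; it does not reproduce the paper's Neyman-Scott/TRIGRS simulation chain.

    ```python
    # ROC-style threshold selection on synthetic triggering/non-triggering events.
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic event-averaged rainfall intensities (mm/h); triggering events tend
    # to have higher intensities (purely illustrative distributions).
    i_trig = rng.lognormal(mean=2.0, sigma=0.4, size=300)
    i_non = rng.lognormal(mean=1.2, sigma=0.5, size=3000)

    intensity = np.concatenate([i_trig, i_non])
    triggered = np.concatenate([np.ones_like(i_trig), np.zeros_like(i_non)]).astype(bool)

    best_j, best_thr = -1.0, None
    for thr in np.unique(intensity):
        pred = intensity >= thr
        tpr = np.mean(pred[triggered])          # correctly predicted triggerings
        fpr = np.mean(pred[~triggered])         # false alarms
        j = tpr - fpr
        if j > best_j:
            best_j, best_thr = j, thr

    print(f"selected intensity threshold: {best_thr:.2f} mm/h (Youden J = {best_j:.2f})")
    ```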

  9. Direct determination of intermolecular structure of ethanol adsorbed in micropores using X-ray diffraction and reverse Monte Carlo analysis

    OpenAIRE

    Iiyama, Taku; Hagi, Kousuke; Urushibara, Takafumi; Ozeki, Sumio

    2009-01-01

    The intermolecular structure of C2H5OH molecules confined in the slit-shaped graphitic micropores of activated carbon fiber was investigated by in situ X-ray diffraction (XRD) measurement and reverse Monte Carlo (RMC) analysis. The pseudo-3-dimensional intermolecular structure of C2H5OH adsorbed in the micropores was determined by applying the RMC analysis to XRD data, assuming a simple slit-shaped space composed of double graphene sheets. The results were consistent with conventional Mont...

  10. Quasi-Monte Carlo based global uncertainty and sensitivity analysis in modeling free product migration and recovery from petroleum-contaminated aquifers.

    Science.gov (United States)

    He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi

    2012-06-15

    This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate the global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Another advantage of GUSA lies in the alleviation of computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m after 20 years) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then the skimmer. The recovery rates of the three schemes would rapidly decrease after 2 years (less than 0.05 m^3/day), thus short-term remediation is not suggested.
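
    A minimal sketch of the sampling side of such a framework: draw Sobol' quasi-Monte Carlo samples of uncertain parameters, run a stand-in model, and keep the GLUE-style "behavioural" parameter sets whose error falls below a threshold. The model, parameter ranges, observed value, and acceptance threshold are all invented.

    ```python
    # Quasi-Monte Carlo sampling feeding a GLUE-style acceptance step.
    import numpy as np
    from scipy.stats import qmc

    def model(k, phi, s_r):
        # Stand-in for the free-product recovery simulator (illustrative only).
        return 120.0 * k * (1 - s_r) / (phi + 0.1)

    observed = 95.0          # hypothetical observed total recovery volume (m^3)

    sampler = qmc.Sobol(d=3, scramble=True, seed=6)
    u = sampler.random_base2(m=10)                       # 2^10 QMC samples in [0,1)^3
    lower = np.array([0.1, 0.25, 0.05])                  # k, porosity, residual saturation
    upper = np.array([1.0, 0.45, 0.30])
    params = qmc.scale(u, lower, upper)

    sim = model(params[:, 0], params[:, 1], params[:, 2])
    err = np.abs(sim - observed)

    behavioural = params[err < 10.0]                     # GLUE-style acceptance threshold
    print(f"{len(behavioural)} of {len(params)} parameter sets accepted")
    print("accepted porosity range:", behavioural[:, 1].min(), "-", behavioural[:, 1].max())
    ```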

  11. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, A.R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Nuclear Engineering Division (United States); Argonne National Laboratory, Mathematics and Computer Science Division (United States); Smith, K., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Romano, P.K., E-mail: romano7@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Forget, B., E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Felker, K., E-mail: felker@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Division (United States)

    2013-02-15

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes.

  12. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, Stuart R., E-mail: slatterysr@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Wilson, Paul P.H., E-mail: wilsonp@engr.wisc.edu [University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)

    2015-12-15

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. In general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.

  13. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    Science.gov (United States)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as the degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  14. Maximum likelihood estimation for life distributions with competing failure modes

    Science.gov (United States)

    Sidik, S. M.

    1979-01-01

    A general model for competing failure modes is presented, assuming that the location parameters for each mode are expressible as linear functions of the stress variables and that the failure modes act independently. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  15. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference... is implemented using Markov chain Monte Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared...

  16. Modified maximum likelihood registration based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Yongqing Qi; Zhongliang Jing; Shiqiang Hu

    2007-01-01

    The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.

  17. A Monte Carlo approach to Beryllium-7 solar neutrino analysis with KamLAND

    Science.gov (United States)

    Grant, Christopher Peter

    Terrestrial measurements of neutrinos produced by the Sun have been of great interest for over half a century because of their ability to test the accuracy of solar models. The first solar neutrinos detected with KamLAND provided a measurement of the 8B solar neutrino interaction rate above an analysis threshold of 5.5 MeV. This work describes efforts to extend KamLAND's detection sensitivity to solar neutrinos below 1 MeV, more specifically, those produced with an energy of 0.862 MeV from the 7Be electron-capture decay. Many of the difficulties in measuring solar neutrinos below 1 MeV arise from backgrounds caused abundantly by both naturally occurring and man-made radioactive nuclides. The primary nuclides of concern were 210Bi, 85Kr, and 39Ar. Since May of 2007, the KamLAND experiment has undergone two separate purification campaigns. During both campaigns a total of 5.4 ktons (about 6440 m^3) of scintillator was circulated through a purification system, which utilized fractional distillation and nitrogen purging. After the purification campaign, reduction factors of 1.5 x 10^3 for 210Bi and 6.5 x 10^4 for 85Kr were observed. The reduction of the backgrounds provided a unique opportunity to observe the 7Be solar neutrino rate in KamLAND. An observation required detailed knowledge of the detector response at low energies, and to accomplish this, a full detector Monte Carlo simulation, called KLG4sim, was utilized. The optical model of the simulation was tuned to match the detector response observed in data after purification, and the software was optimized for the simulation of internal backgrounds used in the 7Be solar neutrino analysis. The results of this tuning and estimates from simulations of the internal backgrounds and external backgrounds caused by radioactivity on the detector components are presented. The first KamLAND analysis based on Monte Carlo simulations in the energy region below 2 MeV is shown here. The comparison of the chi2 between the null

  18. Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, C. H.; Han, B. S.; Park, H. J.; Park, D. K. [Seoul National Univ., Seoul (Korea, Republic of)

    2007-01-15

    The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module) which will be installed in ITER (International Thermonuclear Experimental Reactor). This project is intended to analyze the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, we will conduct numerical experiments for analyzing the neutronic design of the Korean HCML TBM and the DEMO fusion blanket, and for improving the nuclear performance. The results of the numerical experiments performed in this project will be utilized further for a design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations for evaluating the TBR (Tritium Breeding Ratio) and EMF (Energy Multiplication Factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in the ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the results of the activation and shielding analysis, the activity drops to 1.5% of the initial value and the decay heat drops to 0.02% of the initial amount after 10 years from plasma shutdown.

  19. Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Verdu, Gumersindo; Rodenas, Jose, E-mail: sergalbe@upv.es [Universitat Politecnica de Valencia, Valencia, (Spain). Instituto de Seguridad Industrial, Radiofisica y Medioambiental (ISIRYM); Ortiz, J. [Universitat Politecnica de Valencia, Valencia, (Spain). Servicio de Radiaciones. Lab. de Radiactividad Ambiental; Pereira, Claubia [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2013-07-01

    A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of the HPGe detector using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and due to the loss of intrinsic properties in aging detectors. Some parameters such as the active volume or the Ge dead layer thickness are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and a petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained and the efficiency curve can be calculated from the net peak areas, taking into account the certified activity of the source. In order to avoid errors due to the net area calculation, the simulated PHD is treated using the GammaVision software. On the other hand, it is proposed to use the Noether-Wilks formula to perform an uncertainty analysis of the model, with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low energy range of the efficiency curve. In the high energy range (from 300 to 1800 keV) the main contribution to the absolute uncertainty is due to variations in the active volume. (author)
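
    For context, the efficiency-curve step reduces to simple arithmetic once net peak areas are available. The sketch below uses an invented certified activity, live time, and peak list; it is not the detector or source described in the paper.

    ```python
    # Full-energy-peak efficiency from net peak areas and a certified activity.
    import numpy as np

    activity_bq = 5_000.0          # certified activity (assumed)
    live_time_s = 3_600.0          # counting/simulation live time (assumed)

    # energy (keV), gamma emission probability, net peak area (counts) -- hypothetical
    peaks = np.array([
        #  E       p_gamma   net_area
        [  59.5,   0.359,    41_000.0],
        [ 122.1,   0.856,    97_000.0],
        [ 661.7,   0.851,    38_000.0],
        [1332.5,   0.999,    21_000.0],
    ])

    energy, p_gamma, net_area = peaks.T
    emitted = activity_bq * live_time_s * p_gamma        # photons emitted per line
    efficiency = net_area / emitted

    for e, eff in zip(energy, efficiency):
        print(f"{e:7.1f} keV  efficiency = {eff:.4f}")

    # A common empirical form: log-log polynomial fit of efficiency vs energy.
    coef = np.polyfit(np.log(energy), np.log(efficiency), deg=2)
    print("log-log polynomial coefficients:", coef)
    ```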

  20. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients.

    Directory of Open Access Journals (Sweden)

    Cosimo Urgesi

    2014-05-01

    Several neurophysiological and neuroimaging studies have suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others' actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others' actions. The specific anatomical substrates of such neuropsychological deficits, however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding.

  1. Melanin and blood concentration in a human skin model studied by multiple regression analysis: assessment by Monte Carlo simulation

    Science.gov (United States)

    Shimada, M.; Yamada, Y.; Itoh, M.; Yatagai, T.

    2001-09-01

    Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.
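
    A minimal sketch of the regression idea described above: absorbance at several wavelengths is modelled as a weighted sum of melanin and blood contributions plus a baseline term, and the concentrations are recovered by least squares. The extinction coefficients and the "measured" spectrum are made up, not values from the paper or from real skin.

    ```python
    # Multiple regression based on a modified Beer-Lambert model (illustrative only).
    import numpy as np

    rng = np.random.default_rng(7)

    wavelengths = np.array([450., 500., 550., 600., 650., 700.])
    eps_mel = np.array([1.8, 1.4, 1.1, 0.9, 0.7, 0.55])     # hypothetical units
    eps_blood = np.array([0.9, 1.2, 1.6, 0.8, 0.4, 0.25])

    c_mel_true, c_blood_true, baseline_true = 0.6, 0.3, 0.1
    absorbance = (eps_mel * c_mel_true + eps_blood * c_blood_true + baseline_true
                  + rng.normal(0, 0.01, wavelengths.size))

    # Design matrix of the modified Beer-Lambert regression.
    X = np.column_stack([eps_mel, eps_blood, np.ones_like(wavelengths)])
    (c_mel, c_blood, baseline), *_ = np.linalg.lstsq(X, absorbance, rcond=None)

    print(f"melanin ~ {c_mel:.3f} (true {c_mel_true}), blood ~ {c_blood:.3f} (true {c_blood_true})")
    ```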

  2. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric modelling.

  3. Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope

    Science.gov (United States)

    Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao

    2015-10-01

    The X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical, and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that neither the energy nor the reflectivity information of the X-rays can be taken into consideration, which misrepresents the essential behavior of the XPT. The other is that the coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total reflection theory. The main idea, procedures, and operational steps of the proposed method are addressed in detail. First, both the energy and the reflectivity information of the X-rays are taken into consideration simultaneously, and the thermal-structural coupling equation and the multiphysics coupling analysis model are formulated based on the finite element method; the thermal-structural coupling analysis under different working conditions is then implemented. Second, the mirror deformations are obtained using a construction geometry function; the deformed mirror is fitted with a polynomial function and the fitting error is evaluated. Third, the focusing performance of the XPT is evaluated by the RMS. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the law governing the effect of deformation on the focusing performance has been obtained. The focusing performances for the thermal-structural, thermal, and structural deformations degrade by 30.01%, 14.35%, and 7.85%, respectively; the RMS values of the dispersion spots are 2.9143 mm, 2.2038 mm, and 2.1311 mm. As a result, the validity of the proposed method is verified through

  4. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...... d and b, and prove that they converge in distribution. We use the results to prove consistency of the maximum likelihood estimator for d,b in a large compact subset of {1/2...

  5. Monte Carlo analysis of a control technique for a tunable white lighting system

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen

    2017-01-01

    A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation. The outputs show...

  6. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    Science.gov (United States)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly(methyl methacrylate)). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are a few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty at different powers and scanning speeds has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
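
    A minimal Monte Carlo propagation sketch in the spirit of the approach described above: assumed uncertainties in PMMA thermophysical properties are pushed through a simple energy-balance depth estimate d = P / (rho * v * w * (c_p * dT + L_v)). Both the model form and every number are assumptions for illustration; they are not the analytical model or property values used in the paper.

    ```python
    # Monte Carlo uncertainty propagation through an assumed channel-depth model.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 100_000

    P = 20.0          # laser power, W (fixed setting)
    v = 0.1           # scanning speed, m/s (fixed setting)
    w = 200e-6        # effective kerf width, m (assumed)

    # Thermophysical properties of PMMA with assumed uncertainties.
    rho = rng.normal(1180.0, 30.0, n)        # density, kg/m^3
    cp = rng.normal(1466.0, 70.0, n)         # specific heat, J/(kg K)
    dT = rng.normal(340.0, 20.0, n)          # temperature rise to vaporization, K
    Lv = rng.normal(1.0e6, 1.0e5, n)         # latent heat of vaporization, J/kg

    depth = P / (rho * v * w * (cp * dT + Lv))   # metres

    lo, med, hi = np.percentile(depth * 1e6, [2.5, 50, 97.5])
    print(f"depth ~ {med:.0f} um, 95% interval [{lo:.0f}, {hi:.0f}] um")
    ```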

  7. PROMSAR: A backward Monte Carlo spherical RTM for the analysis of DOAS remote sensing measurements

    Science.gov (United States)

    Palazzi, E.; Petritoli, A.; Giovanelli, G.; Kostadinov, I.; Bortoli, D.; Ravegnani, F.; Sackey, S. S.

    A correct interpretation of diffuse solar radiation measurements made by Differential Optical Absorption Spectroscopy (DOAS) remote sensors requires the use of radiative transfer models of the atmosphere. The simplest models consider radiation scattering in the atmosphere as a single scattering process. More realistic atmospheric models are those which consider multiple scattering, and their application is useful and essential for the analysis of zenith and off-axis measurements regarding the lowest layers of the atmosphere, such as the boundary layer. These are characterized by the highest values of air density and quantities of particles and aerosols acting as scattering nuclei. A new atmospheric model, PROcessing of Multi-Scattered Atmospheric Radiation (PROMSAR), which includes multiple Rayleigh and Mie scattering, has recently been developed at ISAC-CNR. It is based on a backward Monte Carlo technique which is very suitable for studying the various interactions taking place in a complex and non-homogeneous system like the terrestrial atmosphere. The PROMSAR code calculates the mean path of the radiation within each layer into which the atmosphere is sub-divided, taking into account the large variety of processes that solar radiation undergoes during propagation through the atmosphere. This quantity is then employed to work out the Air Mass Factor (AMF) of several trace gases, to simulate their slant column amounts in zenith and off-axis configurations, and to calculate the weighting functions from which information about the vertical distribution of the gas is obtained using inversion methods. Results from the model, simulations and comparisons with actual slant column measurements are presented and discussed.

  8. A Meta-analysis on the neural basis of planning: Activation likelihood estimation of functional brain imaging results in the Tower of London task.

    Science.gov (United States)

    Nitschke, Kai; Köstering, Lena; Finkel, Lisa; Weiller, Cornelius; Kaller, Christoph P

    2017-01-01

    The ability to mentally design and evaluate series of future actions has often been studied in terms of planning abilities, commonly using well-structured laboratory tasks like the Tower of London (ToL). Despite a wealth of studies, findings on the specific localization of planning processes within prefrontal cortex (PFC) and on the hemispheric lateralization are equivocal. Here, we address this issue by integrating evidence from two different sources of data: First, we provide a systematic overview of the existing lesion data on planning in the ToL (10 studies, 211 patients) which does not indicate any evidence for a general lateralization of planning processes in (pre)frontal cortex. Second, we report a quantitative meta-analysis with activation likelihood estimation based on 31 functional neuroimaging datasets on the ToL. Separate meta-analyses of the activation patterns reported for Overall Planning (537 participants) and for Planning Complexity (182 participants) congruently show bilateral contributions of mid-dorsolateral PFC, frontal eye fields, supplementary motor area, precuneus, caudate, anterior insula, and inferior parietal cortex in addition to a left-lateralized involvement of rostrolateral PFC. In contrast to previous attributions of planning-related brain activity to the entire dorsolateral prefrontal cortex (dlPFC) and either its left or right homolog derived from single studies on the ToL, the present meta-analyses stress the pivotal role specifically of the mid-dorsolateral part of PFC (mid-dlPFC), presumably corresponding to Brodmann Areas 46 and 9/46, and strongly argue for a bilateral rather than lateralized involvement of the dlPFC in planning in the ToL. Hum Brain Mapp 38:396-413, 2017. © 2016 Wiley Periodicals, Inc.

  9. Cluster Monte Carlo and numerical mean field analysis for the water liquid-liquid phase transition

    Science.gov (United States)

    Mazza, Marco G.; Stokely, Kevin; Strekalova, Elena G.; Stanley, H. Eugene; Franzese, Giancarlo

    2009-04-01

    Using Wolff's cluster Monte Carlo simulations and numerical minimization within a mean field approach, we study the low temperature phase diagram of water, adopting a cell model that reproduces the known properties of water in its fluid phases. Both methods allow us to study the thermodynamic behavior of water at temperatures where other numerical approaches - both Monte Carlo and molecular dynamics - are seriously hampered by the large increase of the correlation times. The cluster algorithm also allows us to emphasize that the liquid-liquid phase transition corresponds to the percolation transition of tetrahedrally ordered water molecules.
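
    To illustrate the cluster-update idea, here is a minimal Wolff cluster Monte Carlo sketch for the 2-D Ising model. The paper's system is a water cell model, not Ising; lattice size, coupling, and sweep counts below are arbitrary.

    ```python
    # Wolff cluster algorithm for the 2-D Ising model (illustrative only).
    import numpy as np

    rng = np.random.default_rng(9)

    L = 32
    beta = 0.44                       # near the 2-D Ising critical coupling ~0.4407
    spins = rng.choice([-1, 1], size=(L, L))
    p_add = 1.0 - np.exp(-2.0 * beta)

    def wolff_step(s):
        i, j = rng.integers(L, size=2)
        seed = s[i, j]
        cluster = {(i, j)}
        stack = [(i, j)]
        while stack:
            x, y = stack.pop()
            for nx, ny in ((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L):
                if (nx, ny) not in cluster and s[nx, ny] == seed and rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:          # flip the whole cluster at once
            s[x, y] = -seed
        return len(cluster)

    mags = []
    for sweep in range(2000):
        wolff_step(spins)
        if sweep > 500:
            mags.append(abs(spins.mean()))

    print(f"mean |magnetisation| per spin: {np.mean(mags):.3f}")
    ```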

  10. Monte Carlo homogenized limit analysis model for randomly assembled blocks in-plane loaded

    Science.gov (United States)

    Milani, Gabriele; Lourenço, Paulo B.

    2010-11-01

    A simple rigid-plastic homogenization model for the limit analysis of masonry walls in-plane loaded and constituted by the random assemblage of blocks with variable dimensions is proposed. In the model, the blocks constituting a masonry wall are supposed infinitely resistant with a Gaussian distribution of height and length, whereas the joints are reduced to interfaces with frictional behavior and limited tensile and compressive strength. Block by block, a representative element of volume (REV) is considered, constituted by a central block interconnected with its neighbors by means of rigid-plastic interfaces. The model is characterized by a few material parameters, is numerically inexpensive and very stable. A sub-class of elementary deformation modes is a-priori chosen in the REV, mimicking typical failures due to joint cracking and crushing. Masonry strength domains are obtained by equating the power dissipated in the heterogeneous model with the power dissipated by a fictitious homogeneous macroscopic plate. Due to the inexpensiveness of the proposed approach, Monte Carlo simulations can be repeated on the REV in order to obtain a stochastic estimation of in-plane masonry strength at different orientations of the bed joints with respect to the external loads, accounting for the statistical variability of the block dimensions. Two cases are discussed, the former consisting of fully stochastic REV assemblages (obtained considering a random variability of both block height and length) and the latter assuming the presence of a horizontal alignment along the bed joints, i.e. allowing block height variability only row by row. The case of deterministic block height (quasi-periodic texture) can be obtained as a subclass of this latter case. Masonry homogenized failure surfaces are finally implemented in an upper bound FE limit analysis code for the analysis at collapse of entire walls in-plane loaded. Two cases of engineering practice, consisting of the prediction of the failure

  11. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    Science.gov (United States)

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators.
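
    A minimal sketch of the point-source line-of-sight form mentioned above, dose rate ~ S(E, theta) * exp(-d / lambda) / r^2. The source term and attenuation length below are invented placeholders, not the values tabulated in the paper.

    ```python
    # Point-source line-of-sight shielding estimate (illustrative numbers only).
    import numpy as np

    def transmitted_dose(source_term, r_m, shield_thickness_cm, atten_length_cm):
        """Dose rate behind a slab shield at distance r from the target."""
        return source_term * np.exp(-shield_thickness_cm / atten_length_cm) / r_m**2

    # Hypothetical numbers for ~250 MeV protons on an iron target, concrete shield:
    S0 = 2.0e-2        # source term at 1 m, arbitrary normalisation (assumed)
    lam = 45.0         # attenuation length of concrete for secondary neutrons, cm (assumed)

    for d in (100.0, 150.0, 200.0):          # concrete thicknesses in cm
        h = transmitted_dose(S0, r_m=5.0, shield_thickness_cm=d, atten_length_cm=lam)
        print(f"{d:5.0f} cm concrete -> relative transmitted dose {h:.3e}")
    ```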

  12. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C_7, C_9, and C_10 of the ΔB = 1 effective field theory. We assume the standard-model set of b → sγ and b → s l+l- operators with real-valued C_i. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B → K*γ, B → K(*) l+l-, and B_s → μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit
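
    A heavily simplified sketch of the adaptation idea behind Population Monte Carlo: a single multivariate-normal proposal is iteratively re-fitted to importance-weighted samples of a toy two-dimensional target. The real method adapts a mixture density and runs massively in parallel; the target, proposal, and iteration counts here are invented.

    ```python
    # Simplified adaptive importance sampling (single-component "PMC") on a toy target.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(10)

    def log_target(x):
        # Toy 2-D unnormalised posterior: a correlated Gaussian.
        mu = np.array([1.0, -2.0])
        cov = np.array([[1.0, 0.8], [0.8, 2.0]])
        return multivariate_normal(mu, cov).logpdf(x)

    # Broad initial proposal (the stage PMC is most sensitive to).
    mu_p = np.zeros(2)
    cov_p = 25.0 * np.eye(2)

    for it in range(8):
        x = rng.multivariate_normal(mu_p, cov_p, size=4000)
        log_w = log_target(x) - multivariate_normal(mu_p, cov_p).logpdf(x)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Re-fit the proposal to the weighted sample (moment matching).
        mu_p = w @ x
        diff = x - mu_p
        cov_p = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)
        ess = 1.0 / np.sum(w**2)
        print(f"iteration {it}: effective sample size {ess:.0f} of {len(x)}")

    print("adapted proposal mean:", mu_p)
    ```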

  13. Computer program uses Monte Carlo techniques for statistical system performance analysis

    Science.gov (United States)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.

  14. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODEs), which do not have an analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of the multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  15. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Science.gov (United States)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
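
    The core projection step of null-space Monte Carlo can be illustrated with plain linear algebra: random parameter perturbations are projected onto the (approximate) null space of the Jacobian of observations with respect to parameters, so the perturbed fields leave the calibrated fit nearly unchanged. The Jacobian, dimensions, and rank threshold below are invented; this is not the Henry-problem model or the PEST implementation.

    ```python
    # Null-space Monte Carlo projection sketch with a random stand-in Jacobian.
    import numpy as np

    rng = np.random.default_rng(11)

    n_obs, n_par = 30, 100
    J = rng.normal(size=(n_obs, n_par))       # stand-in sensitivity (Jacobian) matrix
    p_cal = rng.normal(size=n_par)            # calibrated parameter values

    # SVD: columns of V beyond the numerical rank span the calibration null space.
    U, s, Vt = np.linalg.svd(J, full_matrices=True)
    rank = np.sum(s > 1e-8 * s[0])
    V_null = Vt[rank:].T                      # (n_par, n_par - rank)

    # Generate calibration-constrained random parameter sets.
    fields = []
    for _ in range(1000):
        dp = rng.normal(scale=0.5, size=n_par)            # raw random perturbation
        dp_null = V_null @ (V_null.T @ dp)                # keep only the null-space part
        fields.append(p_cal + dp_null)
    fields = np.array(fields)

    # The linearized observations barely change for the projected perturbations.
    print("max change in J @ p over all fields:",
          np.max(np.abs(fields @ J.T - p_cal @ J.T)))
    ```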

  16. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    GaoChunwen; XuJingzhen; RichardSinding-Larsen

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.

  17. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    Science.gov (United States)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI should have the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables. (2) The resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC
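
    As a rough illustration of the bootstrap branch of such a comparison, the sketch below resamples paired drought variables and derives a percentile confidence interval for the Clayton copula parameter, estimated here by inverting Kendall's tau (theta = 2*tau/(1 - tau)). It is a minimal sketch, not the study's code: the synthetic data, sample size and number of resamples are assumptions.

```python
# Minimal sketch: percentile bootstrap CI for a Clayton copula parameter,
# with theta estimated by inverting Kendall's tau (theta = 2*tau / (1 - tau)).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

def sample_clayton(theta, n):
    """Draw n pairs from a Clayton copula via conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def clayton_theta(u, v):
    """Moment-type estimate of theta from Kendall's tau."""
    tau, _ = kendalltau(u, v)
    return 2.0 * tau / (1.0 - tau)

# synthetic "drought duration/severity" pairs with a known dependence strength
u_obs, v_obs = sample_clayton(theta=2.0, n=100)

# nonparametric bootstrap of the paired observations
boot = []
n = len(u_obs)
for _ in range(2000):
    idx = rng.integers(0, n, size=n)          # resample pairs with replacement
    boot.append(clayton_theta(u_obs[idx], v_obs[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])     # 95% percentile CI
print(f"theta_hat = {clayton_theta(u_obs, v_obs):.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```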

  18. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

Full Text Available Abstract Background Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte-Carlo simulations, to measure completeness grades of GRNs. Results We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can
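
    The core of the heuristic, detecting perturbation effects whose direction does not depend on the kinetic parameters, can be illustrated on a toy two-gene circuit. The sketch below is a schematic reconstruction under assumed Hill-type kinetics and log-uniform parameter sampling; it is not the published endomesoderm model.

```python
# Toy illustration: check whether the effect of a gene knockout is invariant
# across randomly sampled kinetic parameters of a small ODE gene circuit.
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(42)

def rhs(x, t, k1, k2, d1, d2, K):
    x1, x2 = x
    dx1 = k1 / (1.0 + (x2 / K) ** 2) - d1 * x1   # gene 1, repressed by gene 2
    dx2 = k2 * x1 / (K + x1) - d2 * x2           # gene 2, activated by gene 1
    return [dx1, dx2]

def steady_gene2(params, knockout_gene1=False):
    k1, k2, d1, d2, K = params
    if knockout_gene1:
        k1 = 0.0                                  # perturbation: remove gene 1 production
    x = odeint(rhs, [0.1, 0.1], np.linspace(0.0, 500.0, 200), args=(k1, k2, d1, d2, K))
    return x[-1, 1]                               # steady-state level of gene 2

signs = []
for _ in range(200):                              # randomly sampled parameter sets
    params = 10.0 ** rng.uniform(-1.0, 1.0, size=5)   # log-uniform sampling
    effect = steady_gene2(params, True) - steady_gene2(params, False)
    signs.append(np.sign(effect))

invariant = np.mean(np.array(signs) == -1)        # knockout should lower gene 2
print(f"fraction of samples with the expected (negative) effect: {invariant:.2f}")
```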

  19. Quantification of Monte Carlo event generator scale-uncertainties with an example ATLAS analysis studying underlying event properties

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Gerhard [University of Oxford (United Kingdom); Krauss, Frank [IPPP Durham (United Kingdom); Lacker, Heiko; Leyton, Michael; Mamach, Martin; Schulz, Holger; Weyh, Daniel [Humboldt University of Berlin (Germany)

    2012-07-01

Monte Carlo (MC) event generators are widely employed in the analysis of experimental data, including LHC data, in order to predict the features of observables and to test analyses with them. These generators rely on phenomenological models containing various parameters which are free within certain ranges. Variations of these parameters relative to their defaults lead to uncertainties on the predictions of the event generators and, in turn, on the results of any experimental data analysis making use of the event generator. A generalized method for quantifying a certain class of these generator-based uncertainties is presented in this talk. For the SHERPA event generator, we study the effect on the analysis results of uncertainties in the choice of the merging and factorization scales. The quantification is done within an example ATLAS analysis measuring underlying event (UE) properties in Z-boson production, limited to low transverse momenta of the Z-boson (p_T^Z < 3 GeV). The analysis extracts event-shape distributions from charged particles in the event that do not belong to the Z decay, both for generated Monte Carlo events and for data, which are unfolded back to the generator level.

  20. Weighted Likelihood Estimation Method Based on the Multidimensional Nominal Response Model

    Institute of Scientific and Technical Information of China (English)

    孙珊珊; 陶剑

    2008-01-01

The Monte Carlo study evaluates the relative accuracy of Warm's (1989) weighted likelihood estimate (WLE) compared to the maximum likelihood estimate (MLE) using the nominal response model. The results indicate that the WLE was more accurate than the MLE.

  1. Estimating Effective Elastic Thickness on Venus from Gravity and Topography: Robust Results from Multi-taper and Maximum-Likelihood Analysis

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.

    2012-12-01

    Venus has undergone a markedly different evolution than Earth. Its tectonics do not resemble the plate-tectonic system observed on Earth, and many surface features—such as tesserae and coronae—lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere. Lithospheric parameters such as the effective elastic thickness have previously been estimated from the correlation between topography and gravity anomalies, either in the space domain or the spectral domain (where admittance or coherence functions are estimated). Correlation and spectral analyses that have been obtained on Venus have been limited by geometry (typically, only rectangular or circular data windows were used), and most have lacked robust error estimates. There are two levels of error: the first being how well the correlation, admittance or coherence can be estimated; the second and most important, how well the lithospheric elastic thickness can be estimated from those. The first type of error is well understood, via classical analyses of resolution, bias and variance in multivariate spectral analysis. Understanding this error leads to constructive approaches of performing the spectral analysis, via multi-taper methods (which reduce variance) with well-chosen optimal tapers (to reduce bias). The second type of error requires a complete analysis of the coupled system of differential equations that describes how certain inputs (the unobservable initial loading by topography at various interfaces) are being mapped to the output (final, measurable topography and gravity anomalies). The equations of flexure have one unknown: the flexural rigidity or effective elastic thickness—the parameter of interest. Fortunately, we have recently come to a full understanding of this second type of error, and derived a maximum-likelihood estimation (MLE) method that results in unbiased and minimum-variance estimates of the flexural rigidity under a variety of initial

  2. Monte Carlo analysis of the accelerator-driven system at Kyoto University Research Reactor Institute

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Won Kyeong; Lee, Deok Jung [Nuclear Engineering Division, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Lee, Hyun Chul [VHTR Technology Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Pyeon, Cheol Ho [Nuclear Engineering Science Division, Kyoto University Research Reactor Institute, Osaka (Japan); Shin, Ho Cheol [Core and Fuel Analysis Group, Korea Hydro and Nuclear Power Central Research Institute, Daejeon (Korea, Republic of)

    2016-04-15

    An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan), a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft-Walton type accelerator, which generates the external neutron source by deuterium-tritium reactions. In this paper, neutronic analyses of a series of experiments have been re-estimated by using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.

  3. Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan

    2007-05-15

This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study has focused on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include k-inf, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the k-inf values of the LWR fuel pin problems show agreement with McCARD within 956 pcm and 658 pcm, respectively. For the assembly problems with gadolinia burnable poison rods, the largest difference between the k-inf values is 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.

  4. An analysis on the theory of pulse oximetry by Monte Carlo simulation

    Science.gov (United States)

    Fan, Shangchun; Cai, Rui; Xing, Weiwei; Liu, Changting; Chen, Guangfei; Wang, Junfeng

    2008-10-01

The pulse oximeter is a kind of electronic instrument that measures the oxygen saturation of arterial blood and the pulse rate by non-invasive techniques. It enables prompt recognition of hypoxemia. In a conventional transmittance-type pulse oximeter, the absorption of light by oxygenated and reduced hemoglobin is measured at two wavelengths, 660 nm and 940 nm. However, the accuracy and measuring range of the pulse oximeter cannot meet the requirements of clinical application. There are limitations in the theory of pulse oximetry, which are demonstrated here by the Monte Carlo method. The mean optical paths are calculated in the Monte Carlo simulation, and the results show that the mean paths are not the same at the two wavelengths.

  5. Monte Carlo Analysis of the Accelerator-Driven System at Kyoto University Research Reactor Institute

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2016-04-01

Full Text Available An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan), a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft–Walton type accelerator, which generates the external neutron source by deuterium–tritium reactions. In this paper, neutronic analyses of a series of experiments have been re-estimated by using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.

  6. Monte Carlo analysis of Gunn oscillations in narrow and wide band-gap asymmetric nanodiodes

    Science.gov (United States)

    González, T.; Iñiguez-de-la Torre, I.; Pardo, D.; Mateos, J.; Song, A. M.

    2009-11-01

By means of Monte Carlo simulations we show the feasibility of asymmetric nonlinear planar nanodiodes for the generation of Gunn oscillations. For channel lengths of about 1 μm, oscillation frequencies around 100 GHz are predicted in InGaAs diodes, and significantly higher frequencies, around 400 GHz, in the case of GaN structures. The DC-to-AC conversion efficiency is found to be higher than 1% for the fundamental and second harmonic frequencies in GaN diodes.

  7. MONTE CARLO ANALYSIS FOR PREDICTION OF NOISE FROM A CONSTRUCTION SITE

    Directory of Open Access Journals (Sweden)

    Zaiton Haron

    2009-06-01

Full Text Available The large number of operations involving noisy machinery associated with construction site activities results in considerable variation in the noise levels experienced at receiver locations. This paper suggests an approach to predicting the noise levels generated from a site by using a Monte Carlo approach. This approach enables the determination of details regarding the statistical uncertainties associated with noise level predictions or temporal distributions. This technique could provide the basis for a generalised prediction technique and a simple noise management tool.

  8. Monte Carlo analysis of a low power domino gate under parameter fluctuation

    Institute of Scientific and Technical Information of China (English)

    Wang Jinhui; Wu Wuchen; Gong Na; Hou Ligang; Peng Xiaohong; Gao Daming

    2009-01-01

Using the multiple-parameter Monte Carlo method, the effectiveness of the dual threshold voltage technique (DTV) in low power domino logic design is analyzed. Simulation results indicate that under significant temperature and process fluctuations, DTV is still highly effective in reducing the total leakage and active power consumption of domino gates, at the cost of some speed loss. Also, regarding power and delay characteristics, domino gates of different structures with DTV show different levels of robustness against temperature and process fluctuations.

  9. A Markov chain Monte Carlo method family in incomplete data analysis

    Directory of Open Access Journals (Sweden)

    Vasić Vladimir V.

    2003-01-01

Full Text Available A Markov chain Monte Carlo method family is a collection of techniques for pseudorandom draws out of a probability distribution function. In recent years, these techniques have been the subject of intensive interest of many statisticians. Roughly speaking, the essence of a Markov chain Monte Carlo method family is generating one or more values of a random variable Z, which is usually multidimensional. Let P(Z) = f(Z) denote the density function of a random variable Z, which we will refer to as the target distribution. Instead of sampling directly from the distribution f, we generate a sequence [Z(1), Z(2), ..., Z(t), ...], in which each value is, in a way, dependent upon the previous value and whose stationary distribution is the target distribution. For a sufficiently large value of t, Z(t) will be approximately a random draw from the distribution f. A Markov chain Monte Carlo method family is useful when direct sampling is difficult, but when sampling of each value in such a sequence is not.
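
    A minimal random-walk Metropolis sampler, one member of this family, makes the construction concrete: each Z(t+1) depends only on Z(t), and the chain's stationary distribution is the target f. The bimodal target and the proposal scale below are arbitrary choices for illustration.

```python
# Minimal random-walk Metropolis sketch: the chain's stationary distribution is f.
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    """Unnormalized target density: a two-component Gaussian mixture."""
    return np.exp(-0.5 * (z - 2.0) ** 2) + 0.7 * np.exp(-0.5 * (z + 2.0) ** 2)

def metropolis(n_draws, z0=0.0, step=1.0):
    chain = np.empty(n_draws)
    z = z0
    for t in range(n_draws):
        proposal = z + step * rng.normal()          # symmetric random-walk proposal
        if rng.uniform() < f(proposal) / f(z):      # accept with prob min(1, f(z')/f(z))
            z = proposal
        chain[t] = z
    return chain

chain = metropolis(50_000)
burned = chain[5_000:]                              # discard burn-in
print(f"target mean ~ {burned.mean():.2f}, std ~ {burned.std():.2f}")
```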

  10. Monte Carlo Ray Tracing Based Sensitivity Analysis of the Atmospheric and the Ocean Parameters on Top of the Atmosphere Radiance

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-01-01

Full Text Available Monte Carlo Ray Tracing (MCRT) based sensitivity analysis of the geophysical parameters (the atmosphere and the ocean) on Top of the Atmosphere (TOA) radiance in the visible to near-infrared wavelength regions is conducted. As a result, it is confirmed that the influence due to the atmosphere is greater than that of the ocean. Scattering and absorption due to aerosol particles and molecules in the atmosphere are the major contributions, followed by water vapor and ozone, while scattering due to suspended solids is the dominant contribution among the ocean parameters.

  11. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  12. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed...

  13. PFM Analysis for Pre-Existing Cracks on Alloy 182 Weld in PWR Primary Water Environment using Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)

    2015-10-15

Probabilistic Fracture Mechanics (PFM) analysis is generally used to consider the scatter and uncertainty of parameters in complex phenomena. Weld defects could be present in the weld regions of Pressurized Water Reactors (PWRs), which cannot be considered by typical fracture mechanics analysis. It is necessary to evaluate the effects of pre-existing cracks in welds on the integrity of the welds. In this paper, PFM analysis for pre-existing cracks on an Alloy 182 weld in a PWR primary water environment was carried out using a Monte Carlo simulation. It was shown that inspection decreases the gradient of the failure probability, and that the failure probability caused by the pre-existing cracks stabilized after 15 years of operation time for this input condition.

  14. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.

  15. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    Science.gov (United States)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of Monte Carlo sampling, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.

  16. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
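
    The general pattern described here, assigning a distribution to each uncertain input from its coefficient of variation, sampling, evaluating the ensemble output, and then perturbing one constituent at a time for sensitivity, can be sketched generically. The travel_time function below is a hypothetical stand-in for a NETPATH run, and the constituent names, means and coefficients of variation are invented for illustration.

```python
# Generic sketch of parametric Monte Carlo uncertainty plus one-at-a-time sensitivity.
import numpy as np

rng = np.random.default_rng(7)

# hypothetical stand-in for one inverse-model evaluation: maps input constituent
# concentrations to a carbon-14 travel time (years); NOT the actual NETPATH interface
def travel_time(inputs):
    return 5000.0 + 120.0 * inputs["DIC"] - 40.0 * inputs["Ca"] + 15.0 * inputs["Na"]

means = {"DIC": 4.0, "Ca": 1.5, "Na": 2.0}     # assumed mean concentrations (mmol/L)
cvs   = {"DIC": 0.10, "Ca": 0.15, "Na": 0.20}  # assumed coefficients of variation

def sample_inputs():
    # lognormal sampling keeps concentrations positive; sigma ~ CV is approximate
    return {k: rng.lognormal(np.log(means[k]), cvs[k]) for k in means}

# parametric uncertainty: evaluate the ensemble output
ensemble = np.array([travel_time(sample_inputs()) for _ in range(10_000)])
print(f"travel time: mean = {ensemble.mean():.0f} yr, 95% interval = "
      f"[{np.percentile(ensemble, 2.5):.0f}, {np.percentile(ensemble, 97.5):.0f}] yr")

# one-at-a-time sensitivity: +1% change in each input, standardized response
base = travel_time(means)
for k in means:
    bumped = dict(means, **{k: means[k] * 1.01})
    sensitivity = (travel_time(bumped) - base) / base / 0.01   # % output per % input
    print(f"sensitivity to {k}: {sensitivity:+.3f}")
```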

  17. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    Energy Technology Data Exchange (ETDEWEB)

    Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)

    1998-05-01

The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.

  18. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    Energy Technology Data Exchange (ETDEWEB)

    Krapp, Jr., Donald M. [Univ. of California, Berkeley, CA (United States)

    1998-05-01

The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at βcrit ~ 0.99, and to recalculate the known value of the critical exponent ν ~ 0.58 of the system for β = βcrit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of βcrit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.

  19. Random vibration analysis of switching apparatus based on Monte Carlo method

    Institute of Scientific and Technical Information of China (English)

    ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin

    2007-01-01

The performance of switching apparatus containing mechanical contacts in a vibration environment is an important element in judging the apparatus's reliability. A piecewise-linear two-degrees-of-freedom mathematical model considering contact loss was built in this work, and the vibration performance of the model under random external Gaussian white-noise excitation was investigated by using Monte Carlo simulation in Matlab/Simulink. The simulation showed that the spectral content and statistical characteristics of the contact force agreed well with reality. The random vibration behavior of the contact system was obtained using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.

  20. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.

    Science.gov (United States)

    Piels, Molly; Zibar, Darko

    2016-02-08

    The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation.

  1. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    CERN Document Server

    Churmakov, D Y; Piletsky, S A; Greenhalgh, D A

    2003-01-01

A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that would arise from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an effective depth.

  2. Coupled carrier-phonon nonequilibrium dynamics in terahertz quantum cascade lasers: a Monte Carlo analysis

    Science.gov (United States)

    Iotti, Rita C.; Rossi, Fausto

    2013-07-01

    The operation of state-of-the-art optoelectronic quantum devices may be significantly affected by the presence of a nonequilibrium quasiparticle population to which the carrier subsystem is unavoidably coupled. This situation is particularly evident in new-generation semiconductor-heterostructure-based quantum emitters, operating both in the mid-infrared as well as in the terahertz (THz) region of the electromagnetic spectrum. In this paper, we present a Monte Carlo-based global kinetic approach, suitable for the investigation of a combined carrier-phonon nonequilibrium dynamics in realistic devices, and discuss its application with a prototypical resonant-phonon THz emitting quantum cascade laser design.

  3. Monte Carlo analysis of electronic noise in semiconductors under sub-terahertz cyclostationary mixed fields

    Energy Technology Data Exchange (ETDEWEB)

    Capizzo, M.C.; Persano Adorno, D.; Zarcone, M. [Dipartimento di Fisica e Tecnologie Relative, Viale delle Scienze, Ed. 18, 90128, Palermo (Italy)

    2006-08-15

This paper reports the results of Monte Carlo simulations of electronic noise in a GaAs bulk driven by two mixed high-frequency large-amplitude periodic electric fields. Under these conditions, the system response shows some peculiarities in the noise performance, such as a resonant-like enhancement of the spectra near the two frequencies of the applied fields. The relations among the frequency response and the velocity fluctuations as a function of intensities and frequencies of the sub-terahertz mixed excitation fields have been investigated.

  4. A Monte Carlo analysis of health risks from PCB-contaminated mineral oil transformer fires.

    Science.gov (United States)

    Eschenroeder, A Q; Faeder, E J

    1988-06-01

    The objective of this study is the estimation of health hazards due to the inhalation of combustion products from accidental mineral oil transformer fires. Calculations of production, dispersion, and subsequent human intake of polychlorinated dibenzofurans (PCDFs) provide us with exposure estimates. PCDFs are believed to be the principal toxic products of the pyrolysis of polychlorinated biphenyls (PCBs) sometimes found as contaminants in transformer mineral oil. Cancer burdens and birth defect hazard indices are estimated from population data and exposure statistics. Monte Carlo-derived variational factors emphasize the statistics of uncertainty in the estimates of risk parameters. Community health issues are addressed and risks are found to be insignificant.

  5. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
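
    The essence of the approach, running a modest number of parameter sets with a modest number of patients each and then separating the between-run (parameter) variance from the within-run (patient-level) noise with one-way ANOVA, can be sketched as follows. The toy cost model, distributions and sample sizes are assumptions for illustration only.

```python
# Sketch: two-level Monte Carlo PSA with a one-way ANOVA variance decomposition.
import numpy as np

rng = np.random.default_rng(3)

def simulate_patient(params):
    """Hypothetical patient-level model: net cost for one simulated patient."""
    baseline, effect = params
    return baseline + effect * rng.normal() + rng.exponential(200.0)  # patient-level noise

N, n = 50, 500                                 # N parameter sets, n patients per set
run_means = np.empty(N)
within_vars = np.empty(N)
for i in range(N):
    params = (rng.normal(1000.0, 100.0),       # PSA draw of the uncertain model inputs
              rng.normal(50.0, 20.0))
    y = np.array([simulate_patient(params) for _ in range(n)])
    run_means[i] = y.mean()
    within_vars[i] = y.var(ddof=1)

grand_mean = run_means.mean()
msw = within_vars.mean()                       # mean square within (patient noise)
msb = n * run_means.var(ddof=1)                # mean square between parameter sets
sigma2_psa = max(0.0, (msb - msw) / n)         # variance component due to input uncertainty

print(f"mean cost = {grand_mean:.1f}")
print(f"PSA variance (between-parameter-set component) = {sigma2_psa:.1f}")
```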

  6. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    Science.gov (United States)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

We investigated how tumor-targeted nanoparticles influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered to be a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area can reach a critical level sufficient to burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which the photon trajectories within the simulation area can be probed accurately.

  7. A viable method for goodness-of-fit test in maximum likelihood fit

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; GAO Yuan-Ning; HUO Lei

    2011-01-01

    A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.

  8. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

Full Text Available This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.
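
    A minimal version of such a sampling study, restricted to the shape parameter with the scale fixed at 1 (an assumption; the paper treats the two-parameter case and its modified estimators), compares the bias and mean square error of the plain maximum likelihood and moment estimators:

```python
# Monte Carlo comparison of two estimators of the power function shape parameter.
import numpy as np

rng = np.random.default_rng(5)

# Power function distribution with scale beta = 1: F(x) = x**gamma on (0, 1)
def sample(gamma, n):
    return rng.uniform(size=n) ** (1.0 / gamma)   # inverse-CDF sampling

def mle(x):
    return -len(x) / np.log(x).sum()              # gamma_hat = -n / sum(log x_i)

def moment(x):
    m = x.mean()                                  # E[X] = gamma / (gamma + 1)
    return m / (1.0 - m)

gamma_true, n, reps = 2.0, 30, 20_000
for name, fn in {"MLE": mle, "moment": moment}.items():
    values = np.array([fn(sample(gamma_true, n)) for _ in range(reps)])
    bias = values.mean() - gamma_true
    mse = np.mean((values - gamma_true) ** 2)
    print(f"{name:>6}: bias = {bias:+.3f}, MSE = {mse:.3f}")
```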

  9. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  10. Cosmic bulk flows on 50 h^{-1} Mpc scales: A Bayesian hyper-parameter method and multi-shells likelihood analysis

    CERN Document Server

    Ma, Yin-Zhe

    2012-01-01

It has been argued recently that the galaxy peculiar velocity field provides evidence of excessive power on scales of 50 h^{-1} Mpc, which seems to be inconsistent with the standard $\Lambda$CDM cosmological model. We discuss several assumptions and conventions used in studies of the large-scale bulk flow to check whether this claim is robust under a variety of conditions. Rather than using a composite catalogue we select samples from the SN, ENEAR, SFI++ and A1SN catalogues, and correct for Malmquist bias in each according to the IRAS PSCz density field. We also use slightly different assumptions about the small-scale velocity dispersion and the parameterisation of the matter power spectrum when calculating the variance of the bulk flow. By combining the likelihood of individual catalogues using a Bayesian hyper-parameter method, we find that the joint likelihood of the amplitude parameter gives $\sigma_8 = 0.65^{+0.47}_{-0.35}$ ($\pm 1\sigma$), which is entirely consistent with the $\Lambda$CDM model. In addition, ...

  11. Monte-Carlo Simulator and Ancillary Response Generator of Suzaku XRT/XIS System for Spatially Extended Source Analysis

    CERN Document Server

    Ishisaki, Y; Fujimoto, R; Ozaki, M; Ebisawa, K; Takahashi, T; Ueda, Y; Ogasaka, Y; Ptak, A; Mukai, K; Hamaguchi, K; Hirayama, M; Kotani, T; Kubo, H; Shibata, R; Ebara, M; Furuzawa, A; Iizuka, R; Inoue, H; Mori, H; Okada, S; Yokoyama, Y; Matsumoto, H; Nakajima, H; Yamaguchi, H; Anabuki, N; Tawa, N; Nagai, M; Katsuda, S; Hayashida, K; Bamba, A; Miller, E D; Sato, K; Yamasaki, N Y

    2006-01-01

    We have developed a framework for the Monte-Carlo simulation of the X-Ray Telescopes (XRT) and the X-ray Imaging Spectrometers (XIS) onboard Suzaku, mainly for the scientific analysis of spatially and spectroscopically complex celestial sources. A photon-by-photon instrumental simulator is built on the ANL platform, which has been successfully used in ASCA data analysis. The simulator has a modular structure, in which the XRT simulation is based on a ray-tracing library, while the XIS simulation utilizes a spectral "Redistribution Matrix File" (RMF), generated separately by other tools. Instrumental characteristics and calibration results, e.g., XRT geometry, reflectivity, mutual alignments, thermal shield transmission, build-up of the contamination on the XIS optical blocking filters (OBF), are incorporated as completely as possible. Most of this information is available in the form of the FITS (Flexible Image Transport System) files in the standard calibration database (CALDB). This simulator can also be ut...

  12. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    Science.gov (United States)

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with the addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based medium. However, OPT sap and POME are heterogeneous in their properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine the probability distributions resulting from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. Bioethanol fermentation was found to depend strongly on the growth of the tested yeast.

  13. Lessons about likelihood functions from nuclear physics

    CERN Document Server

    Hanson, Kenneth M

    2007-01-01

    Least-squares data analysis is based on the assumption that the normal (Gaussian) distribution appropriately characterizes the likelihood, that is, the conditional probability of each measurement d, given a measured quantity y, p(d | y). On the other hand, there is ample evidence in nuclear physics of significant disagreements among measurements, which are inconsistent with the normal distribution, given their stated uncertainties. In this study the histories of 99 measurements of the lifetimes of five elementary particles are examined to determine what can be inferred about the distribution of their values relative to their stated uncertainties. Taken as a whole, the variations in the data are somewhat larger than their quoted uncertainties would indicate. These data strongly support using a Student t distribution for the likelihood function instead of a normal. The most probable value for the order of the t distribution is 2.6 +/- 0.9. It is shown that analyses based on long-tailed t-distribution likelihood...
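
    A compact illustration of why the choice of likelihood matters: fitting a location parameter to measurements that contain one discrepant value, once with a normal likelihood and once with a Student t likelihood of order 2.6. The data below are invented for the example and are not the particle-lifetime measurements analyzed in the paper.

```python
# Compare maximum-likelihood location estimates under normal and Student t likelihoods.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, t

# invented "measurements" with stated uncertainty 1.0; the last one is discrepant
d = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 14.5])
sigma = 1.0

def neg_loglike_normal(y):
    return -norm.logpdf(d, loc=y, scale=sigma).sum()

def neg_loglike_student(y, nu=2.6):
    return -t.logpdf((d - y) / sigma, df=nu).sum()

for name, nll in [("normal", neg_loglike_normal), ("Student t (nu=2.6)", neg_loglike_student)]:
    res = minimize_scalar(nll, bounds=(5.0, 20.0), method="bounded")
    print(f"{name:>20}: y_hat = {res.x:.2f}")
# the normal fit is pulled toward the outlier; the long-tailed t fit is barely affected
```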

  14. Comparative Monte Carlo analysis of InP- and GaN-based Gunn diodes

    Science.gov (United States)

    García, S.; Pérez, S.; Íñiguez-de-la-Torre, I.; Mateos, J.; González, T.

    2014-01-01

    In this work, we report on Monte Carlo simulations to study the capability to generate Gunn oscillations of diodes based on InP and GaN with around 1 μm active region length. We compare the power spectral density of current sequences in diodes with and without notch for different lengths and two doping profiles. It is found that InP structures provide 400 GHz current oscillations for the fundamental harmonic in structures without notch and around 140 GHz in notched diodes. On the other hand, GaN diodes can operate up to 300 GHz for the fundamental harmonic, and when the notch is effective, a larger number of harmonics, reaching the Terahertz range, with higher spectral purity than in InP diodes are generated. Therefore, GaN-based diodes offer a high power alternative for sub-millimeter wave Gunn oscillations.

  15. Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method

    Science.gov (United States)

    Boyd, Iain D.

    1991-01-01

    A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.

  16. Improving PWR core simulations by Monte Carlo uncertainty analysis and Bayesian inference

    CERN Document Server

    Castro, Emilio; Buss, Oliver; Garcia-Herranz, Nuria; Hoefer, Axel; Porsch, Dieter

    2016-01-01

    A Monte Carlo-based Bayesian inference model is applied to the prediction of reactor operation parameters of a PWR nuclear power plant. In this non-perturbative framework, high-dimensional covariance information describing the uncertainty of microscopic nuclear data is combined with measured reactor operation data in order to provide statistically sound, well founded uncertainty estimates of integral parameters, such as the boron letdown curve and the burnup-dependent reactor power distribution. The performance of this methodology is assessed in a blind test approach, where we use measurements of a given reactor cycle to improve the prediction of the subsequent cycle. As it turns out, the resulting improvement of the prediction quality is impressive. In particular, the prediction uncertainty of the boron letdown curve, which is of utmost importance for the planning of the reactor cycle length, can be reduced by one order of magnitude by including the boron concentration measurement information of the previous...

  17. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    Energy Technology Data Exchange (ETDEWEB)

O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be complicated, time-intensive processes. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively-parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation is running.

  18. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    Science.gov (United States)

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common, highly multi-typed sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
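
    A much-simplified sketch of the adaptive ingredient, Haario-style covariance adaptation of a random-walk proposal, is given below on a toy two-dimensional correlated Gaussian posterior standing in for the ODE-model posterior; the forward-projection and sexual mixing-matrix machinery of the paper is not reproduced.

```python
# Simplified adaptive Metropolis sketch: the proposal covariance is periodically
# re-estimated from the chain history (Haario-style adaptation).
import numpy as np

rng = np.random.default_rng(11)

# toy log-posterior: correlated 2-D Gaussian standing in for the ODE-model posterior
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
cov_inv = np.linalg.inv(cov)
def log_post(x):
    return -0.5 * x @ cov_inv @ x

def adaptive_metropolis(n_iter=20_000, adapt_start=1_000):
    d = 2
    x = np.zeros(d)
    lp = log_post(x)
    chain = np.empty((n_iter, d))
    prop_cov = 0.1 * np.eye(d)                       # initial proposal covariance
    for i in range(n_iter):
        if i >= adapt_start and i % 100 == 0:        # periodically adapt the proposal
            prop_cov = (2.38 ** 2 / d) * np.cov(chain[:i].T) + 1e-6 * np.eye(d)
        proposal = rng.multivariate_normal(x, prop_cov)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            x, lp = proposal, lp_prop
        chain[i] = x
    return chain

chain = adaptive_metropolis()
print("posterior covariance estimate:\n", np.cov(chain[5_000:].T).round(2))
```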

  19. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  20. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream

    Institute of Scientific and Technical Information of China (English)

    TIAN Bao-guo; SI Ji-tao; ZHAO Yan; WANG Hong-tao; HAO Ji-ming

    2007-01-01

This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes for MSW, this study established a Monte Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis, in combination with economic stream analysis of MSW, was developed. By following the streams of different treatment processes consisting of various techniques, from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte Carlo method was then applied for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, MSW separation treatment is recommended, with screening first, followed by partial incineration and partial composting, with residue landfilling. The likelihood of incineration being selected as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases.

  1. Mass flow rate sensitivity and uncertainty analysis in natural circulation boiling water reactor core from Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa-Paredes, Gilberto, E-mail: gepe@xanum.uam.m [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco, 186, Col. Vicentina, Mexico D.F., 09340 (Mexico); Verma, Surendra P. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico, Priv. Xochicalco s/no., Col Centro, Apartado Postal 34, Temixco 62580 (Mexico); Vazquez-Rodriguez, Alejandro [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco, 186, Col. Vicentina, Mexico D.F., 09340 (Mexico); Nunez-Carrera, Alejandro [Comision Nacional de Seguridad Nuclear y Salvaguardias, Doctor Barragan 779, Col. Narvarte, Mexico D.F. 03020 (Mexico)

    2010-05-15

    Our aim was to evaluate the sensitivity of natural circulation boiling water reactor (NCBWR) performance to the mass flow rate in the core, together with the associated uncertainty. The analysis was carried out through Monte Carlo simulations with sample sizes of up to 40,000, and a size of 25,000 repetitions was considered valid for routine applications. A simplified boiling water reactor (SBWR) was used as an application example of the Monte Carlo method. The numerical code simulating SBWR performance combines a one-dimensional thermo-hydraulics model with non-equilibrium thermodynamics, a non-homogeneous flow approximation, and one-dimensional fuel rod heat transfer. The neutron processes were simulated with a point reactor kinetics model with six groups of delayed neutrons. Sensitivity was evaluated in terms of 99% confidence intervals of the mean, to understand the range of mean values that may represent the entire statistical population of performance variables. Regression analysis with the mass flow rate as predictor showed statistically valid linear correlations for both neutron flux and fuel temperature and a quadratic relationship for the void fraction. No statistically valid correlation was observed between total heat flux and mass flow rate, although the heat flux at individual nodes was positively correlated with this variable. These correlations are useful for the study, analysis and design of any NCBWR. The uncertainties propagated as follows: for a 10% change in the core mass flow rate, the neutron power, total heat flux, average fuel temperature and average void fraction changed by 8.74%, 7.77%, 2.74% and 0.58%, respectively.
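
    As a rough illustration of this kind of sensitivity and uncertainty analysis, the sketch below runs a Monte Carlo ensemble through a placeholder response model, reports 99% confidence intervals of the mean, and regresses one response on the mass flow rate. The response function and the assumed 5% flow uncertainty are invented for the example and do not reproduce the SBWR code.

```python
import numpy as np
from scipy import stats

def reactor_response(mass_flow):
    # Placeholder for the one-dimensional NCBWR thermo-hydraulics model:
    # returns (neutron_flux, fuel_temperature, void_fraction) in arbitrary units.
    return 1.0 + 0.8 * mass_flow, 900.0 + 50.0 * mass_flow, 0.4 - 0.1 * mass_flow**2

rng = np.random.default_rng(0)
n = 25_000                                        # repetition size reported as adequate
flow = rng.normal(loc=1.0, scale=0.05, size=n)    # hypothetical +/-5% flow uncertainty
flux, fuel_t, void = np.vectorize(reactor_response)(flow)

# 99% confidence interval of the mean of each response variable
for name, y in [("neutron flux", flux), ("fuel temperature", fuel_t), ("void fraction", void)]:
    ci = stats.t.interval(0.99, df=n - 1, loc=y.mean(), scale=stats.sem(y))
    print(f"{name}: mean={y.mean():.4f}, 99% CI of mean={ci}")

# Regression of a response on the mass flow rate (linear here; the study reports
# a quadratic relationship for the void fraction)
slope, intercept, r, p, se = stats.linregress(flow, flux)
print(f"flux vs flow: slope={slope:.3f}, r^2={r**2:.3f}, p={p:.2e}")
```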

  2. Bayesian experimental design for models with intractable likelihoods.

    Science.gov (United States)

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
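
    A minimal sketch of the ABC rejection step that such a design methodology reuses is given below, applied to a toy Markov epidemic model summarized by its final size. The model, prior and tolerance choice are illustrative assumptions, not the paper's design utility or examples.

```python
import numpy as np

def abc_rejection(simulate, summary, observed, prior_sampler, n_sims=20_000, quantile=0.01, rng=None):
    """Basic ABC rejection: keep the parameter draws whose simulated summaries fall
    closest to the observed summary.  `simulate(theta, rng)` generates one data set
    from the (likelihood-intractable) model."""
    rng = np.random.default_rng() if rng is None else rng
    thetas = np.array([prior_sampler(rng) for _ in range(n_sims)])
    s_obs = summary(observed)
    dists = np.array([np.linalg.norm(summary(simulate(t, rng)) - s_obs) for t in thetas])
    eps = np.quantile(dists, quantile)        # tolerance chosen as a small quantile
    return thetas[dists <= eps]               # approximate posterior sample

# Toy example: a stochastic SIR-type epidemic summarized by its final size.
def simulate_sir(beta, rng, n_pop=100, gamma=0.5):
    s, i = n_pop - 1, 1
    while i > 0:
        rate = beta * s * i / n_pop + gamma * i
        if rng.uniform() < (beta * s * i / n_pop) / rate:
            s, i = s - 1, i + 1               # infection event
        else:
            i -= 1                            # recovery event
    return n_pop - s                          # final epidemic size

posterior = abc_rejection(simulate_sir,
                          summary=lambda x: np.atleast_1d(x),
                          observed=40,
                          prior_sampler=lambda r: r.uniform(0.1, 2.0))
print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))
```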

  3. Monte Carlo simulation applied to order economic analysis Simulação de Monte Carlo aplicada à análise econômica de pedido

    Directory of Open Access Journals (Sweden)

    Abraão Freires Saraiva Júnior

    2011-03-01

    The use of mathematical and statistical methods can help managers deal with decision-making difficulties in the business environment. Some of these decisions concern the optimization of productive capacity in order to obtain greater economic gains for the company. From this perspective, this study establishes metrics to support the economic decision of whether or not to process orders in a company whose products have great variability in unit variable direct costs, which generates accounting uncertainty. To achieve this objective, a five-step method is proposed, built from the integration of management accounting and operations research techniques with emphasis on Monte Carlo simulation. The method is applied to a didactic example using real data obtained through field research in a plastic products plant that employs recycled material. It is concluded that Monte Carlo simulation is effective for treating the variability of unit variable direct costs and that the proposed method is useful for supporting decisions on order acceptance.
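
    A minimal sketch of this kind of Monte Carlo order analysis follows; the price, quantity and triangular cost distribution are hypothetical values chosen only to illustrate how the probability of a profitable order can be estimated.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical order data (illustrative values only)
price_per_unit = 10.0          # negotiated sale price
order_quantity = 5_000         # units requested
fixed_order_costs = 8_000.0    # setup, freight, etc.

# Unit variable direct cost modelled as a triangular distribution to capture the
# variability caused by fluctuating recycled-material costs.
unit_cost = rng.triangular(left=6.0, mode=7.5, right=9.5, size=n_trials)

contribution = (price_per_unit - unit_cost) * order_quantity - fixed_order_costs

print("expected contribution:", contribution.mean())
print("5th/95th percentiles:", np.percentile(contribution, [5, 95]))
print("probability the order is profitable:", (contribution > 0).mean())
```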

  4. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods ... that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...

  5. Monte Carlo Analysis of the Lévy Stability and Multi-fractal Spectrum in e+e- Collisions

    Institute of Scientific and Technical Information of China (English)

    陈刚; 刘连寿

    2002-01-01

    The Lévy stability analysis is carried out for e+e- collisions at the Z0 mass using the Monte Carlo method. The Lévy index μ is found to be μ = 1.701 ± 0.043. The self-similar generalized dimensions D(q) and the multifractal spectrum f(α) are presented. The Rényi dimension D(q) decreases with increasing q. The self-similar multifractal spectrum is a convex curve with a maximum at q = 0, α = 1.169 ± 0.011. The right-hand side of the spectrum, corresponding to negative values of q, is obtained through analytical continuation.

  6. A Monte Carlo study comparing PIV, ULS and DWLS in the estimation of dichotomous confirmatory factor analysis.

    Science.gov (United States)

    Nestler, Steffen

    2013-02-01

    We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.

  7. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh

    2016-09-16

    Securing cyber systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches that model the actions of strategic decision-makers are increasingly being applied to cybersecurity resource allocation challenges. Such game-based models account for multiple player actions but mostly represent cyber attacker payoffs as point utility estimates. Since a cyber attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing a probabilistic uncertainty quantification framework for a notional cyber system through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
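
    The two-phase (nested) Monte Carlo idea can be sketched as an outer loop over imprecisely known (epistemic) parameters and an inner loop over stochastic (aleatory) variability, yielding a band of distribution functions (an empirical p-box). The notional payoff model and interval bounds below are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outer, n_inner = 200, 2_000          # epistemic / aleatory sample sizes

grid = np.linspace(0.0, 30.0, 121)
cdf_band = []
for _ in range(n_outer):
    # Outer (epistemic) phase: draw the imprecisely known payoff parameters
    # from their intervals, e.g. the mean and spread of an attacker payoff.
    mu = rng.uniform(5.0, 15.0)        # interval-valued mean payoff
    sigma = rng.uniform(1.0, 4.0)      # interval-valued payoff spread
    # Inner (aleatory) phase: propagate the stochastic variability.
    payoff = rng.normal(mu, sigma, size=n_inner)
    cdf_band.append([(payoff <= g).mean() for g in grid])

cdf_band = np.asarray(cdf_band)
lower, upper = cdf_band.min(axis=0), cdf_band.max(axis=0)   # empirical p-box
i10 = np.argmin(np.abs(grid - 10.0))
print(f"P(payoff <= 10) lies roughly in [{lower[i10]:.2f}, {upper[i10]:.2f}]")
```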

  8. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    Science.gov (United States)

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon

  9. Analysis of large solid propellant rocket engine exhaust plumes using the direct simulation Monte Carlo method

    Science.gov (United States)

    Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.

    1984-01-01

    A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft and the two-stream direct interaction are discussed. The results show that the low-density, high-velocity, counter-flowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be on the order of 10^22 m^-2 (about 1000 atomic layers).

  10. Monte Carlo analysis for finite-temperature magnetism of Nd2Fe14B permanent magnet

    Science.gov (United States)

    Toga, Yuta; Matsumoto, Munehisa; Miyashita, Seiji; Akai, Hisazumi; Doi, Shotaro; Miyake, Takashi; Sakuma, Akimasa

    2016-11-01

    We investigate the effects of magnetic inhomogeneities and thermal fluctuations on the magnetic properties of a rare-earth intermetallic compound, Nd2Fe14B. The constrained Monte Carlo method is applied to a Nd2Fe14B bulk system to realize the experimentally observed spin reorientation and magnetic anisotropy constants K_m^A (m = 1, 2, 4) at finite temperatures. Subsequently, it is found that the temperature dependence of K_1^A deviates from the Callen-Callen law, K_1^A(T) ∝ M(T)^3, even above room temperature, T_R ≈ 300 K, when the Fe (Nd) anisotropy terms are removed to leave only the Nd (Fe) anisotropy terms. This is because the exchange couplings between Nd moments and Fe spins are much smaller than those between Fe spins. It is also found that the exponent n in the external-magnetic-field (H_ext) response of the barrier height, F_B = F_B0 (1 - H_ext/H_0)^n, is less than 2 in the low-temperature region below T_R, whereas n approaches 2 when T > T_R, indicating the presence of Stoner-Wohlfarth-type magnetization rotation. This reflects the fact that the magnetic anisotropy is mainly governed by the K_1^A term in the T > T_R region.

  11. Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics.

    Science.gov (United States)

    Tewari, Shivendra G; Zhou, Yifan; Otto, Bradley J; Dash, Ranjan K; Kwok, Wai-Meng; Beard, Daniel A

    2014-01-01

    The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs), yet the nature and effect of these modifications are not understood. Herein, single-channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
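
    A three-state continuous-time Markov model of the kind fitted here can be explored with a few lines of code: the sketch below computes the stationary occupancy of a hypothetical generator matrix and simulates a dwell-time sequence. The rate values are invented for illustration and are not the fitted VDAC parameters.

```python
import numpy as np

# Hypothetical generator matrix Q (s^-1) for a three-state channel:
# states 0 = open, 1 = half-open, 2 = closed.  Rows sum to zero.
Q = np.array([[-30.0,  20.0,  10.0],
              [ 15.0, -40.0,  25.0],
              [  5.0,  10.0, -15.0]])

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary occupancy (open, half-open, closed):", np.round(pi, 3))

# Gillespie-style simulation of a single-channel dwell sequence.
rng = np.random.default_rng(7)
state, t, trace = 0, 0.0, []
while t < 1.0:                                   # one second of simulated gating
    rate_out = -Q[state, state]
    dwell = rng.exponential(1.0 / rate_out)      # exponential dwell in current state
    trace.append((t, state, dwell))
    probs = np.delete(Q[state], state) / rate_out
    state = np.delete(np.arange(3), state)[rng.choice(2, p=probs)]
    t += dwell
print("first few dwells:", trace[:3])
```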

  12. Variational Monte Carlo analysis of Bose-Einstein condensation in a two-dimensional trap

    Institute of Scientific and Technical Information of China (English)

    Zheng Rong-Jie; Jin Jing; Tang Yi

    2006-01-01

    The ground-state properties of a system with a small number of interacting bosons are investigated over a wide range of densities. The system is confined in a two-dimensional isotropic harmonic trap, and the interaction between bosons is treated as a hard-core potential. Using the variational Monte Carlo method, we diagonalize the one-body density matrix of the system to obtain the ground-state energy, the condensate wavefunction and the condensate fraction. We find that in the dilute limit the depletion of the central condensate in the 2D system is larger than in a 3D system for the same interaction strength; however, as the density increases, the depletion at the centre of the 2D trap becomes equal to or even lower than that at the centre of the 3D trap, in agreement with what is anticipated from the Thomas-Fermi approximation. In addition, in the 2D system the total condensate depletion remains larger than in a 3D system for the same scattering length.

  13. Analysis of probabilistic short run marginal cost using Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R. [Inst. Tecnologico de Morelia, Michoacan (Mexico). Dept. de Ing. Electrica y Electronica; Mota-Palomino, R. [Inst. Politecnico Nacional (Mexico). Escuela Superior de Ingenieria Mecanica y Electrica

    1999-11-01

    The structure of the electricity supply industry is undergoing dramatic changes to provide new service options. The main aim of this restructuring is to allow generating units the freedom to sell electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed to quantify the different costs associated with the new services offered by electrical utilities operating in a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products for elastic market segments on the basis of marginal costs; hence, spot pricing provides the economic structure for many of the new services. At the same time, pricing is influenced by uncertainties in the electric system state variables that define its operating point. In this paper, nodal probabilistic short-run marginal costs are calculated, treating the load, the production cost and the availability of generators as random variables. The effect of the electrical network is evaluated using linearized models. A thermal economic dispatch is used to simulate each operating condition generated by the Monte Carlo method on a small fictitious power system, in order to assess the effect of the random variables on energy trading. This is first carried out by introducing each random variable one by one, and finally by considering the random interaction of all of them.

  14. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
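
    The background-only versus background-plus-source comparison can be illustrated with a Poisson likelihood-ratio toy example, sketched below. This is not the Sherpa/CSC implementation; the synthetic image, Gaussian stand-in PSF and fit settings are assumptions made only for the sketch.

```python
import numpy as np
from scipy import stats, optimize

def neg_loglike(params, counts, psf, with_source):
    """Poisson negative log-likelihood over the pixels of a candidate source region."""
    if with_source:
        bkg, src = params
    else:
        bkg, src = params[0], 0.0
    model = bkg + src * psf                      # flat background + PSF-shaped source
    if np.any(model <= 0):
        return np.inf
    return -np.sum(stats.poisson.logpmf(counts, model))

rng = np.random.default_rng(3)
y, x = np.mgrid[-5:6, -5:6]
psf = np.exp(-0.5 * (x**2 + y**2) / 2.0)         # stand-in for the spatially varying PSF
psf /= psf.sum()
counts = rng.poisson(0.2 + 8.0 * psf)            # synthetic stack: faint source on background

fit0 = optimize.minimize(neg_loglike, x0=[0.3], args=(counts, psf, False), method="Nelder-Mead")
fit1 = optimize.minimize(neg_loglike, x0=[0.3, 5.0], args=(counts, psf, True), method="Nelder-Mead")

# Likelihood ratio between the background-only and background+source hypotheses.
print("2 * Delta(log L) =", round(2.0 * (fit0.fun - fit1.fun), 2))
```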

  15. Analysis of Reduction in Area in MIMO Receivers Using SQRD Method and Unitary Transformation with Maximum Likelihood Estimation (MLE) and Minimum Mean Square Error Estimation (MMSE) Techniques

    Directory of Open Access Journals (Sweden)

    Sabitha Gauni

    2014-03-01

    In the field of wireless communication there is a constant demand for reliability, improved range and speed. Many wireless networks, such as OFDM, CDMA2000 and WCDMA, address this demand when combined with multiple-input multiple-output (MIMO) technology. Due to the complexity of the signal processing involved, MIMO is highly expensive in terms of area consumption. In this paper, a MIMO receiver design is proposed to reduce the area consumed by the processing elements that carry out the complex signal processing. Specifically, a solution for area reduction in the MIMO maximum likelihood estimation (MLE) receiver using sorted QR decomposition (SQRD) and a unitary transformation method is analyzed. It provides a unified approach, reduces intersymbol interference (ISI) and offers better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) estimation is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations preserve the Hermitian nature of a matrix and the multiplication and addition relationships between operators, which helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded and the algorithm is well suited to fixed-point arithmetic.

  16. Analysis of the Sufficient Path Elimination Window for the Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary Convolutional Codes

    CERN Document Server

    Shieh, Shin-Lin; Han, Yunghsiang S

    2007-01-01

    A common problem in sequential-type decoding is that, at signal-to-noise ratios (SNRs) below the one corresponding to the cutoff rate, the average decoding complexity per information bit and the required stack size grow rapidly with the information length. To alleviate this problem in the maximum-likelihood sequential decoding algorithm (MLSDA), we propose to directly eliminate the top path whose end node lies Δ trellis levels prior to the farthest node among all nodes expanded thus far by the sequential search. Following a random coding argument, we analyze the early-elimination window Δ that results in negligible performance degradation for the MLSDA. Our analytical results indicate that the required early-elimination window for negligible performance degradation is just twice the constraint length for rate one-half convolutional codes. For rate one-third convolutional codes, the required early-elimination window reduces to the constraint length. The suggestive theore...

  17. Estimation for Non-Gaussian Locally Stationary Processes with Empirical Likelihood Method

    Directory of Open Access Journals (Sweden)

    Hiroaki Ogata

    2012-01-01

    An application of the empirical likelihood method to non-Gaussian locally stationary processes is presented. Based on the central limit theorem for locally stationary processes, we give the asymptotic distributions of the maximum empirical likelihood estimator and of the empirical likelihood ratio statistic. It is shown that the empirical likelihood method enables us to make inferences on various important indices in time series analysis. Furthermore, we present a numerical study and investigate finite-sample properties.

  18. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  19. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    Science.gov (United States)

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  20. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    Science.gov (United States)

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  1. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    Science.gov (United States)

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  2. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    Science.gov (United States)

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest.
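
    The REML-MVN idea, sampling replicate G matrices from the asymptotic multivariate normal distribution of the REML estimates and recomputing a statistic of interest from each draw, can be sketched as follows. The 2 x 2 G matrix, its sampling covariance and the evolvability summary are placeholders for illustration, not the Drosophila wing estimates.

```python
import numpy as np

rng = np.random.default_rng(11)

# Suppose REML returned an estimated G matrix and the covariance matrix of its
# unique (lower-triangle) elements, derived from the inverse information matrix.
G_hat = np.array([[2.0, 0.8],
                  [0.8, 1.5]])
idx = np.tril_indices(2)
cov_of_estimates = 0.01 * np.eye(len(idx[0]))    # hypothetical, for illustration

def unvech(v, dim=2):
    """Rebuild a symmetric matrix from its stacked lower-triangle elements."""
    m = np.zeros((dim, dim))
    m[np.tril_indices(dim)] = v
    return m + np.tril(m, -1).T

# REML-MVN-style resampling: draw replicate G matrices from the asymptotic
# multivariate normal distribution of the REML estimates.
draws = rng.multivariate_normal(G_hat[idx], cov_of_estimates, size=10_000)
evolvability = np.array([np.mean(np.diag(unvech(v))) for v in draws])   # crude summary

print("point estimate:", np.mean(np.diag(G_hat)))
print("sampling SD of the summary:", evolvability.std(ddof=1))
print("95% interval:", np.percentile(evolvability, [2.5, 97.5]))
```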

  3. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT; Analisis Monte Carlo del Laboratorio de Patrones Neutronicos del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Av. Complutense 40, 28040 Madrid (Spain); Guzman G, K. A., E-mail: fermineutron@yahoo.com [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2014-10-15

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, {sup 241}AmBe and {sup 252}Cf, which are stored in a water pool and placed on the calibration bench using remotely controlled positioning systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source models included the double steel encapsulation as cladding. To determine the effect produced by the presence of the different components of the room, the neutron spectra, the total flux and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the flux and the ambient dose equivalent rate. (Author)

  4. Phylogenetic estimation with partial likelihood tensors

    CERN Document Server

    Sumner, J G

    2008-01-01

    We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.

  5. Maximum Likelihood Factor Analysis of the Effects of Chronic Centrifugation on the Structural Development of the Musculoskeletal System of the Rat

    Science.gov (United States)

    Amtmann, E.; Kimura, T.; Oyama, J.; Doden, E.; Potulski, M.

    1979-01-01

    At the age of 30 days, female Sprague-Dawley rats were placed on a 3.66 m radius centrifuge and subsequently exposed almost continuously for 810 days to either 2.76 or 4.15 G. An age-matched control group of rats was raised near the centrifuge facility at Earth gravity. Three further control groups of rats were obtained from the animal colony and sacrificed at the ages of 34, 72 and 102 days. A total of 16 variables were simultaneously factor analyzed by a maximum-likelihood extraction routine, and the factor loadings are presented after rotation to simple structure by a varimax rotation routine. The variables include the G-load, age, body mass, femoral length and cross-sectional area, inner and outer radii, density and strength at the mid-length of the femur, and the dry weights of the gluteus medius, semimembranosus and triceps surae muscles. Factor analyses on A) all controls, B) all controls and the 2.76 G group, and C) all controls and centrifuged animals produced highly similar loading structures of three common factors, which accounted for 74%, 68% and 68%, respectively, of the total variance. The three factors were interpreted as: 1. An age and size factor which stimulates growth in length and diameter and increases the density and strength of the femur; this factor is positively correlated with G-load but is also active in the control animals living at Earth gravity. 2. A growth inhibition factor which acts on body size, femoral length and on both the outer and inner radius at the mid-length of the femur; this factor is intensified by centrifugation.

  6. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  7. Workshop on Likelihoods for the LHC Searches

    CERN Document Server

    2013-01-01

    The primary goal of this 3‐day workshop is to educate the LHC community about the scientific utility of likelihoods. We shall do so by describing and discussing several real‐world examples of the use of likelihoods, including a one‐day in‐depth examination of likelihoods in the Higgs boson studies by ATLAS and CMS.

  8. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence.

    Science.gov (United States)

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and transitional family studies. The "missing heritability" has been suggested to be due to lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficient detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates in ETMA, individual data analysis and conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html].

  10. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Science.gov (United States)

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve.
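
    For readers working outside Excel, an equivalent Monte Carlo procedure is sketched below in Python: fit the model, generate 'virtual' data sets from the fitted curve plus resampled noise, refit each one, and read confidence intervals and parameter correlations off the resulting draws. The growth function, its parameters and the noise level are illustrative assumptions, not the data sets analyzed in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(t, a, b, c):
    """Illustrative three-parameter logistic-type growth curve."""
    return a / (1.0 + np.exp(-(t - b) / c))

rng = np.random.default_rng(2)
t = np.linspace(0, 24, 25)
y = growth(t, 9.0, 10.0, 2.0) + rng.normal(0, 0.2, t.size)   # synthetic "experimental" data

p_fit, _ = curve_fit(growth, t, y, p0=(8.0, 8.0, 1.0))
resid_sd = np.std(y - growth(t, *p_fit), ddof=len(p_fit))

# Monte Carlo step: refit many 'virtual' data sets built from the fitted curve
# plus resampled noise, then read confidence intervals off the parameter draws.
boot = []
for _ in range(1000):
    y_virtual = growth(t, *p_fit) + rng.normal(0, resid_sd, t.size)
    p_b, _ = curve_fit(growth, t, y_virtual, p0=p_fit)
    boot.append(p_b)
boot = np.asarray(boot)

for i, name in enumerate("abc"):
    lo, hi = np.percentile(boot[:, i], [2.5, 97.5])
    print(f"{name}: estimate={p_fit[i]:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
print("parameter correlation matrix:\n", np.corrcoef(boot, rowvar=False))
```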

  11. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
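
    The structure of a tapered (distance-weighted) pairwise likelihood can be sketched as follows. For simplicity, a bivariate Gaussian density with exponentially decaying correlation stands in for the bivariate max-stable density, and the taper weight is a simple indicator on the pair distance; both are assumptions made for illustration rather than the models or weighting schemes used in the paper.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize
from itertools import combinations

def pairwise_loglik(theta, data, coords, taper_range):
    """Tapered pairwise log-likelihood: each pair of sites contributes its bivariate
    log-density, with weight zero for pairs farther apart than the taper range."""
    sill, scale = np.exp(theta)                  # positivity via log-parameters
    total = 0.0
    for i, j in combinations(range(coords.shape[0]), 2):
        h = np.linalg.norm(coords[i] - coords[j])
        if h > taper_range:                      # taper weight w_ij = 0
            continue
        rho = np.exp(-h / scale)
        cov = sill * np.array([[1.0, rho], [rho, 1.0]])
        total += stats.multivariate_normal.logpdf(data[:, [i, j]], mean=[0, 0], cov=cov).sum()
    return total

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(15, 2))        # 15 hypothetical rain gauges
true_cov = np.exp(-np.linalg.norm(coords[:, None] - coords[None, :], axis=-1) / 3.0)
data = rng.multivariate_normal(np.zeros(15), true_cov, size=200)

fit = minimize(lambda th: -pairwise_loglik(th, data, coords, taper_range=4.0),
               x0=np.log([0.5, 1.0]), method="Nelder-Mead")
print("estimated (sill, range):", np.exp(fit.x))
```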

  12. Spray cooling simulation implementing time scale analysis and the Monte Carlo method

    Science.gov (United States)

    Kreitzer, Paul Joseph

    Spray cooling research is advancing the field of heat transfer and heat rejection in high power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat, along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm^2 using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher has used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method. However, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are indicated as follows: flow rate of 1.05 x 10^-5 m^3/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 μm, and heat flux values ranging from approximately 50-100 W/cm^2. This produces non-dimensional numbers of We = 300-1350, Re = 750-3500, Oh = 0.01-0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution. Hence the impingement, lifetime

  13. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D [Case Western Reserve Univ., Cleveland, OH (United States)

    2004-05-01

    of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  15. The Dynamic Monte Carlo Method for Transient Analysis of Nuclear Reactors

    NARCIS (Netherlands)

    Sjenitzer, B.L.

    2013-01-01

    In this thesis a new method for the analysis of power transients in a nuclear reactor is developed, which is more accurate than the present state-of-the-art methods. Transient analysis is an important tool when designing nuclear reactors, since it predicts the behaviour of a reactor during changing conditions.

  16. Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiang-Yang [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Andersson, Anders David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    In this short report, we give an analysis of the thermal conductivity of UO2 extrapolated from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties, e.g. fission gas release, are functions of temperature, the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such an analysis.

  17. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    Science.gov (United States)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data (2) the certainty with which regression coefficients are known and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most

  18. Risk Assessment and Prediction of Flyrock Distance by Combined Multiple Regression Analysis and Monte Carlo Simulation of Quarry Blasting

    Science.gov (United States)

    Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh

    2016-09-01

    Flyrock is considered as one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, it seems that the proper prediction/simulation of flyrock is essential, especially in order to determine blast safety area. If proper control measures are taken, then the flyrock distance can be controlled, and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses, and after that, using the developed MR model, flyrock phenomenon was simulated by the Monte Carlo (MC) approach. In order to achieve objectives of this study, 62 blasting operations were investigated in Ulu Tiram quarry, Malaysia, and some controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of simulated flyrock by MC was obtained as 236.3 m, while this value was achieved as 238.6 m for the measured one. Furthermore, a sensitivity analysis was also conducted to investigate the effects of model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on fly rock among all model inputs. It is noticeable that the proposed MR and MC models should be utilized only in the studied area and the direct use of them in the other conditions is not recommended.
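
    The Monte Carlo step can be sketched as drawing the blast-design inputs from assumed distributions and pushing them through a regression model of flyrock distance. The input distributions and regression coefficients below are invented placeholders, not the values fitted from the Ulu Tiram data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 50_000

# Hypothetical blast-design input distributions (illustrative, not the quarry data)
burden = rng.normal(3.0, 0.3, n)            # m
spacing = rng.normal(3.5, 0.3, n)           # m
stemming = rng.normal(2.5, 0.4, n)          # m
powder_factor = rng.normal(0.55, 0.08, n)   # kg/m^3

# Placeholder multiple-regression model of flyrock distance; these coefficients
# are invented for illustration and are NOT the ones fitted in the study.
flyrock = (40.0 + 120.0 * powder_factor - 8.0 * stemming
           + 15.0 * spacing / burden + rng.normal(0.0, 10.0, n))

print("simulated mean flyrock distance:", round(flyrock.mean(), 1), "m")
print("95th percentile (for blast-safety planning):", round(np.percentile(flyrock, 95), 1), "m")

# Simple sensitivity measure: rank correlation of each input with the output.
for name, x in [("powder factor", powder_factor), ("stemming", stemming),
                ("spacing", spacing), ("burden", burden)]:
    rho, _ = stats.spearmanr(x, flyrock)
    print(name, "Spearman rho =", round(rho, 2))
```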

  19. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    Science.gov (United States)

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
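
    The rigorous nonlinear-regression alternative discussed here amounts to fitting the density scaling law and the law of rectilinear diameters simultaneously, weighted by estimated standard deviations. The sketch below does this with invented coexistence data, a simple constant error model and a fixed Ising exponent; it illustrates the approach rather than reproducing the paper's error model or results.

```python
import numpy as np
from scipy.optimize import least_squares

BETA = 0.326   # Ising critical exponent commonly assumed in this extrapolation

def residuals(params, T, rho_liq, rho_vap, sigma):
    """Residuals of the scaling law and the law of rectilinear diameters,
    fitted simultaneously and weighted by the estimated standard deviations."""
    Tc, rho_c, A, B = params
    r1 = rho_liq - rho_vap - B * np.clip(1.0 - T / Tc, 0.0, None) ** BETA
    r2 = 0.5 * (rho_liq + rho_vap) - rho_c - A * (Tc - T)
    return np.concatenate([r1, r2]) / np.concatenate([sigma, sigma])

# Illustrative GEMC-style coexistence data (T in K, densities in g/cm^3)
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
rho_liq = np.array([0.60, 0.57, 0.53, 0.48, 0.41])
rho_vap = np.array([0.02, 0.03, 0.05, 0.08, 0.13])
sigma = np.full_like(T, 0.01)            # stand-in for the error-model standard deviations

fit = least_squares(residuals, x0=[400.0, 0.28, 5e-4, 1.0],
                    args=(T, rho_liq, rho_vap, sigma))
Tc, rho_c, A, B = fit.x
print(f"Tc = {Tc:.1f} K, rho_c = {rho_c:.3f} g/cm^3")
```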

  20. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    Science.gov (United States)

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis.

  1. In Depth Proteome Analysis of Ripening Muscadine Grape Berry cv. Carlos Reveals Proteins Associated with Flavor and Aroma Compounds.

    Science.gov (United States)

    Kambiranda, Devaiah; Basha, Sheikh M; Singh, Rakesh K; He, Huan; Calvin, Kate; Mercer, Roger

    2016-09-02

    Ripening in nonclimacteric fruits such as grape involves complex chemical changes that have a profound influence on the accumulation of flavor and aroma compounds distinct to a particular grape genotype. In this study, proteome characterization of wine type bronze muscadine grape (Vitis rotundifolia cv. Carlos), primarily grown in the Southeastern United States was performed during berry ripening. Stage-specific protein expression was obtained among different stages of berries. Differential analysis showed the expression of 522 proteins that regulate diverse biological processes and metabolic pathways. Of these, 30 proteins are associated with the production of key phenolic compounds, whereas 25 are associated with the production of muscadine aroma compounds. These proteins are involved in the phenylpropanoid pathway, terpene synthesis, fatty acid derived volatiles and esters that affect muscadine berry flavor and aroma characteristics. Further, gene expression analysis during ripening validated the expression pattern of 12 proteins. Catechin, epicatechin, and four stilbenes were quantified to correlate observed proteome changes. This study not only revealed biochemical changes during muscadine berry ripening but also offers indicators for marker-assisted breeding to enhance organoleptic properties of muscadine grape to improve its flavor and aroma properties.

  2. Microscopic structure and interaction analysis for supercritical carbon dioxide-ethanol mixtures: a Monte Carlo simulation study.

    Science.gov (United States)

    Xu, Wenhao; Yang, Jichu; Hu, Yinyu

    2009-04-01

    Configurational-bias Monte Carlo simulations in the isobaric-isothermal ensemble using the TraPPE-UA force field were performed to study the microscopic structures and molecular interactions of mixtures containing supercritical carbon dioxide (scCO2) and ethanol (EtOH). The binary vapor-liquid coexistence curves were calculated at 298.17, 333.2, and 353.2 K and are in excellent agreement with experimental results. For the first time, three important interactions, i.e., EtOH-EtOH hydrogen bonding, EtOH-CO2 hydrogen bonding, and EtOH-CO2 electron donor-acceptor (EDA) bonding, in the mixtures were fully analyzed and compared. The effect of EtOH mole fraction, temperature, and pressure on the three interactions was investigated and then explained by the competition of interactions between EtOH and CO2 molecules. Analysis of the microscopic structures indicates a strong preference for the formation of EtOH-CO2 hydrogen-bonded tetramers and pentamers at higher EtOH compositions. The distribution of aggregation sizes and types shows that a very large EtOH-EtOH hydrogen-bonded network exists in the mixtures, while only linear EtOH-CO2 hydrogen-bonded and EDA-bonded dimers and trimers are present. Further analysis shows that the EtOH-CO2 EDA complex is more stable than the hydrogen-bonded one.

  3. Multiplicative earthquake likelihood models incorporating strain rates

    Science.gov (United States)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.

  4. LIKEDM: Likelihood calculator of dark matter detection

    Science.gov (United States)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  5. Modelling of dissolved oxygen in the Danube River using artificial neural networks and Monte Carlo Simulation uncertainty analysis

    Science.gov (United States)

    Antanasijević, Davor; Pocajt, Viktor; Perić-Grujić, Aleksandra; Ristić, Mirjana

    2014-11-01

    This paper describes the training, validation, testing and uncertainty analysis of general regression neural network (GRNN) models for the forecasting of dissolved oxygen (DO) in the Danube River. The main objectives of this work were to determine the optimum data normalization and input selection techniques, to determine the relative importance of uncertainty in different input variables, and to perform an uncertainty analysis of the model results using the Monte Carlo Simulation (MCS) technique. Min-max, median, z-score, sigmoid and tanh were validated as normalization techniques, whilst the variance inflation factor, correlation analysis and genetic algorithm were tested as input selection techniques. As inputs, the GRNN models used 19 water quality variables, measured in the river water each month at 17 different sites over a period of 9 years. The best results were obtained using min-max normalized data and input selection based on the correlation between DO and the dependent variables, which provided the most accurate GRNN model with the smallest number of inputs: Temperature, pH, HCO3-, SO42-, NO3-N, Hardness, Na, Cl-, Conductivity and Alkalinity. The results show that the correlation coefficient between measured and predicted DO values is 0.85. The inputs with the greatest effect on the GRNN model (arranged in descending order) were T, pH, HCO3-, SO42- and NO3-N. Of all inputs, the variability of temperature had the greatest influence on the variability of DO content in the river body, with DO decreasing at a rate similar to the theoretical DO decrease relating to temperature. The uncertainty analysis of the model results demonstrates that the GRNN can effectively forecast the DO content, since the distribution of model results is very similar to the corresponding distribution of real data.

  6. Appraisal of Airport Alternatives in Greenland by the use of Risk Analysis and Monte Carlo Simulation

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents an appraisal study of three different airport proposals in Greenland by the use of an adapted version of the Danish CBA-DK model. The assessment model is based on both a deterministic calculation by the use of conventional cost-benefit analysis and a stochastic calculation...

  7. Evaluation of CANDU6 PCR (power coefficient of reactivity) with a 3-D whole-core Monte Carlo Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, Mohammad Abdul; Kim, Woosong; Kim, Yonghee, E-mail: yongheekim@kaist.ac.kr

    2015-12-15

    Highlights: • The PCR of the CANDU6 reactor is slightly negative at low power, e.g. <80% P. • Doppler broadening of scattering resonances noticeably improves the FTC and makes the PCR more negative or less positive in CANDU6. • An elevated inlet coolant condition can significantly worsen the PCR of CANDU6. • Improved design tools are needed for the safety evaluation of the CANDU6 reactor. - Abstract: The power coefficient of reactivity (PCR) is a very important parameter for the inherent safety and stability of nuclear reactors. The combined effect of a relatively less negative fuel temperature coefficient and a positive coolant temperature coefficient makes the CANDU6 (CANada Deuterium Uranium) PCR very close to zero. In the original CANDU6 design, the PCR was calculated to be clearly negative. However, the latest physics design tools predict that the PCR is slightly positive for a wide operational range of reactor power. It is upon this contradictory observation that the CANDU6 PCR is re-evaluated in this work. In our previous study, the CANDU6 PCR was evaluated through a standard lattice analysis at mid-burnup and was found to be negative at low power. In this paper, the study was extended to a detailed 3-D CANDU6 whole-core model using the Monte Carlo code Serpent2. The Doppler broadening rejection correction (DBRC) method was implemented in the Serpent2 code in order to take into account the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. A time-average equilibrium core was considered for the evaluation of the representative PCR of CANDU6. Two thermal hydraulic models were considered in this work: one at the design condition and the other at the operating condition. Bundle-wise distributions of the coolant properties are modeled and the bundle-wise fuel temperature is also considered in this study. The evaluated nuclear data library ENDF/B-VII.0 was used throughout this Serpent2 evaluation. In these Monte Carlo calculations, a large number

  8. Monte Carlo simulation of x-ray fluorescence analysis of gold in kidney using 99mTc radiopharmaceutical

    Science.gov (United States)

    Mahdavi, Naser; Shamsaei, Mojtaba; Shafaei, Mostafa; Rabiei, Ali

    2013-10-01

    The objective of this study was to design a system in order to analyze gold and other heavy elements in internal organs using in vivo x-ray fluorescence (XRF) analysis. Monte Carlo N Particle code MCNP was used to simulate phantoms and sources. A source of 99mTc was simulated in kidney to excite the gold x-rays. Changes in K XRF response due to variations in tissue thickness overlying the kidney at the measurement site were investigated. Different simulations having tissue thicknesses of 20, 30, 40, 50 and 60 mm were performed. Kα1 and Kα2 for all depths were measured. The linearity of the XRF system was also studied by increasing the gold concentration in the kidney phantom from 0 to 500 µg g-1 kidney tissue. The results show that gold concentration between 3 and 10 µg g-1 kidney tissue can be detected for distance between the skin and the kidney surface of 20-60 mm. The study also made a comparison between the skin doses for the source outside and inside the phantom.

  9. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    Energy Technology Data Exchange (ETDEWEB)

    Olivero, P. [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy); Ruder Boskovic Institute, Bijenicka 54, P.O. Box 180, 10002 Zagreb (Croatia); Forneris, J.; Gamarra, P. [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy); Jaksic, M. [Ruder Boskovic Institute, Bijenicka 54, P.O. Box 180, 10002 Zagreb (Croatia); Lo Giudice, A.; Manfredotti, C. [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy); Pastuovic, Z.; Skukan, N. [Ruder Boskovic Institute, Bijenicka 54, P.O. Box 180, 10002 Zagreb (Croatia); Vittone, E., E-mail: ettore.vittone@unito.it [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy)

    2011-10-15

    The transport properties of a 4H-SiC Schottky diode have been investigated by the ion beam induced charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam and the charge pulses have been recorded as function of incident proton position with a spatial resolution of 2 {mu}m. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region, the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn's weighting potential by solving the electrostatic problem by finite element method and hence evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combination of these two computational methods enabled an exhaustive interpretation of the experimental profiles and allowed an accurate evaluation both of the electrical characteristics of the active region (e.g. electric field profiles) and of basic transport parameters (i.e. diffusion length and minority carrier lifetime).

  10. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    Science.gov (United States)

    Olivero, P.; Forneris, J.; Gamarra, P.; Jakšić, M.; Giudice, A. Lo; Manfredotti, C.; Pastuović, Ž.; Skukan, N.; Vittone, E.

    2011-10-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the ion beam induced charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam and the charge pulses have been recorded as function of incident proton position with a spatial resolution of 2 μm. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region, the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn's weighting potential by solving the electrostatic problem by finite element method and hence evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combination of these two computational methods enabled an exhaustive interpretation of the experimental profiles and allowed an accurate evaluation both of the electrical characteristics of the active region (e.g. electric field profiles) and of basic transport parameters (i.e. diffusion length and minority carrier lifetime).

  11. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners.

    Science.gov (United States)

    Sweeney, Lisa M; Parker, Ann; Haber, Lynne T; Tran, C Lang; Kuempel, Eileen D

    2013-06-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model.

  12. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    CERN Document Server

    Olivero, P; Gamarra, P; Jaksic, M; Giudice, A Lo; Manfredotti, C; Pastuovic, Z; Skukan, N; Vittone, E

    2016-01-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the Ion Beam Induced Charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam and the charge pulses have been recorded as function of incident proton position with a spatial resolution of 2 um. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region, the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn's weighting potential by solving the electrostatic problem by finite element method and hence evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combina...

  13. Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures

    Science.gov (United States)

    Consiglio, Maria C.; Sturdy, James L.

    2006-01-01

    This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation Systems Higher Volume Operations (SATS HVO) concept and include an analysis of the predicted airport capacity with different traffic conditions and system constraints under increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing capacity of the overall air transportation system.

  14. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
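
    A minimal sketch of such a test is given below, assuming two groups of residuals and a Laplace model with a common versus group-specific scale; the grouping, the chi-square(1) reference distribution and the toy data are illustrative assumptions, not the article's exact procedure.

```python
# Likelihood ratio test for constant scale across two groups of residuals under
# a Laplace model: H0 shares one scale, H1 gives each group its own scale, and
# 2*log(LR) is compared with a chi-square(1) distribution. Illustrative sketch.
import numpy as np
from scipy.stats import chi2

def laplace_loglik(x, loc, scale):
    return -x.size * np.log(2.0 * scale) - np.abs(x - loc).sum() / scale

def laplace_lr_test(x1, x2):
    loc1, loc2 = np.median(x1), np.median(x2)                    # Laplace MLE of location
    pooled = np.concatenate([x1 - loc1, x2 - loc2])
    b0 = np.abs(pooled).mean()                                   # common-scale MLE (H0)
    b1, b2 = np.abs(x1 - loc1).mean(), np.abs(x2 - loc2).mean()  # group-scale MLEs (H1)
    ll0 = laplace_loglik(x1, loc1, b0) + laplace_loglik(x2, loc2, b0)
    ll1 = laplace_loglik(x1, loc1, b1) + laplace_loglik(x2, loc2, b2)
    stat = 2.0 * (ll1 - ll0)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(1)
x1 = rng.standard_t(df=4, size=200)          # fat-tailed residuals, group 1
x2 = 2.0 * rng.standard_t(df=4, size=200)    # group 2 has a larger spread
print(laplace_lr_test(x1, x2))
```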

  15. Composite analysis with Monte Carlo methods: an example with cosmic rays and clouds

    CERN Document Server

    Laken, Benjamin A

    2013-01-01

    The composite (superposed epoch) analysis technique has been frequently employed to examine a hypothesized link between solar activity and the Earth's atmosphere, often through an investigation of Forbush decrease (Fd) events (sudden high-magnitude decreases in the flux of cosmic rays impinging on the upper atmosphere, lasting up to several days). This technique is useful for isolating low-amplitude signals within data where background variability would otherwise obscure detection. The application of composite analyses to investigate the possible impacts of Fd events involves a statistical examination of time-dependent atmospheric responses to Fds, often from aerosol and/or cloud datasets. Despite the publication of numerous results within this field, clear conclusions have yet to be drawn and much ambiguity and disagreement still remain. In this paper, we argue that the conflicting findings of composite studies within this field relate to methodological differences in the manner in which the composites have been ...

  16. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    Science.gov (United States)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model

  17. Parallel tempering Monte Carlo combined with clustering Euclidean metric analysis to study the thermodynamic stability of Lennard-Jones nanoclusters

    Science.gov (United States)

    Cezar, Henrique M.; Rondina, Gustavo G.; Da Silva, Juarez L. F.

    2017-02-01

    A basic requirement for an atom-level understanding of nanoclusters is the knowledge of their atomic structure. This understanding is incomplete if it does not take into account temperature effects, which play a crucial role in phase transitions and changes in the overall stability of the particles. Finite size particles present intricate potential energy surfaces, and rigorous descriptions of temperature effects are best achieved by exploiting extended ensemble algorithms, such as the Parallel Tempering Monte Carlo (PTMC). In this study, we employed the PTMC algorithm, implemented from scratch, to sample configurations of LJn (n = 38, 55, 98, 147) particles at a wide range of temperatures. The heat capacities and phase transitions obtained with our PTMC implementation are consistent with all the expected features for the LJ nanoclusters, e.g., solid to solid and solid to liquid. To identify the known phase transitions and assess the prevalence of various structural motifs available at different temperatures, we propose a combination of a Leader-like clustering algorithm based on a Euclidean metric with the PTMC sampling. This combined approach is further compared with the more computationally demanding bond order analysis, typically employed for this kind of problem. We show that the clustering technique yields the same results in most cases, with the advantage that it requires no previous knowledge of the parameters defining each geometry. Being simple to implement, we believe that this straightforward clustering approach is a valuable data analysis tool that can provide insights into the physics of finite size particles with a few to thousands of atoms at a relatively low cost.
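
    The sketch below illustrates a Leader-like clustering pass over sampled configurations with a Euclidean metric, in the spirit described above; the descriptor (sorted pair distances) and the distance threshold are assumptions made for the illustration rather than the authors' choices.

```python
# Leader-like clustering of sampled cluster configurations using a Euclidean
# metric between simple descriptors (sorted pair distances, a crude
# rotation/permutation-invariant choice made for this sketch).
import numpy as np

def descriptor(coords):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(coords), k=1)])

def leader_clustering(descriptors, threshold):
    leaders, labels = [], []
    for x in descriptors:
        for i, lead in enumerate(leaders):
            if np.linalg.norm(x - lead) < threshold:   # Euclidean metric
                labels.append(i)
                break
        else:
            leaders.append(x)                          # x becomes a new leader
            labels.append(len(leaders) - 1)
    return np.array(labels)

rng = np.random.default_rng(2)
configs = [rng.normal(scale=1.0, size=(13, 3)) for _ in range(50)]   # mock 13-atom samples
labels = leader_clustering([descriptor(c) for c in configs], threshold=2.0)
print("number of clusters found:", labels.max() + 1)
```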

  18. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time-consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their results at different sample sizes with the IDEAL (conventional) values. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS for small signal stability application produce the same results as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of the random variables generated using the LHS method are enough to produce reasonable results for practical purposes in small signal stability application. LHS also has the smallest variance when the experiment is repeated 100 times, which signifies its robustness over SRS. An LHS sample size of 100 produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
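
    The toy comparison below, using SciPy's qmc.LatinHypercube sampler, shows the variance-reduction argument on a made-up two-input response: repeated LHS estimates of a mean scatter less than repeated SRS estimates. It is not the eigenvalue-based stability study itself.

```python
# SRS vs LHS for a toy Monte Carlo estimate: the mean of a smooth function of
# two uniform inputs, repeated many times to compare the spread of the estimates.
import numpy as np
from scipy.stats import qmc

def model(u):                      # stand-in for an expensive system response
    return np.sin(2 * np.pi * u[:, 0]) + u[:, 1] ** 2

rng = np.random.default_rng(3)
n, reps = 100, 100
srs = [model(rng.random((n, 2))).mean() for _ in range(reps)]                 # simple random sampling
lhs = [model(qmc.LatinHypercube(d=2, seed=r).random(n)).mean() for r in range(reps)]  # Latin hypercube
print("SRS std of the estimate:", np.std(srs))
print("LHS std of the estimate:", np.std(lhs))
```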

  19. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  20. Composite analysis with Monte Carlo methods: an example with cosmic rays and clouds

    Directory of Open Access Journals (Sweden)

    Laken B.A.

    2013-09-01

    Full Text Available The composite (superposed epoch) analysis technique has been frequently employed to examine a hypothesized link between solar activity and the Earth’s atmosphere, often through an investigation of Forbush decrease (Fd) events (sudden high-magnitude decreases in the flux of cosmic rays impinging on the upper atmosphere, lasting up to several days). This technique is useful for isolating low-amplitude signals within data where background variability would otherwise obscure detection. The application of composite analyses to investigate the possible impacts of Fd events involves a statistical examination of time-dependent atmospheric responses to Fds, often from aerosol and/or cloud datasets. Despite the publication of numerous results within this field, clear conclusions have yet to be drawn and much ambiguity and disagreement still remain. In this paper, we argue that the conflicting findings of composite studies within this field relate to methodological differences in the manner in which the composites have been constructed and analyzed. Working from an example, we show how a composite may be objectively constructed to maximize signal detection, robustly identify statistical significance, and quantify the lower-limit uncertainty related to hypothesis testing. Additionally, we also demonstrate how a seemingly significant false positive may be obtained from non-significant data by minor alterations to methodological approaches.
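
    A hedged sketch of the Monte Carlo approach to significance described above: the composite around synthetic "event" epochs is compared with an envelope built from composites of randomly drawn epochs. The time series, event response and window are invented.

```python
# Composite (superposed epoch) analysis with a Monte Carlo significance envelope
# built from composites of randomly chosen epochs. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(4)
series = rng.normal(size=5000)                    # noisy daily "cloud anomaly"
events = rng.choice(np.arange(100, 4900), size=40, replace=False)
for t in events:
    series[t:t + 5] -= 0.3                        # small imposed response after each event

window = np.arange(-10, 21)                       # days relative to the event

def composite(epochs):
    return series[epochs[:, None] + window].mean(axis=0)

obs = composite(events)
null = np.array([composite(rng.choice(np.arange(100, 4900), size=40, replace=False))
                 for _ in range(2000)])           # Monte Carlo composites of random epochs
lo, hi = np.percentile(null, [2.5, 97.5], axis=0)
signif = (obs < lo) | (obs > hi)
print("days outside the 95% Monte Carlo envelope:", window[signif])
```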

  1. Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo

    Science.gov (United States)

    Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.

    2016-03-01

    Optical imaging in a turbid medium is limited because of the multiple scattering a photon undergoes while traveling through the medium. Therefore, optical imaging is unable to provide high resolution information deep in the medium. In the case of soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering. However, acoustic waves cannot provide medically relevant contrast as well as light can. Hybrid solutions have been applied to combine the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as acoustic radiation force (ARF). ARF induces particle displacement within the medium. The amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium, it undergoes several scattering events. Hence, it generates different scattering paths, which depend on the locations of the particles. Light waves that travel along these paths have different phases (different optical path lengths). ARF induces displacement of scatterers within the acoustic focal volume and thereby changes the optical path lengths. In addition, the temperature rise due to the conversion of absorbed acoustic energy to heat changes the index of refraction and therefore the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.

  2. CosmoPMC: Cosmology Population Monte Carlo

    CERN Document Server

    Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren

    2011-01-01

    We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.

  3. Markov chain Monte Carlo: an introduction for epidemiologists.

    Science.gov (United States)

    Hamra, Ghassan; MacLehose, Richard; Richardson, David

    2013-04-01

    Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit.
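
    The following minimal example illustrates the stated agreement between MCMC and MLE for a simple model: normal data with known variance and a flat prior on the mean, sampled with a random-walk Metropolis algorithm. It is a toy, not an epidemiological analysis.

```python
# Random-walk Metropolis sampler for the mean of normal data (sigma known, flat
# prior); the posterior mean essentially reproduces the maximum-likelihood estimate.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, scale=1.0, size=200)
mle = data.mean()                                   # MLE of the mean when sigma is known

def log_lik(mu):
    return -0.5 * np.sum((data - mu) ** 2)          # log-likelihood up to a constant

chain, mu = [], 0.0
for _ in range(20000):
    prop = mu + rng.normal(scale=0.2)               # random-walk proposal
    if np.log(rng.random()) < log_lik(prop) - log_lik(mu):
        mu = prop                                   # accept
    chain.append(mu)

posterior_mean = np.mean(chain[5000:])              # discard burn-in
print("MLE:", mle, " MCMC posterior mean:", posterior_mean)
```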

  4. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...

  5. Identifiability Analysis on Transient Parameters of Electrical Power Components with Profile Likelihood

    Institute of Scientific and Technical Information of China (English)

    陈润泽; 孙宏斌; 吴文传; 张伯明

    2015-01-01

    This paper presents a method for identifiability analysis of the transient parameters of electrical power component models based on the profile likelihood. Identifiability is a prerequisite for parameter identification: identifiability analysis helps select suitable disturbance data for identification and determines which parameters can be effectively identified. The identifiability of a parameter is determined both by the structure of the model (theoretical identifiability) and by the severity of the actual disturbance and the accuracy of the measurements (practical identifiability). Considering both aspects, the statistical technique of constructing confidence intervals from the profile likelihood is used to analyse parameter identifiability. By solving a series of optimization problems, a confidence interval and an identifiability index are obtained for each model parameter, from which one can judge whether a parameter is theoretically identifiable and how well it can be identified in practice.

  6. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
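
    As a rough sketch of the simple-random-sampling idea (not the paper's implementation), the code below approximates the infectious pressure on each susceptible individual by summing a distance kernel over a random subset of infectives and rescaling by the inverse sampling fraction; the kernel form and all parameters are assumptions.

```python
# Approximate infectious pressure from a random subsample of infectives,
# rescaled by the inverse sampling fraction, compared against the exact sum.
import numpy as np

rng = np.random.default_rng(6)
xy = rng.uniform(0, 10, size=(2000, 2))             # individual locations
infectious = rng.choice(2000, size=400, replace=False)
susceptible = np.setdiff1d(np.arange(2000), infectious)

def kernel(d, beta=0.5):
    return beta * (d + 0.1) ** -2                   # assumed spatial infection kernel

def pressure(sus_idx, inf_idx):
    d = np.linalg.norm(xy[sus_idx][:, None, :] - xy[inf_idx][None, :, :], axis=-1)
    return kernel(d).sum(axis=1)

exact = pressure(susceptible, infectious)
m = 50                                              # sample only 50 of the 400 infectives
sample = rng.choice(infectious, size=m, replace=False)
approx = pressure(susceptible, sample) * (len(infectious) / m)
print("mean relative error of the approximate pressure:",
      np.mean(np.abs(approx - exact) / exact))
```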

  7. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  8. Boltzmann equation analysis and Monte Carlo simulation of electron transport in N2-O2 streamer discharge

    NARCIS (Netherlands)

    Dujko, S.; Ebert, U.; White, R.D.; Petrović, Z.L.

    2010-01-01

    A comprehensive investigation of electron transport in N$_{2}$-O$_{2}$ mixtures has been carried out using a multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique, instead of the conventional two-term theory often employed in the plasma modeling community. We focus on the

  9. Monte Carlo uncertainty analysis of germanium detector response to gamma-rays with energies below 1 MeV

    NARCIS (Netherlands)

    Maleka, PP; Maucec, M

    2005-01-01

    Monte Carlo method was used to simulate the pulse-height response function of high-precision germanium (HPGe) detector for photon energies below 1 MeV. The calculations address the uncertainty estimation due to inadequate specifications of source positioning and to variations in the detector's physi

  10. Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model

    Science.gov (United States)

    Owens, Corina M.

    2011-01-01

    Numerous ways to meta-analyze single-case data have been proposed in the literature, however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…

  11. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, and the likelihood is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  12. Likelihood Analysis of the Local Group Acceleration

    CERN Document Server

    Schmoldt, I M; Teodoro, L; Efstathiou, G P; Frenk, C S; Keeble, O; Maddox, S J; Oliver, S; Rowan-Robinson, M; Saunders, W J; Sutherland, W; Tadros, H; White, S D M

    1999-01-01

    We compute the acceleration on the Local Group using 11206 IRAS galaxies from the recently completed all-sky PSCz redshift survey. Measuring the acceleration vector in redshift space generates systematic uncertainties due to the redshift space distortions in the density field. We therefore assign galaxies to their real space positions by adopting a non-parametric model for the velocity field that solely relies on the linear gravitational instability and linear biasing hypotheses. Remaining systematic contributions to the measured acceleration vector are corrected for by using PSCz mock catalogues from N-body experiments. The resulting acceleration vector points approx. 15 degrees away from the CMB dipole apex, with a remarkable alignment between small and large scale contributions. A considerable fraction of the measured acceleration is generated within 40 h-1 Mpc with a non-negligible contribution from scales between 90 and 140 h-1 Mpc after which the acceleration amplitude seems to have converged. The local...

  13. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    Science.gov (United States)

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
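
    A minimal sketch of the two-reference calibration step described above: the peak volumes of a low and a high concentration standard define a straight line from volume to concentration, which is then applied to metabolite peak volumes. All numerical values are invented placeholders.

```python
# Two-point (low/high reference) linear calibration from peak volume to
# absolute concentration, then applied to metabolite peak volumes.
import numpy as np

ref_conc = np.array([0.5, 20.0])        # mM; low and high internal standards (assumed values)
ref_vol = np.array([1.1e4, 4.3e5])      # corresponding peak volumes per proton (invented)

slope, intercept = np.polyfit(ref_vol, ref_conc, deg=1)   # two points define the line exactly

metabolite_volumes = {"alanine": 9.6e4, "lactate": 2.2e5}  # hypothetical measured volumes
for name, v in metabolite_volumes.items():
    print(f"{name}: {slope * v + intercept:.2f} mM (estimated)")
```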

  14. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, Wies M.W.

    1994-01-01

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning estimates have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Monte Carlo Markov Chains. As an example, the method is applied to

  15. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, W.

    1998-01-01

    In order to obtain conditional maximum likelihood estimates, the conditioning constants are needed. Geyer and Thompson (1992) proposed a Markov chain Monte Carlo method that can be used to approximate these constants when they are difficult to calculate exactly. In the present paper, their method is

  16. Extended Ensemble Monte Carlo

    OpenAIRE

    Iba, Yukito

    2000-01-01

    ``Extended Ensemble Monte Carlo'' is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...

  17. The Empirical Analysis of Worker Families' Livelihood Capital in the State-owned Forest Region of Heilongjiang Province

    Institute of Scientific and Technical Information of China (English)

    韩竺君; 冯孟诗; 和千琪

    2015-01-01

    Based on descriptive analysis of 241 samples drawn from the 2013 livelihood survey database of key state-owned forest areas, this paper characterizes the livelihood capital of worker families in the state-owned forest region of Heilongjiang province. The study shows that adult labourers in worker families carry a heavy burden of supporting elderly relatives; workers are relatively well educated; household income comes mainly from wages, with few other income sources, and the overall income level is not high; and worker families' social capital is weak.

  18. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  19. Introductory statistical inference with the likelihood function

    CERN Document Server

    Rohde, Charles A

    2014-01-01

    This textbook covers the fundamentals of statistical inference and statistical theory including Bayesian and frequentist approaches and methodology possible without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University’s Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians.  After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...

  20. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  1. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  2. Analysis of the dead layer of an ultrapure germanium detector with the Monte Carlo code SWORD-GEANT

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, S.; Querol, A.; Ortiz, J.; Rodenas, J.; Verdu, G.

    2014-07-01

    In this paper, the use of the Monte Carlo code SWORD-GEANT is proposed to simulate an ultrapure High Purity Germanium (HPGe) detector, specifically an ORTEC GMX40P4 detector with coaxial geometry. (Author)

  3. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    Science.gov (United States)

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965). [Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601. Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119.]
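
    For concreteness, a small numerical example of maximum likelihood estimation of the Cauchy location parameter (scale fixed at 1), where the log-likelihood must be maximized numerically; the data are simulated and the bounded search around the sample median is just one convenient choice.

```python
# Numerical MLE of the Cauchy location parameter: minimize the negative
# log-likelihood over a bounded interval around the sample median.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.standard_cauchy(500) + 3.0                  # true location = 3, scale = 1

def neg_loglik(theta):
    return np.sum(np.log(1.0 + (x - theta) ** 2))   # Cauchy(theta, 1), constants dropped

fit = minimize_scalar(neg_loglik,
                      bounds=(np.median(x) - 5, np.median(x) + 5),
                      method="bounded")
print("sample median:", np.median(x), " MLE of location:", fit.x)
```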

  4. Forecasting New Product Sales from Likelihood of Purchase Ratings

    OpenAIRE

    William J. Infosino

    1986-01-01

    This paper compares consumer likelihood of purchase ratings for a proposed new product to their actual purchase behavior after the product was introduced. The ratings were obtained from a mail survey a few weeks before the product was introduced. The analysis leads to a model for forecasting new product sales. The model is supported by both empirical evidence and a reasonable theoretical foundation. In addition to calibrating the relationship between questionnaire ratings and actual purchases...

  5. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  6. Likelihood alarm displays. [for human operator

    Science.gov (United States)

    Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.

    1988-01-01

    In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.

  7. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, and the likelihood is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  8. Probabilistic uncertainty analysis based on Monte Carlo simulations of co-combustion of hazelnut hull and coal blends: Data-driven modeling and response surface optimization.

    Science.gov (United States)

    Buyukada, Musa

    2017-02-01

    The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on a Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation, as functions of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R(2)pred) of 99.5%. A blend ratio of 90/10 (HH to coal, %wt), a temperature of 405°C, and a heating rate of 44°Cmin(-1) were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to confirm the predicted mass loss, and a mass loss of 87.5%±0.2% was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.
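
    The sketch below illustrates the kind of Monte Carlo uncertainty propagation described above: inputs are drawn from assumed distributions and pushed through a made-up quadratic response surface for mass loss, yielding a predictive distribution and percentile interval. The coefficients and input distributions are not those of the study.

```python
# Monte Carlo uncertainty propagation through a hypothetical fitted response
# surface: sample the inputs, evaluate the surface, summarize the output spread.
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
blend = rng.normal(90, 2, n)        # % hazelnut hull in the blend (assumed spread)
rate = rng.normal(44, 3, n)         # heating rate, deg C / min (assumed spread)
temp = rng.normal(405, 10, n)       # temperature, deg C (assumed spread)

def mass_loss(b, r, t):             # made-up quadratic response surface, NOT the paper's model
    return 40 + 0.35 * b + 0.20 * r + 0.02 * t - 0.001 * (b - 90) ** 2

samples = mass_loss(blend, rate, temp)
print("mean mass loss: %.1f%%" % samples.mean())
print("95%% interval: %.1f-%.1f%%" % tuple(np.percentile(samples, [2.5, 97.5])))
```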

  9. Monte Carlo simulation and Boltzmann equation analysis of non-conservative positron transport in H{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Bankovic, A., E-mail: ana.bankovic@gmail.com [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Dujko, S. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Centrum Wiskunde and Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam (Netherlands); ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); White, R.D. [ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); Buckman, S.J. [ARC Centre for Antimatter-Matter Studies, Australian National University, Canberra, ACT 0200 (Australia); Petrovic, Z.Lj. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)

    2012-05-15

    This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and a multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n{sub 0} are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n{sub 0} the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component, with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H{sub 2} under the influence of an electric field, spatially dependent positron transport properties, such as the number of positrons, the average energy and velocity, and the spatially resolved rate for Ps formation, are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and the extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.

  10. Analysis of genetic relationships with the ITO method using a self-developed likelihood ratio calculator

    Institute of Scientific and Technical Information of China (English)

    周密; 张韩秋; 韦帆; 丁仁杰; 王成权

    2011-01-01

    Objective: To evaluate the feasibility of genetic relationship analysis with the ITO method using a self-developed likelihood ratio (LR) calculator. Methods: Using typing data for the 19 autosomal STR loci of the Goldeneye DNA ID System 20A, 100,000 simulated pairs each of unrelated individuals, parent-child, full siblings, half siblings, first cousins, and second cousins were generated, and the paternity index (PI) and relative chance of paternity (RCP) were automatically computed with the ITO method using the self-developed LR calculator. Results: Parent-child pairs and unrelated individuals did not overlap at any value of the relationship probability, a significant difference. For full siblings, 66.48% of sibling pairs had a relationship probability above 99.99% versus 0% of unrelated pairs, a significant difference; below 99.99% the two groups partially overlapped. For half siblings, 1.52% of pairs exceeded 99.99% versus 0% of unrelated pairs, a significant difference; below 99.99% the two groups partially overlapped. For first and second cousins, 90% and 99.98% of unrelated individuals, respectively, fell within the 10%-90% probability range, so no inference could be drawn. Conclusion: The method can be used to infer parent-child, full-sibling, and half-sibling relationships.

  11. Monte Carlo methods

    OpenAIRE

    Bardenet, R.

    2012-01-01

    ISBN: 978-2-7598-1032-1; International audience; Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
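
    As a toy illustration of one of the techniques reviewed here (and not code from the cited work), the following sketch implements rejection sampling for a Beta(2, 5) target with a uniform proposal on [0, 1]; the target and the envelope constant are arbitrary choices made for the example.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
target = beta(2, 5)                     # target density f(x) on [0, 1]
M = 2.5                                 # envelope: f(x) <= M * 1.0 everywhere on [0, 1]

def rejection_sample(n):
    out = []
    while len(out) < n:
        x = rng.uniform(0, 1)           # draw from the uniform proposal g
        u = rng.uniform(0, 1)
        if u < target.pdf(x) / M:       # accept with probability f(x) / (M g(x))
            out.append(x)
    return np.array(out)

draws = rejection_sample(10_000)
print(draws.mean(), target.mean())      # both close to 2 / (2 + 5) ~ 0.286
```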

  12. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"...
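
    The Buffon's needle problem mentioned above is a convenient first Monte Carlo experiment. The generic sketch below (not code from the book) estimates π from simulated needle drops, using a needle length equal to the line spacing so that the crossing probability is 2/π.

```python
import numpy as np

rng = np.random.default_rng(42)

def buffon_pi(n_drops, needle_len=1.0, line_gap=1.0):
    """Estimate pi by dropping needles on a plane ruled with parallel lines."""
    # Distance from the needle centre to the nearest line, and needle orientation
    d = rng.uniform(0, line_gap / 2, n_drops)
    theta = rng.uniform(0, np.pi / 2, n_drops)
    crossings = d <= (needle_len / 2) * np.sin(theta)
    p_cross = crossings.mean()          # estimates 2 L / (pi * gap) when L <= gap
    return 2 * needle_len / (p_cross * line_gap)

print(buffon_pi(1_000_000))             # typically prints something near 3.14
```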

  13. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose an EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well...

  14. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    We consider two likelihood ratio tests, the so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step towards generalizing Johansen's procedure to the fractional cointegration case. The s...

  15. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  16. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimator of the search cost distribution. We apply our method to a data set of online p...

  17. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  18. Maximum likelihood method and Fisher's information in physics and econophysics

    CERN Document Server

    Syska, Jacek

    2012-01-01

    Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and the notion of Fisher information in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...

  19. Diagrammatic Monte Carlo approach for diagrammatic extensions of dynamical mean-field theory -- convergence analysis of the dual fermion technique

    CERN Document Server

    Gukelberger, Jan; Hafermann, Hartmut

    2016-01-01

    The dual-fermion approach provides a formally exact prescription for calculating properties of a correlated electron system in terms of a diagrammatic expansion around dynamical mean-field theory (DMFT). It can address the full range of interactions, the lowest order theory is asymptotically exact in both the weak- and strong-coupling limits, and the technique naturally incorporates long-range correlations beyond the reach of current cluster extensions to DMFT. Most practical implementations, however, neglect higher-order interaction vertices beyond two-particle scattering in the dual effective action and further truncate the diagrammatic expansion in the two-particle scattering vertex to a leading-order or ladder-type approximation. In this work we compute the dual-fermion expansion for the Hubbard model including all diagram topologies with two-particle interactions to high orders by means of a stochastic diagrammatic Monte Carlo algorithm. We use benchmarking against numerically exact Diagrammatic Determin...

  20. Induced radioactivity analysis for the NSRL Linac in China using Monte Carlo simulations and gamma-spectroscopy

    CERN Document Server

    He, Lijuan; Li, Weimin; Chen, Zhi; Chen, Yukai; Ren, Guangyi

    2014-01-01

    The 200-MeV electron linac of the National Synchrotron Radiation Laboratory (NSRL) located in Hefei is one of the earliest high-energy electron linear accelerators in China. The electrons are accelerated to 200 MeV by five acceleration tubes and are collimated by scrapers. The scraper aperture is smaller than that of the acceleration tube, so some electrons hit the material when passing through it. These lost electrons cause induced radioactivity, mainly through bremsstrahlung and photonuclear reactions. This paper describes a study of induced radioactivity for the NSRL Linac using FLUKA simulations and gamma spectroscopy. The measurements showed that electrons were lost mainly at the scraper, so the induced radioactivity of the NSRL Linac is mainly produced there. The radionuclide types were simulated using the FLUKA Monte Carlo code and the results were compared against measurements made with a High Purity Germanium (HPGe) gamma spectrometer. The NSRL Linac was retired last year for an upgrade. The re...

  1. An analysis of exposure dose on hands of radiation workers using a Monte Carlo simulation in nuclear medicine

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Dong Gun [Dept. of Nuclear Medicine, Dongnam Institute of Radiological and Medical Sciences Cancer Center, Pusan (Korea, Republic of); Kang, SeSik; Kim, Jung Hoon; KIm, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University, Pusan (Korea, Republic of)

    2015-12-15

    Workers in nuclear medicine perform various tasks such as production, distribution, preparation and injection of radioisotopes. These tasks can cause high radiation exposure to workers' hands. The purpose of this study was to investigate the shielding effect for γ-rays of 140 and 511 keV using Monte Carlo simulation. At 140 keV, shielding was effective regardless of lead thickness. At 511 keV, however, shielding was effective only for material thicknesses greater than 1.1 mm; below 1.1 mm it was not effective because of secondary scattered radiation, and the exposure dose actually increased. Consequently, the energy of the radionuclide and the thickness of the shielding material should both be considered to reduce radiation exposure.
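
    A minimal sketch of the underlying transmission calculation is a narrow-beam attenuation Monte Carlo, shown below with assumed linear attenuation coefficients for lead; note that such a simple model ignores the scatter build-up that the study identifies as important for thin shields at 511 keV.

```python
import numpy as np

rng = np.random.default_rng(7)

def transmitted_fraction(mu_per_mm, thickness_mm, n_photons=1_000_000):
    """Narrow-beam Monte Carlo: sample each photon's free path and count
    those that traverse the slab without interacting (scatter build-up ignored)."""
    free_paths = rng.exponential(1.0 / mu_per_mm, n_photons)
    return np.mean(free_paths > thickness_mm)

# Illustrative linear attenuation coefficients for lead (assumed values, mm^-1)
for label, mu in [("140 keV", 2.6), ("511 keV", 0.17)]:
    for t in (0.5, 1.1, 2.0):
        print(label, f"{t} mm lead -> transmitted {transmitted_fraction(mu, t):.3f}")
```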

  2. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    Science.gov (United States)

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.

  3. RELM (the Working Group for the Development of Region Earthquake Likelihood Models) and the Development of new, Open-Source, Java-Based (Object Oriented) Code for Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Field, E. H.

    2001-12-01

    Given problems with virtually all previous earthquake-forecast models for southern California, and a current lack of consensus on how such models should be constructed, a joint SCEC-USGS sponsored working group for the development of Regional Earthquake Likelihood Models (RELM) has been established (www.relm.org). The goals are as follows: 1) To develop and test a range of viable earthquake-potential models for southern California (not just one "consensus" model); 2) To examine and compare the implications of each model with respect to probabilistic seismic-hazard estimates (which will not only quantify existing hazard uncertainties, but will also indicate how future research should be focused in order to reduce the uncertainties); and 3) To design and document conclusive tests of each model with respect to existing and future geophysical observations. The variety of models under development reflects the variety of geophysical constraints available; these include geological fault information, historical seismicity, geodetic observations, stress-transfer interactions, and foreshock/aftershock statistics. One reason for developing and testing a range of models is to evaluate the extent to which any one can be exported to another region where the options are more limited. RELM is not intended to be a one-time effort. Rather, we are building an infrastructure that will facilitate an ongoing incorporation of new scientific findings into seismic-hazard models. The effort involves the development of several community models and databases, one of which is new Java-based code for probabilistic seismic hazard analysis (PSHA). Although several different PSHA codes presently exist, none are open source, well documented, and written in an object-oriented programming language (which is ideally suited for PSHA). Furthermore, we need code that is flexible enough to accommodate the wide range of models currently under development in RELM. The new code is being developed under

  4. San Carlo Operaen

    DEFF Research Database (Denmark)

    Holm, Bent

    2005-01-01

    A contextualization of the San Carlo opera house within a cultural-historical framework of representation, with particular focus on the concept of napolalità.

  5. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    ROESSEL, ROBERT A., JR.

    THE FIRST SECTION OF THIS BOOK COVERS THE HISTORICAL AND CULTURAL BACKGROUND OF THE SAN CARLOS APACHE INDIANS, AS WELL AS AN HISTORICAL SKETCH OF THE DEVELOPMENT OF THEIR FORMAL EDUCATIONAL SYSTEM. THE SECOND SECTION IS DEVOTED TO THE PROBLEMS OF TEACHERS OF THE INDIAN CHILDREN IN GLOBE AND SAN CARLOS, ARIZONA. IT IS DIVIDED INTO THREE PARTS--(1)…

  6. Empirical likelihood method for non-ignorable missing data problems.

    Science.gov (United States)

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, where the probability that a response is missing depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we can obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of real AIDS trial data shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on observed data only is biased.

  7. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method, termed comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  8. Scale invariant for one-sided multivariate likelihood ratio tests

    Directory of Open Access Journals (Sweden)

    Samruam Chongcharoen

    2010-07-01

    Suppose $X_1, X_2, \ldots, X_n$ is a random sample from an $N_p(\mu, V)$ distribution. Consider $H_0: \mu_1 = \mu_2 = \cdots = \mu_p = 0$ and $H_1: \mu_i \ge 0$ for $i = 1, 2, \ldots, p$; let $H_1 - H_0$ denote the hypothesis that $H_1$ holds but $H_0$ does not, and let $\sim H_0$ denote the hypothesis that $H_0$ does not hold. Because the likelihood ratio test (LRT) of $H_0$ versus $H_1 - H_0$ is complicated, several ad hoc tests have been proposed. Tang, Gnecco and Geller (1989) proposed an approximate LRT, Follmann (1996) suggested rejecting $H_0$ if the usual test of $H_0$ versus $\sim H_0$ rejects $H_0$ with significance level $2\alpha$ and a weighted sum of the sample means is positive, and Chongcharoen, Singh and Wright (2002) modified Follmann's test to include information about the correlation structure in the sum of the sample means. Chongcharoen and Wright (2007, 2006) give versions of the Tang-Gnecco-Geller tests and Follmann-type tests, respectively, with invariance properties. Given the desirable scale-invariance property of the LRT, we investigate its power using Monte Carlo techniques and compare it with the tests recommended in Chongcharoen and Wright (2007, 2006).
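
    Estimating a test's power by Monte Carlo, as done in the study, follows a generic recipe: simulate data under a chosen alternative, apply the test, and record the rejection rate. The sketch below uses a one-sided one-sample t-test purely for illustration; it is not one of the multivariate tests compared in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def mc_power(mu, n=20, alpha=0.05, n_sim=20_000):
    """Monte Carlo estimate of the power of a one-sided one-sample t-test."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(mu, 1.0, n)
        t, p = stats.ttest_1samp(x, 0.0, alternative="greater")
        rejections += p < alpha
    return rejections / n_sim

# At mu = 0 the estimate approximates the size (about 0.05); power grows with mu
for mu in (0.0, 0.3, 0.6):
    print(f"mu = {mu}: estimated power = {mc_power(mu):.3f}")
```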

  9. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
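
    A minimal 1-D sketch of the characteristics idea is given below: the state equation is augmented by one extra dimension carrying the log-density, whose rate of change along a trajectory is minus the divergence of the vector field (the Liouville equation along characteristics). The vector field and initial density are toy assumptions, not the paper's models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 1-D ODE dx/dt = f(x); along a characteristic, d(log p)/dt = -df/dx.
def f(x):
    return -x + 0.5 * np.sin(x)

def dfdx(x):
    return -1.0 + 0.5 * np.cos(x)

def augmented(t, y):
    x, logp = y
    return [f(x), -dfdx(x)]

# Initial density: standard normal at t = 0, evaluated at one starting point
x0 = 1.3
logp0 = -0.5 * x0**2 - 0.5 * np.log(2 * np.pi)
sol = solve_ivp(augmented, (0.0, 2.0), [x0, logp0])

xT, logpT = sol.y[:, -1]
print(f"characteristic ends at x = {xT:.3f} with density p = {np.exp(logpT):.4f}")
```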

  10. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem for general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computational procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  11. Analysis of light incident location and detector position in early diagnosis of knee osteoarthritis by Monte Carlo simulation

    Science.gov (United States)

    Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling

    2017-01-01

    Early detection of knee osteoarthritis (KOA) is valuable for delaying or preventing the onset of osteoarthritis. Given the structural complexity of the knee joint, the positions of the light incidence point and the detector are extremely important in optical inspection. In this paper, the propagation of 780-nm near-infrared photons in a three-dimensional knee joint model is simulated by the Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of the incident and detecting locations on the number of detected signal photons and the signal-to-noise ratio (SNR). First, a three-dimensional photon propagation model of the knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in the three-dimensional knee joint model. Photons that finally migrate out of the knee joint surface are numerically analyzed. By analyzing the number of signal photons and the SNR for the six given incident locations, the optimal incident and detecting location is identified. Finally, a series of phantom experiments is conducted to verify the simulation results. According to the simulation and phantom experiment results, the best incident location is near the right side of the meniscus at the rear of the left knee joint, with the detector placed near the patella.

  12. A Monte Carlo Analysis of Weight Data from UF6 Cylinder Feed and Withdrawal Stations

    Energy Technology Data Exchange (ETDEWEB)

    Garner, James R [ORNL; Whitaker, J Michael [ORNL

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at the Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Center (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator's proprietary and sensitive information related to operations. A key question that must be resolved is: what frequency of recording data from the process F/W stations is necessary to achieve safeguards objectives? This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling intervals to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.

  13. On-the-fly estimation strategy for uncertainty propagation in two-step Monte Carlo calculation for residual radiation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety,, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-06-15

    In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by the MC2 appears only as a final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the propagation degree of the stochastic error in MC1. The brute force technique is a straightforward method to estimate the true uncertainty. However, it is a costly method to obtain reliable results. Another method, called the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty; however, there are limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method increases the applicability and user-friendliness preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.

  14. Likelihood methods and classical burster repetition

    CERN Document Server

    Graziani, C; Graziani, Carlo; Lamb, Donald Q

    1995-01-01

    We develop a likelihood methodology which can be used to search for evidence of burst repetition in the BATSE catalog, and to study the properties of the repetition signal. We use a simplified model of burst repetition in which a number N_{\\rm r} of sources which repeat a fixed number of times N_{\\rm rep} are superposed upon a number N_{\\rm nr} of non-repeating sources. The instrument exposure is explicitly taken into account. By computing the likelihood for the data, we construct a probability distribution in parameter space that may be used to infer the probability that a repetition signal is present, and to estimate the values of the repetition parameters. The likelihood function contains contributions from all the bursts, irrespective of the size of their positional errors --- the more uncertain a burst's position is, the less constraining is its contribution. Thus this approach makes maximal use of the data, and avoids the ambiguities of sample selection associated with data cuts on error circle size. We...

  15. Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation

    CERN Document Server

    Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K

    2015-01-01

    We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al.\\ (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting \\WMAP\\ as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked \\WMAP\\ sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8\\% at $\\ell\\le32$, and a...

  16. Monte Carlo Likelihood Estimation of Mixed-Effects State Space Models with Application to HIV Dynamics

    Institute of Scientific and Technical Information of China (English)

    ZHOU Jie; TANG Aiping; FENG Hailin

    2016-01-01

    The statistical inference for generalized mixed-effects state space models (MESSM) is investigated when the random effects are unknown. Two filtering algorithms are designed, both of which are based on the mixture Kalman filter. These algorithms are particularly useful when the longitudinal measurements are sparse. The authors also propose a globally convergent algorithm for parameter estimation of MESSM which can be used to locate initial values of the parameters for local, more efficient algorithms. Simulation examples are carried out which validate the efficacy of the proposed approaches. A data set from a clinical trial is investigated and a smaller mean square error is achieved compared to existing results in the literature.
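
    The mixture Kalman filter used by the authors builds on the standard Kalman recursion. The sketch below is a generic scalar linear-Gaussian Kalman filter (textbook predict/update steps with hypothetical model parameters), not the authors' MESSM algorithm.

```python
import numpy as np

def kalman_filter(y, a, c, q, r, m0=0.0, p0=1.0):
    """Scalar linear-Gaussian Kalman filter for
       x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
       y_t = c * x_t     + v_t,  v_t ~ N(0, r)."""
    m, p = m0, p0
    means = []
    for yt in y:
        # Predict
        m, p = a * m, a * a * p + q
        # Update
        k = p * c / (c * c * p + r)              # Kalman gain
        m, p = m + k * (yt - c * m), (1 - k * c) * p
        means.append(m)
    return np.array(means)

# Simulate data from the assumed model, then filter it
rng = np.random.default_rng(5)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal(0, 0.5)
    ys.append(0.8 * x + rng.normal(0, 1.0))
print(kalman_filter(np.array(ys), a=0.9, c=0.8, q=0.25, r=1.0)[-5:])
```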

  17. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  18. A Bayesian Approach to a Multiple-Group Latent Class-Profile Analysis: The Timing of Drinking Onset and Subsequent Drinking Behaviors among U.S. Adolescents

    Science.gov (United States)

    Chung, Hwan; Anthony, James C.

    2013-01-01

    This article presents a multiple-group latent class-profile analysis (LCPA) by taking a Bayesian approach in which a Markov chain Monte Carlo simulation is employed to achieve more robust estimates for latent growth patterns. This article describes and addresses a label-switching problem that involves the LCPA likelihood function, which has…

  19. Energy dispersive X-ray fluorescence spectroscopy/Monte Carlo simulation approach for the non-destructive analysis of corrosion patina-bearing alloys in archaeological bronzes: The case of the bowl from the Fareleira 3 site (Vidigueira, South Portugal)

    Energy Technology Data Exchange (ETDEWEB)

    Bottaini, C. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Mirão, J. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Geophysics Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal); Figuereido, M. [Archaeologist — Monte da Capelinha, Apartado 54, 7005, São Miguel de Machede, Évora (Portugal); Candeias, A. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Chemistry Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal); Brunetti, A. [Department of Political Science and Communication, University of Sassari, Via Piandanna 2, 07100 Sassari (Italy); Schiavon, N., E-mail: schiavon@uevora.pt [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Geophysics Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal)

    2015-01-01

    Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts, in terms of both qualitative and quantitative elemental composition, because of its rapidity and non-destructiveness. In this study EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina has been characterized by micro X-ray Diffractometry (μXRD) and Back-Scattered Scanning Electron Microscopy + Energy Dispersive Spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample. - Highlights: • EDXRF/Monte Carlo simulation is used to characterize an archaeological alloy. • EDXRF analysis was performed on cleaned and patina-coated areas of the artifact. • The EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered. • When the patina is too thick, X-rays from the substrate are unable to exit the sample.

  20. Fast and Accurate Identification of Cross-Linked Peptides for the Structural Analysis of Large Protein Complexes and Elucidation of Interaction Networks. / Tahir, Salman; Bukowski-Wills, Jimi-Carlo; Rasmussen, Morten; Rappsilber, Juri

    DEFF Research Database (Denmark)

    Rasmussen, Morten

    Fast and Accurate Identification of Cross-Linked Peptides for the structural analysis of large protein complexes and to elucidate interaction networks. Salman Tahir, Jimi-Carlo Bukowski-Wills, Morten Rasmussen, Juri Rappsilber. Wellcome Trust Centre for Cell Biology, Edinburgh, United Kingdom. Novel...

  1. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of

  2. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

    It is well known that in some situations it is not easy to compute the likelihood function, as the dataset may be large or the model too complex. In such contexts the composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods is used as a procedure for testing in the composite likelihood context. In this paper we intro...

  3. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  4. Efficient maximum likelihood parameterization of continuous-time Markov processes

    CERN Document Server

    McGibbon, Robert T

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
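
    A naive baseline for this estimation problem (not the estimator introduced in the paper) is to form the empirical one-step transition matrix from the discretely observed chain and take its matrix logarithm. The three-state generator below is an assumed example, and the result of this shortcut need not be a valid generator.

```python
import numpy as np
from scipy.linalg import expm, logm

dt = 0.1
Q_true = np.array([[-1.0, 0.7, 0.3],
                   [0.4, -0.9, 0.5],
                   [0.2, 0.6, -0.8]])
P_true = expm(Q_true * dt)              # exact transition matrix over one interval

# Simulate a discretely observed chain
rng = np.random.default_rng(11)
states = [0]
for _ in range(100_000):
    p = P_true[states[-1]]
    states.append(rng.choice(3, p=p / p.sum()))

# Empirical transition matrix, then a rough generator estimate via logm
counts = np.zeros((3, 3))
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
Q_hat = logm(P_hat).real / dt           # may have small negative off-diagonals
print(np.round(Q_hat, 2))
```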

  5. Quantum Monte Carlo simulation

    OpenAIRE

    Wang, Yazhen

    2011-01-01

    Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...

  6. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  7. Generalizing Terwilliger's likelihood approach: a new score statistic to test for genetic association

    OpenAIRE

    Hsu Li; Helmer Quinta; de Visser Marieke CH; Uitte de Willige Shirley; el Galta Rachid; Houwing-Duistermaat Jeanine J

    2007-01-01

    Background: In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. Results: By means of a simulation study...

  8. Computing Critical Values of Exact Tests by Incorporating Monte Carlo Simulations Combined with Statistical Tables.

    Science.gov (United States)

    Vexler, Albert; Kim, Young Min; Yu, Jihnhee; Lazar, Nicole A; Hutson, Aland

    2014-12-01

    Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of needed Monte Carlo resamples for desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and characteristics of data used to present corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
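
    The Monte Carlo ingredient of such hybrid procedures can be illustrated with the simplest case: approximating an exact permutation p-value by random resampling. The two-sample difference-in-means test below is a generic example, not the Bayesian-type hybrid method of the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_permutation_pvalue(x, y, n_resamples=10_000):
    """Monte Carlo approximation to the exact permutation p-value for the
    two-sided difference-in-means test."""
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_resamples):
        perm = rng.permutation(pooled)
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        count += stat >= observed
    return (count + 1) / (n_resamples + 1)   # add-one correction keeps p > 0

x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.5, 1.0, 30)
print(mc_permutation_pvalue(x, y))
```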

  9. CMB Power Spectrum Likelihood with ILC

    CERN Document Server

    Dick, Jason; Delabrouille, Jacques

    2012-01-01

    We extend the ILC method in harmonic space to include the error in its CMB estimate. This allows parameter estimation routines to take into account the effect of the foregrounds as well as the errors in their subtraction in conjunction with the ILC method. Our method requires the use of a model of the foregrounds which we do not develop here. The reduction of the foreground level makes this method less sensitive to unaccounted for errors in the foreground model. Simulations are used to validate the calculations and approximations used in generating this likelihood function.

  10. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    ... are challenging. A common approach in such cases is to abandon the tracking routine and re-initialize eye detection; this may, of course, be a difficult process due to the missing-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames. The paper proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from the even and odd infrared fields (dark- and bright-pupil) and their corresponding subtractive image into one single observation model...

  11. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  12. Noncentral Chi-Square Versus Normal Distributions in Describing the Likelihood Ratio Statistic: The Univariate Case and Its Multivariate Implication.

    Science.gov (United States)

    Yuan, Ke-Hai

    2008-01-01

    In the literature of mean and covariance structure analysis, the noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under the alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the noncentral chi-square distribution is justified by statistical theory. Actually, when the null hypothesis is not trivially violated, the noncentral chi-square distribution cannot describe the LR statistic well even when data are normally distributed and the sample size is large. Using the one-dimensional case, this article provides the details showing that the LR statistic asymptotically follows a normal distribution, which also leads to an asymptotically correct confidence interval for the discrepancy between the null hypothesis/model and the population. For each one-dimensional result, the corresponding results in the higher dimensional case are pointed out and references are provided. Examples with real data illustrate the difference between the noncentral chi-square distribution and the normal distribution. Monte Carlo results compare the strength of the normal distribution against that of the noncentral chi-square distribution. The implication for data analysis is discussed whenever relevant. The development is built upon the concepts of basic calculus, linear algebra, and introductory probability and statistics. The aim is to provide the least technical material for quantitative graduate students in social science to understand the conditions and limitations of the noncentral chi-square distribution.

  13. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The assumption of linearity is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.

  14. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs.

  15. Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation.

    Science.gov (United States)

    Stathopoulos, Vassilios; Girolami, Mark A

    2013-02-13

    Bayesian analysis for Markov jump processes (MJPs) is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, thus its applicability is limited to a small class of problems. In this paper, we describe the application of Riemann manifold Markov chain Monte Carlo (MCMC) methods using an approximation to the likelihood of the MJP that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient whereas the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
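
    For readers unfamiliar with MCMC itself, the following is a minimal random-walk Metropolis sketch for a toy one-dimensional posterior; it is a generic illustration, not the Riemann manifold MCMC or linear noise approximation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_post(theta):
    """Toy unnormalized log-posterior (standard normal)."""
    return -0.5 * theta**2

def metropolis(n_samples, step=1.0, theta0=0.0):
    samples = np.empty(n_samples)
    theta, lp = theta0, log_post(theta0)
    accepted = 0
    for i in range(n_samples):
        prop = theta + step * rng.normal()       # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp: # Metropolis accept/reject
            theta, lp = prop, lp_prop
            accepted += 1
        samples[i] = theta
    print(f"acceptance rate: {accepted / n_samples:.2f}")
    return samples

draws = metropolis(50_000)
print(draws.mean(), draws.std())                 # approximately 0 and 1
```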

  16. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by the maximum likelihood estimator (MLE). However, EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. In this paper, a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters, where a genetic algorithm is designed to search for the optimal values of the pharmacokinetic parameters. The performances of EKF and DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and it is found that the results based on DMF are more accurate than those given by EKF with respect to mean absolute error.
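
    A simulation-based filter of the general family discussed here can be sketched as a bootstrap (sampling-importance-resampling) particle filter. The scalar nonlinear model below is a standard benchmark chosen for illustration; it is not the density-based Monte Carlo filter proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Assumed toy nonlinear state-space model:
#   x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + w_t,  w_t ~ N(0, 1)
#   y_t = x_t^2 / 20 + v_t,                                   v_t ~ N(0, 1)
def propagate(x):
    return 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 1, x.shape)

def bootstrap_filter(y, n_particles=2_000):
    particles = rng.normal(0, 2, n_particles)
    estimates = []
    for yt in y:
        particles = propagate(particles)
        # Importance weights from the measurement density (with underflow guard)
        w = np.exp(-0.5 * (yt - particles**2 / 20) ** 2) + 1e-300
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Multinomial resampling
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# Simulate data from the model, then filter it
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = float(propagate(np.array([x]))[0])
    xs.append(x)
    ys.append(x**2 / 20 + rng.normal())
est = bootstrap_filter(np.array(ys))
print(np.mean(np.abs(est - np.array(xs))))   # mean absolute filtering error
```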

  17. Inference in HIV dynamics models via hierarchical likelihood

    OpenAIRE

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...

  18. A Monte Carlo Library Least Square approach in the Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) process in bulk coal samples

    Science.gov (United States)

    Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis

    2017-01-01

    A new Monte Carlo Library Least Squares (MCLLS) approach for treating the non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the {sup 3}H({sup 2}H,n){sup 4}He reaction. The prompt gamma ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as the neutron moderator, along with iron and lead as neutron and gamma ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Squares (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
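
    The linear library least-squares step can be illustrated independently of the physics: fit a measured spectrum as a non-negative combination of single-element library spectra. The libraries and mixture below are synthetic stand-ins, not GEANT4-generated ones.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Hypothetical single-element library spectra (rows: channels, columns: elements)
n_channels, n_elements = 256, 3
libraries = np.abs(rng.normal(1.0, 0.3, (n_channels, n_elements)))

# Synthetic "measured" spectrum: a known mixture plus Poisson counting noise
true_weights = np.array([5.0, 2.0, 0.7])
measured = rng.poisson(libraries @ true_weights * 100) / 100.0

# Library least-squares fit with non-negativity (element weights cannot be negative)
weights, residual = nnls(libraries, measured)
print("fitted element weights:", np.round(weights, 2))
```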

  19. Uncertainty of modelled urban peak O3 concentrations and its sensitivity to input data perturbations based on the Monte Carlo analysis

    Science.gov (United States)

    Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.

    2016-09-01

    A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the largest 1-h peak O3 levels in the MABA during the summer have the largest uncertainties (up to 47 ppb). In addition, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that its estimation could be improved to enhance the ability of the model to simulate peak O3 concentrations in the MABA.

  20. Spatial assessment of the economic feasibility of short rotation coppice on radioactively contaminated land in Belarus, Ukraine, and Russia. II. Monte Carlo analysis.

    Science.gov (United States)

    Van Der Perk, Marcel; Burema, Jiske; Vandenhove, Hildegarde; Goor, François; Timofeyev, Sergei

    2004-09-01

    A Monte Carlo analysis of two sequential GIS-embedded submodels, which evaluate the economic feasibility of short rotation coppice (SRC) production and energy conversion in areas contaminated by Chernobyl-derived (137)Cs, was performed to allow for variability of environmental conditions that was not contained in the spatial model inputs. The results from this analysis were compared to the results from the deterministic model presented in part I of this paper. It was concluded that, although the variability in the model results due to within-gridcell variability of the model inputs was considerable, the prediction of the areas where SRC and energy conversion is potentially profitable was robust. If the additional variability in the model input that is not contained in the input maps is also taken into account, the SRC production and energy conversion appears to be potentially profitable at more locations for both the small scale and large scale production scenarios than the model predicted using the deterministic model.

  1. Communication Network Reliability Analysis with the Monte Carlo Method: Status and Prospects

    Institute of Scientific and Technical Information of China (English)

    王建秋

    2011-01-01

    Communication networks, power transmission networks, integrated circuit networks, and transport networks now pervade every aspect of social life, and their reliability bears directly on the national economy and people's livelihood, so research on their reliability is of great importance. Because of the complexity of communication networks, the reliability analysis of a communication network system is quite difficult. This paper therefore applies the Monte Carlo method to an in-depth study of communication network system reliability analysis.
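    A minimal sketch of the Monte Carlo reliability idea, assuming a toy five-link network and two-terminal (source-to-sink) reliability as the figure of merit, is given below; the topology and link probabilities are illustrative only.

      # Sketch of Monte Carlo two-terminal network reliability: repeatedly sample
      # which links are up, check source-to-sink connectivity, and average.
      import random
      from collections import defaultdict

      # Toy network: (node_a, node_b, probability that the link is working)
      links = [("s", "a", 0.9), ("s", "b", 0.8), ("a", "b", 0.7),
               ("a", "t", 0.9), ("b", "t", 0.85)]

      def connected(up_links, source="s", sink="t"):
          """Depth-first search over the links that are up in this trial."""
          adjacency = defaultdict(list)
          for a, b in up_links:
              adjacency[a].append(b)
              adjacency[b].append(a)
          stack, seen = [source], {source}
          while stack:
              node = stack.pop()
              if node == sink:
                  return True
              for nxt in adjacency[node]:
                  if nxt not in seen:
                      seen.add(nxt)
                      stack.append(nxt)
          return False

      def reliability(n_trials=100_000, seed=2):
          rng = random.Random(seed)
          hits = 0
          for _ in range(n_trials):
              up = [(a, b) for a, b, p in links if rng.random() < p]
              hits += connected(up)
          return hits / n_trials

      print(f"estimated s-t reliability: {reliability():.4f}")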

  2. MESS (Multi-purpose Exoplanet Simulation System): A Monte Carlo tool for the statistical analysis and prediction of exoplanets search results

    CERN Document Server

    Bonavita, M; Desidera, S; Gratton, R; Janson, M; Beuzit, J L; Kasper, M; Mordasini, C

    2011-01-01

    The high number of planet discoveries made in the last years provides a good sample for statistical analysis, leading to some clues on the distributions of planet parameters, like masses and periods, at least in close proximity to the host star. We likely need to wait for the extremely large telescopes (ELTs) to have an overall view of the extrasolar planetary systems. In this context it would be useful to have a tool that can be used for the interpretation of the present results, and also to predict the outcomes of future instruments. For this reason we built MESS: a Monte Carlo simulation code which uses either the results of the statistical analysis of the properties of discovered planets, or the results of planet formation theories, to build synthetic planet populations fully described in terms of frequency, orbital elements and physical properties. They can then be used to either test the consistency of their properties with the observed population of planets given different detectio...

  3. Particle in cell/Monte Carlo collision analysis of the problem of identification of impurities in the gas by the plasma electron spectroscopy method

    Science.gov (United States)

    Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.

    2016-06-01

    The work deals with the Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, 1d3v PIC/MCC code for numerical simulation of glow discharge with nonlocal electron energy distribution function is developed. The elastic, excitation, and ionization collisions between electron-neutral pairs and isotropic scattering and charge exchange collisions between ion-neutral pairs and Penning ionizations are taken into account. Applicability of the numerical code is verified under the Radio-Frequency capacitively coupled discharge conditions. The efficiency of the code is increased by its parallelization using Open Message Passing Interface. As a demonstration of the PLES method, parallel PIC/MCC code is applied to the direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of formation of nonlocal EEDF and existing experimental data.

  4. Nonparametric likelihood based estimation of linear filters for point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2015-01-01

    result is a representation of the gradient of the log-likelihood, which we use to derive computable approximations of the log-likelihood and the gradient by time discretization. These approximations are then used to minimize the approximate penalized log-likelihood. For time and memory efficiency...

  5. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium. On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  6. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.
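    The sampling machinery referred to above can be illustrated with a generic Metropolis sampler; the toy normal likelihood and prior below are placeholders for the discretized discovery-process posterior, not Smith's model itself.

      # Sketch of the Metropolis algorithm for drawing samples from a posterior.
      # The toy model (normal likelihood, normal prior on the mean) stands in for
      # the discovery-process posterior discussed above.
      import math
      import random

      data = [4.1, 5.3, 4.8, 5.9, 5.1, 4.4]

      def log_posterior(mu, sigma=1.0, prior_mean=0.0, prior_sd=10.0):
          log_prior = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
          log_like = sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)
          return log_prior + log_like

      def metropolis(n_samples=20_000, step=0.5, seed=3):
          rng = random.Random(seed)
          mu, logp = 0.0, log_posterior(0.0)
          samples = []
          for _ in range(n_samples):
              proposal = mu + rng.gauss(0.0, step)
              logp_new = log_posterior(proposal)
              # accept with probability min(1, posterior ratio); 1-random() avoids log(0)
              if math.log(1.0 - rng.random()) < logp_new - logp:
                  mu, logp = proposal, logp_new
              samples.append(mu)
          return samples[n_samples // 2:]              # discard burn-in

      draws = metropolis()
      print(f"posterior mean of mu: {sum(draws) / len(draws):.3f}")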

  7. Application of an Improved Monte Carlo Algorithm to Carbon Emission Price Prediction and Return Analysis

    Institute of Scientific and Technical Information of China (English)

    刘大旭

    2015-01-01

    Traditional carbon price prediction and return analysis mostly rely on econometric methods, which place high demands on the authenticity and quantity of the sample data. Because the Monte Carlo algorithm is well suited to small samples and offers high prediction accuracy, it is applied here to carbon price prediction and return analysis. A mathematical model is built from the Wiener process, and the Monte Carlo algorithm is then used to simulate the evolution of the Wiener process and thereby predict the carbon price. JB statistics and VAR tests show that, for small samples of carbon price data, the improved Monte Carlo algorithm achieves higher prediction accuracy and describes and characterizes the carbon price more faithfully. It can therefore provide a scientific basis for decision-making in carbon futures trading and carbon price determination, as well as a basis for quantitative description and prediction in other economic fields.
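    A minimal sketch of the Wiener-process (geometric Brownian motion) simulation underlying such a prediction is shown below; the drift and volatility values are illustrative assumptions, not parameters estimated from carbon market data.

      # Sketch of Monte Carlo price paths driven by a Wiener process (geometric
      # Brownian motion). Drift and volatility values are illustrative only.
      import numpy as np

      rng = np.random.default_rng(4)

      def simulate_paths(s0=25.0, mu=0.05, sigma=0.30, days=250, n_paths=10_000):
          dt = 1.0 / 250.0                      # one trading day in years
          # dS/S = mu*dt + sigma*dW  ->  exact log-normal update per step
          increments = ((mu - 0.5 * sigma ** 2) * dt
                        + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, days)))
          return s0 * np.exp(np.cumsum(increments, axis=1))

      final = simulate_paths()[:, -1]
      print(f"mean simulated price after one year: {final.mean():.2f}")
      print(f"5th-95th percentile range: "
            f"{np.percentile(final, 5):.2f} - {np.percentile(final, 95):.2f}")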

  8. Parameter likelihood of intrinsic ellipticity correlations

    CERN Document Server

    Capranico, Federica; Schaefer, Bjoern Malte

    2012-01-01

    The subject of this paper is the statistical properties of ellipticity alignments between galaxies evoked by their coupled angular momenta. Starting from physical angular momentum models, we bridge the gap towards ellipticity correlations, ellipticity spectra and derived quantities such as aperture moments, comparing the intrinsic signals with those generated by gravitational lensing, with the projected galaxy sample of EUCLID in mind. We investigate the dependence of intrinsic ellipticity correlations on cosmological parameters and show that intrinsic ellipticity correlations give rise to non-Gaussian likelihoods as a result of nonlinear functional dependencies. Comparing intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude of their contaminating effect on the estimation of cosmological parameters and find that biases on dark energy parameters are very small in an angular-momentum based model in contrast to the linear alignment model commonly used. Finally, we quantify whether intrins...

  9. Groups, information theory, and Einstein's likelihood principle

    Science.gov (United States)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  10. Dishonestly increasing the likelihood of winning

    Directory of Open Access Journals (Sweden)

    Shaul Shalvi

    2012-05-01

    Full Text Available People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.

  11. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  12. Monte Carlo analysis of the long-lived fission product neutron capture rates at the Transmutation by Adiabatic Resonance Crossing (TARC) experiment

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)

    2013-01-15

    Highlights: ► TARC experiment benchmark capture rate results. ► Use of updated databases, including ADSLib. ► Self-shielding effects in reactor design for transmutation. ► Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron captures of 99Tc, 127I and 129I. The analysis of the results shows how shielding effects due to the resonances of these nuclides at epithermal energies strongly affect their transmutation rates. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimate.

  13. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    Science.gov (United States)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables which could impact the performance of the motors during the ignition transient and thirty-eight variables which could impact the performance of the motors during steady-state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  14. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    Science.gov (United States)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables which could impact the performance of the motors during the ignition transient and thirty-eight variables which could impact the performance of the motors during steady-state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  15. Diffuse X-ray scattering from 4,4'-dimethoxybenzil, C16H14O4: analysis via automatic refinement of a Monte Carlo model.

    Science.gov (United States)

    Welberry, T R; Heerdegen, A P

    2003-12-01

    A recently developed method for fitting a Monte Carlo computer-simulation model to observed single-crystal diffuse X-ray scattering has been used to study the diffuse scattering in 4,4'-dimethoxybenzil, C16H14O4. A model involving only nine parameters, consisting of seven intermolecular force constants and two intramolecular torsional force constants, was refined to give an agreement factor, ωR = [Σw(ΔI)²/ΣwI²(obs)]^(1/2), of 18.1% for 118 918 data points in two sections of data. The model was purely thermal in nature. The analysis has shown that the most prominent features of the diffraction patterns, viz. diffuse streaks that occur normal to the [101] direction, are due to longitudinal displacement correlations along chains of molecules extending in this direction. These displacements are transmitted from molecule to molecule via contacts involving pairs of hydrogen bonds between adjacent methoxy groups. In contrast to an earlier study of benzil itself, it was not found to be possible to determine, with any degree of certainty, the torsional force constants for rotations about the single bonds in the molecule. It is supposed that this result may be due to the limited data available in the present study.

  16. Temporal relation between the ADC and DC potential responses to transient focal ischemia in the rat: a Markov chain Monte Carlo simulation analysis.

    Science.gov (United States)

    King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G

    2003-06-01

    Markov chain Monte Carlo simulation was used in a reanalysis of the longitudinal data obtained by Harris et al. (J Cereb Blood Flow Metab 20:28-36) in a study of the direct current (DC) potential and apparent diffusion coefficient (ADC) responses to focal ischemia. The main purpose was to provide a formal analysis of the temporal relationship between the ADC and DC responses, to explore the possible involvement of a common latent (driving) process. A Bayesian nonlinear hierarchical random coefficients model was adopted. DC and ADC transition parameter posterior probability distributions were generated using three parallel Markov chains created using the Metropolis algorithm. Particular attention was paid to the within-subject differences between the DC and ADC time course characteristics. The results show that the DC response is biphasic, whereas the ADC exhibits monophasic behavior, and that the two DC components are each distinguishable from the ADC response in their time dependencies. The DC and ADC changes are not, therefore, driven by a common latent process. This work demonstrates a general analytical approach to the multivariate, longitudinal data-processing problem that commonly arises in stroke and other biomedical research.

  17. Analysis of Investment Project Risk Based on the Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    王霞; 张本涛; 马庆

    2011-01-01

    In this paper, economic net present value (NPV) is used as the evaluation index to measure the risk of investment projects. The probability distributions of the influencing factors are determined, and a stochastic risk-evaluation model based on the triangular distribution is established. The model is simulated with the Monte Carlo method and implemented in MATLAB, yielding the frequency histogram and cumulative frequency curve of the project's NPV. Statistical analysis of the simulation results gives the expected NPV and the risk rate, providing a theoretical basis for evaluating the risk of investment projects.
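    A sketch of the same Monte Carlo NPV calculation, written in Python rather than the MATLAB implementation described in the abstract, is shown below; the triangular (low, mode, high) assumptions and cash-flow figures are illustrative only.

      # Sketch of Monte Carlo NPV risk analysis with triangular input distributions.
      import numpy as np

      rng = np.random.default_rng(5)
      n_trials = 50_000
      discount_rate = 0.08
      n_years = 5

      # Triangular (low, mode, high) assumptions for the uncertain inputs.
      investment = rng.triangular(900, 1000, 1200, n_trials)
      annual_revenue = rng.triangular(250, 300, 380, size=(n_trials, n_years))
      annual_cost = rng.triangular(60, 80, 110, size=(n_trials, n_years))

      years = np.arange(1, n_years + 1)
      discount = (1.0 + discount_rate) ** -years
      npv = (annual_revenue - annual_cost) @ discount - investment

      print(f"expected NPV: {npv.mean():.1f}")
      print(f"risk rate P(NPV < 0): {(npv < 0).mean():.3f}")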

  18. Analysis of intervention strategies for inhalation exposure to polycyclic aromatic hydrocarbons and associated lung cancer risk based on a Monte Carlo population exposure assessment model.

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    Full Text Available It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.

  19. Analysis of near-infrared spectroscopy and indocyanine green dye dilution with Monte Carlo simulation of light propagation in the adult brain

    Science.gov (United States)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuela; Niederer, Peter F.

    2006-07-01

    Near-infrared spectroscopy (NIRS) combined with indocyanine green (ICG) dilution is applied externally on the head to determine the cerebral hemodynamics of neurointensive care patients. We applied Monte Carlo simulation to analyse a number of problems associated with this method. First, the contamination of the optical density (OD) signal due to the extracerebral tissue was assessed. Second, the measured OD signal depends essentially on the relative blood content (with respect to its absorption) in the various transilluminated tissues. To take this into account, we weighted the calculated densities of the photon distribution under baseline conditions within the different tissues with the changes and aberrations of the relative blood volumes that are typically observed under healthy and pathologic conditions. Third, in the case of NIRS ICG dye dilution, an ICG bolus replaces part of the blood such that a transient change of absorption in the brain tissues occurs that can be recorded in the OD signal. Our results indicate that, for an exchange fraction of Δ=30% of the relative blood volume within the intracerebral tissue, the OD signal is determined 64 to 74% by the gray matter and at most 8 to 16% by the white matter for a distance of d=4.5 cm.

  20. Radiation shielding analysis and optimisation for the Mineral-PET Kimberlite sorting facility using the Monte Carlo calculation code MCNPX

    OpenAIRE

    2014-01-01

    M.Phil. (Energy Studies) This dissertation details the radiation shielding analysis and optimization performed to design a shield for the mineral-PET (Positron Emission Tomography) facility. PET is a nuclear imaging technique currently used in diagnostic medicine. The technique is based on the detection of 511 keV coincident and co-linear photons produced from the annihilation of a positron (produced by a positron emitter) and a nearby electron. The technique is now being considered for th...

  1. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Full Text Available Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for the meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) Genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) The calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
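    For a single 2 x 3 genotype table, the count-level permutation can be pictured as redistributing the pooled genotype counts between cases and controls by nested hypergeometric draws; the sketch below is an illustrative reconstruction of that idea, not the authors' MCPerm R package, and the counts are made up.

      # Count-level permutation in the spirit of MCPerm: redistribute the pooled
      # genotype counts between cases and controls with nested hypergeometric
      # draws, recompute the test statistic, and repeat many times.
      import numpy as np
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(6)

      cases = np.array([30, 50, 20])      # genotype counts AA, Aa, aa in cases
      controls = np.array([45, 40, 15])   # genotype counts in controls
      totals = cases + controls
      n_cases = cases.sum()

      observed_stat = chi2_contingency(np.vstack([cases, controls]))[0]

      def permuted_case_counts():
          """Draw case genotype counts under the null (multivariate hypergeometric)."""
          remaining_total = totals.sum()
          remaining_cases = n_cases
          draw = np.zeros_like(totals)
          for g in range(len(totals) - 1):          # two draws for three genotypes
              draw[g] = rng.hypergeometric(totals[g],
                                           remaining_total - totals[g],
                                           remaining_cases)
              remaining_total -= totals[g]
              remaining_cases -= draw[g]
          draw[-1] = remaining_cases
          return draw

      n_perm = 10_000
      hits = 0
      for _ in range(n_perm):
          perm_cases = permuted_case_counts()
          perm_controls = totals - perm_cases
          stat = chi2_contingency(np.vstack([perm_cases, perm_controls]))[0]
          hits += stat >= observed_stat
      print(f"permutation P-value: {(hits + 1) / (n_perm + 1):.4f}")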

  2. The Multi-Mission Maximum Likelihood framework (3ML)

    CERN Document Server

    Vianello, Giacomo; Younk, Patrick; Tibaldo, Luigi; Burgess, James M; Ayala, Hugo; Harding, Patrick; Hui, Michelle; Omodei, Nicola; Zhou, Hao

    2015-01-01

    Astrophysical sources are now observed by many different instruments at different wavelengths, from radio to high-energy gamma-rays, with an unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Each instrument has its own data format, software and analysis procedure, which are difficult to combine. It is for example very challenging to perform a broadband fit of the energy spectrum of the source. The Multi-Mission Maximum Likelihood framework (3ML) aims to solve this issue, providing a common framework which allows for a coherent modeling of sources using all the available data, independent of their origin. At the same time, thanks to its architecture based on plug-ins, 3ML uses the existing official software of each instrument for the corresponding data in a way which is transparent to the user. 3ML is based on the likelihood formalism, in which a model summarizing our knowledge about a particular region of the sky is convolved with the instrument...

  3. LikeDM: likelihood calculator of dark matter detection

    CERN Document Server

    Huang, Xiaoyuan; Yuan, Qiang

    2016-01-01

    With the large progress in searching for dark matter (DM) particles by indirect and direct methods, we develop a numerical tool that enables fast calculation of the likelihood of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and the underground direct detection experiments. The purpose of this tool, LikeDM (likelihood calculator of dark matter detection), is to bridge the particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from charged cosmic rays and gamma rays; the direct detection part will be implemented in the next version. This manual de...

  4. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

    Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.

  5. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    Science.gov (United States)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to a patch of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation result shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  6. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  7. Carlos Vesga Duarte

    OpenAIRE

    Pedro Medina Avendaño

    1981-01-01

    Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a Santanderean without contaminations, who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.

  8. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    Science.gov (United States)

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  9. Monte Carlo Estimation of the Conditional Rasch Model. Research Report 94-09.

    Science.gov (United States)

    Akkermans, Wies M. W.

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning constants have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Markov chain Monte Carlo. As an example, the method is applied to the conditional estimation of both item and…

  10. Analysis of dpa rates in the HFIR reactor vessel using a hybrid Monte Carlo/deterministic method

    Energy Technology Data Exchange (ETDEWEB)

    Blakeman, Edward [Retired

    2016-01-01

    The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the axial extent of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.

  11. Analysis of dpa Rates in the HFIR Reactor Vessel using a Hybrid Monte Carlo/Deterministic Method*

    Directory of Open Access Journals (Sweden)

    Risner J.M.

    2016-01-01

    Full Text Available The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the height of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.

  12. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Directory of Open Access Journals (Sweden)

    Raj Kumar

    2012-12-01

    Full Text Available In this paper, we have illustrated the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures: classical as well as Bayesian. The quasi Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). The R functions are developed to study the statistical properties, model validation and comparison tools of the model and the output analysis of MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.

  13. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    Science.gov (United States)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-10-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
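    A minimal sketch of the ABC rejection idea, assuming a toy Poisson counting model in place of the atom-maser detection record, is given below: draw a candidate parameter from the prior, simulate data, and keep the candidate only if a chosen summary statistic is close to the observed one.

      # Sketch of approximate Bayesian computation (ABC) by rejection sampling.
      # The toy Poisson-count model stands in for the atom-maser measurement record.
      import numpy as np

      rng = np.random.default_rng(7)

      true_rate = 3.0
      observed = rng.poisson(true_rate, size=200)     # "measured" detection counts
      observed_summary = observed.mean()

      def abc_rejection(n_draws=100_000, tolerance=0.1):
          accepted = []
          for _ in range(n_draws):
              candidate = rng.uniform(0.0, 10.0)      # flat prior on the rate
              simulated = rng.poisson(candidate, size=observed.size)
              if abs(simulated.mean() - observed_summary) < tolerance:
                  accepted.append(candidate)
          return np.array(accepted)

      posterior = abc_rejection()
      print(f"accepted {posterior.size} draws; posterior mean {posterior.mean():.2f}")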

  14. Monte Carlo design of a system for the detection of explosive materials and analysis of the dose; Diseno Monte Carlo de un sistema para la deteccion de materiales explosivos y analisis de la dosis

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez A, P. L.; Medina C, D.; Rodriguez I, J. L.; Salas L, M. A.; Vega C, H. R., E-mail: pabloyae_2@hotmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The problems associated with insecurity and terrorism have driven the design of systems for detecting nuclear materials, drugs and explosives that are installed at roads, ports and airports. Organic materials are composed of C, H, O and N; explosive materials are manufactured similarly and can be distinguished by the concentration of these elements. Their elemental composition, particularly the concentration of hydrogen and oxygen, allows distinguishing them from other organic substances. When these materials are irradiated with neutrons, (n, γ) nuclear reactions are produced, where the emitted photons are prompt gamma rays whose energy is characteristic of each element and whose abundance allows estimating their concentration. The aim of this study was to design, using Monte Carlo methods, a system with a neutron source, gamma-ray detector and moderator able to distinguish the presence of RDX and urea. The moderators considered in the design were paraffin, light water, polyethylene and graphite; the detectors were HPGe and NaI(Tl). The design that showed the best performance combined the light water moderator and the HPGe detector, with a 241AmBe source. For this design, the values of ambient dose equivalent around the system were calculated. (Author)

  15. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed.
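    The low-probability issue mentioned above is the classic motivation for importance sampling. A hedged sketch, assuming a standard normal failure model and a shifted normal proposal, compares crude Monte Carlo with an importance-sampling estimate of a small tail probability.

      # Estimating the small failure probability P(X > 4), X ~ N(0,1), with crude
      # Monte Carlo versus importance sampling from a shifted proposal density.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      threshold = 4.0
      n = 100_000

      # Crude Monte Carlo: almost no samples ever hit the rare event.
      x = rng.standard_normal(n)
      crude = (x > threshold).mean()

      # Importance sampling: sample from N(threshold, 1) and reweight each sample
      # by the likelihood ratio of the target to the proposal density.
      y = rng.normal(threshold, 1.0, n)
      weights = norm.pdf(y) / norm.pdf(y, loc=threshold)
      importance = np.mean((y > threshold) * weights)

      print(f"exact      : {norm.sf(threshold):.3e}")
      print(f"crude MC   : {crude:.3e}")
      print(f"importance : {importance:.3e}")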

  16. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
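    In the Gaussian case noted above, the transfer entropy from Y to X reduces to half the log-ratio of residual variances of a restricted autoregression (X's own past only) and a full one (X's and Y's past), i.e. a log-likelihood ratio per observation. The sketch below estimates this for a toy pair of coupled AR(1) processes; the coupling coefficients are illustrative.

      # Gaussian-case transfer entropy Y -> X as half the log-ratio of residual
      # variances from restricted and full autoregressions (ordinary least squares).
      import numpy as np

      rng = np.random.default_rng(9)
      n = 5000
      x = np.zeros(n)
      y = np.zeros(n)
      for t in range(1, n):                       # toy coupled AR(1) processes
          y[t] = 0.6 * y[t - 1] + rng.standard_normal()
          x[t] = 0.4 * x[t - 1] + 0.5 * y[t - 1] + rng.standard_normal()

      def residual_variance(target, regressors):
          """OLS residual variance of target on the given regressors plus intercept."""
          design = np.column_stack([np.ones(len(target))] + regressors)
          beta, *_ = np.linalg.lstsq(design, target, rcond=None)
          return np.var(target - design @ beta)

      target = x[1:]
      restricted = residual_variance(target, [x[:-1]])          # X past only
      full = residual_variance(target, [x[:-1], y[:-1]])        # X and Y past

      transfer_entropy = 0.5 * np.log(restricted / full)        # in nats
      print(f"estimated transfer entropy Y -> X: {transfer_entropy:.3f} nats")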

  17. Educational differences in likelihood of attributing breast symptoms to cancer

    DEFF Research Database (Denmark)

    Marcu, Afrodita; Lyratzopoulos, Georgios; Black, Georgia;

    2016-01-01

    BACKGROUND: Stage at diagnosis of breast cancer varies by socio-economic status (SES), with lower SES associated with poorer survival. We investigated associations between SES (indexed by education) and the likelihood of attributing breast symptoms to breast cancer. METHOD: We conducted an online survey with 961 women (47-92 years) with variable educational levels. Two vignettes depicted familiar and unfamiliar breast changes (axillary lump and nipple rash). Without making breast cancer explicit, women were asked 'What do you think this […..] could be?' After the attribution question, women were asked to indicate their level of agreement with a cancer avoidance statement ('I would not want to know if I have breast cancer'). RESULTS: Women were more likely to mention cancer as a possible cause of an axillary lump (64%) compared with nipple rash (30%). In multivariable analysis, low and mid...

  18. Maximum likelihood identification of aircraft stability and control derivatives

    Science.gov (United States)

    Mehra, R. K.; Stepner, D. E.; Tyler, J. S.

    1974-01-01

    Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.

  19. Transfer Entropy as a Log-likelihood Ratio

    CERN Document Server

    Barnett, Lionel

    2012-01-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the neurosciences, econometrics and the analysis of complex system dynamics in diverse fields. We show that for a class of parametrised partial Markov models for jointly stochastic processes in discrete time, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. The result generalises the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression. In the general case, an asymptotic χ2 distribution for the model transfer entropy estimator is established.

  20. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    Science.gov (United States)

    2014-05-01

    Average Likelihood Methods for Code Division Multiple Access (CDMA). Final technical report, May 2014, covering October 2011 - October 2013; approved for public release, distribution... [Report documentation page omitted.] ...precision parameter from the joint probability of the code matrix. For a fully loaded CDMA signal, the average likelihood depends exclusively on feature

  1. Likelihood ratios: Clinical application in day-to-day practice

    Directory of Open Access Journals (Sweden)

    Parikh Rajul

    2009-01-01

    Full Text Available In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.

  2. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    Directory of Open Access Journals (Sweden)

    Steven M Carr

    Full Text Available Phylogenomic analysis of highly-resolved intraspecific phylogenies obtained from complete mitochondrial DNA genomes has had great success in clarifying relationships within and among human populations, but has found limited application in other wild species. Analytical challenges include assessment of random versus non-random phylogeographic distributions, and quantification of differences in tree topologies among populations. Harp Seals (Pagophilus groenlandicus Erxleben, 1777) have a biogeographic distribution based on four discrete trans-Atlantic breeding and whelping populations located on "fast ice" attached to land in the White Sea, Greenland Sea, the Labrador Ice Front, and the Southern Gulf of St Lawrence. This East-to-West distribution provides a set of a priori phylogeographic hypotheses. Outstanding biogeographic questions include the degree of genetic distinctiveness among these populations, in particular between the Greenland Sea and White Sea grounds. We obtained complete coding-region DNA sequences (15,825 bp) for 53 seals. Each seal has a unique mtDNA genome sequence, and the sequences differ by 6 ~ 107 substitutions. Six major clades / groups are detectable by parsimony, neighbor-joining, and Bayesian methods, all of which are found in breeding populations on either side of the Atlantic. The species coalescent is at 180 KYA; the most recent clade, which accounts for 66% of the diversity, reflects an expansion during the mid-Wisconsinan glaciation 40~60 KYA. FST is significant only between the White Sea and Greenland Sea or Ice Front populations. Hierarchical AMOVA of 2-, 3-, or 4-island models identifies small but significant ΦSC among populations within groups, but not among groups. A novel Monte-Carlo simulation indicates that the observed distribution of individuals within breeding populations over the phylogenetic tree requires significantly fewer dispersal events than random expectation, consistent with island or a priori East to West 2- or 3

  3. Physiologically-based toxicokinetic model for cadmium using Markov-chain Monte Carlo analysis of concentrations in blood, urine, and kidney cortex from living kidney donors.

    Science.gov (United States)

    Fransson, Martin Niclas; Barregard, Lars; Sallsten, Gerd; Akerstrom, Magnus; Johanson, Gunnar

    2014-10-01

    The health effects of low-level chronic exposure to cadmium are increasingly recognized. To improve the risk assessment, it is essential to know the relation between cadmium intake, body burden, and biomarker levels of cadmium. We combined a physiologically-based toxicokinetic (PBTK) model for cadmium with a data set from healthy kidney donors to re-estimate the model parameters and to test the effects of gender and serum ferritin on systemic uptake. Cadmium levels in whole blood, blood plasma, kidney cortex, and urinary excretion from 82 men and women were used to calculate posterior distributions for model parameters using Markov-chain Monte Carlo analysis. For never- and ever-smokers combined, the daily systemic uptake was estimated at 0.0063 μg cadmium/kg body weight in men, with 35% increased uptake in women and a daily uptake of 1.2 μg for each pack-year per calendar year of smoking. The rate of urinary excretion from cadmium accumulated in the kidney was estimated at 0.000042 day(-1), corresponding to a half-life of 45 years in the kidneys. We have provided an improved model of cadmium kinetics. As the new parameter estimates derive from a single study with measurements in several compartments in each individual, these new estimates are likely to be more accurate than the previous ones where the data used originated from unrelated data sets. The estimated urinary excretion of cadmium accumulated in the kidneys was much lower than previous estimates, neglecting this finding may result in a marked under-prediction of the true kidney burden.

  4. An analysis on changes in reservoir fluid based on numerical simulation of neutron log using a Monte Carlo N-Particle algorithm

    Science.gov (United States)

    Ku, B.; Nam, M.

    2012-12-01

    Neutron logging has been widely used to estimate neutron porosity for evaluating formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) algorithm is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone and dolomite formations of various porosities. Since the salinities (concentrations of NaCl) of the borehole fluid and formation water are important factors for estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in the brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration and simulate the corresponding neutron logging data. As gas saturation decreases, the measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas has a slightly smaller hydrogen content than brine. Meanwhile, an increase in CO2 saturation due to CO2 injection reduces the measured neutron porosity, giving a clue for estimating the CO2 saturation, since the injected CO2 substitutes for the brine. A further analysis of this reduction yields a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help in monitoring not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea

  5. Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  6. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian

    2016-01-01

    The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  7. Carlos Vesga Duarte

    Directory of Open Access Journals (Sweden)

    Pedro Medina Avendaño

    1981-01-01

    Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a Santander native, untainted, who loved the gleam of weapons and was dazzled by the flash of perfect phrases.

  8. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
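
    As a companion to the introductory example mentioned in the outline, a minimal Python sketch of the pi estimate (the lecture material itself is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(42)

      # Estimate pi from the fraction of uniform points in the unit square
      # that fall inside the quarter circle of radius 1.
      n = 1_000_000
      x, y = rng.random(n), rng.random(n)
      hits = (x**2 + y**2) <= 1.0
      pi_hat = 4.0 * hits.mean()
      # One-sigma statistical uncertainty from the binomial hit count.
      sigma = 4.0 * np.sqrt(hits.mean() * (1.0 - hits.mean()) / n)
      print(f"pi ~ {pi_hat:.4f} +/- {sigma:.4f}")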

  9. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...

  10. Who Writes Carlos Bulosan?

    Directory of Open Access Journals (Sweden)

    Charlie Samuya Veric

    2001-12-01

    Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.

  11. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  12. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two-state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  13. The structure of the muscle protein complex 4Ca{sup 2+}.troponin C.troponin I: A Monte Carlo modeling analysis of small-angle X-ray and neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Olah, G.A.; Trewhella, J.

    1995-11-01

    Analysis of scattering data based on a Monte Carlo integration method was used to obtain a low-resolution model of the 4Ca2+.troponin C.troponin I complex. This modeling method allows rapid testing of plausible structures where the best-fit model can be ascertained by a comparison between model structure scattering profiles and measured scattering data. In the best-fit model, troponin I appears as a spiral structure that wraps about 4Ca2+.troponin C, which adopts an extended dumbbell conformation similar to that observed in the crystal structures of troponin C. The Monte Carlo modeling method can be applied to other biological systems in which detailed structural information is lacking.

  14. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high...

  15. Dairy goat kids fed liquid diets in substitution of goat milk and slaughtered at different ages: an economic viability analysis using Monte Carlo techniques.

    Science.gov (United States)

    Knupp, L S; Veloso, C M; Marcondes, M I; Silveira, T S; Silva, A L; Souza, N O; Knupp, S N R; Cannas, A

    2016-03-01

    The aim of this study was to analyze the economic viability of producing dairy goat kids fed liquid diets in substitution of goat milk and slaughtered at two different ages. Forty-eight male newborn Saanen and Alpine kids were selected and allocated to four groups using a completely randomized factorial design: goat milk (GM), cow milk (CM), commercial milk replacer (CMR) and fermented cow colostrum (FC). Each group was then divided into two groups: slaughter at 60 and 90 days of age. The animals received Tifton hay and concentrate ad libitum. The values of total costs of liquid and solid feed plus labor, income and average gross margin were calculated. The data were then analyzed using Monte Carlo techniques in the @Risk 5.5 software, with 1,000 iterations of the model over the variables being studied. The kids fed GM and CMR generated negative profitability values when slaughtered at 60 days (US$ -16.4 and US$ -2.17, respectively) and also at 90 days (US$ -30.8 and US$ -0.18, respectively). The risk analysis showed that there is a 98% probability that profitability would be negative when GM is used. In this regard, CM and FC presented low risk when the kids were slaughtered at 60 days (8.5% and 21.2%, respectively) and an even lower risk when animals were slaughtered at 90 days (5.2% and 3.8%, respectively). The kids fed CM and slaughtered at 90 days presented the highest average gross income (US$ 67.88) and also average gross margin (US$ 18.43/animal). For the 60-day rearing regime to be economically viable, the CMR cost should not exceed 11.47% of the animal-selling price. This implies that the replacer cannot cost more than US$ 0.39 and 0.43/kg for the 60- and 90-day feeding regimes, respectively. The sensitivity analysis showed that the variables with the greatest impact on the final model's results were animal selling price, liquid diet cost, final weight at slaughter and labor. In conclusion, the production of male dairy goat kids can be economically
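
    A hedged sketch of the same kind of Monte Carlo risk analysis in open-source Python rather than @Risk; the distributions and cost values below are invented for illustration and are not the study's data:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical input distributions (illustrative only, not the paper's data).
      n_iter = 1_000
      selling_price = rng.normal(70.0, 5.0, n_iter)              # US$ per animal
      liquid_diet   = rng.triangular(20.0, 25.0, 35.0, n_iter)   # US$ per animal
      solid_feed    = rng.triangular(8.0, 10.0, 14.0, n_iter)
      labor         = rng.normal(12.0, 1.5, n_iter)

      gross_margin = selling_price - (liquid_diet + solid_feed + labor)

      print("mean gross margin   : %.2f US$/animal" % gross_margin.mean())
      print("P(negative margin)  : %.1f %%" % (100.0 * (gross_margin < 0).mean()))
      print("5th-95th percentile : %.2f to %.2f" %
            tuple(np.percentile(gross_margin, [5, 95])))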

  16. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the empirical likelihood inference for the jump-diffusion model. We construct the confidence intervals based on the empirical likelihood for the infinitesimal moments in the jump-diffusion models. They are better than the confidence intervals which are based on the asymptotic normality of point estimates.
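
    For readers unfamiliar with the construction, the sketch below shows an empirical likelihood confidence interval for a simple mean, not the infinitesimal moments of a jump-diffusion treated in the paper; the Lagrange-multiplier equation is the standard one from empirical likelihood for the mean, and the data are simulated.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import chi2

      rng = np.random.default_rng(4)
      x = rng.exponential(2.0, 200)          # simulated sample; EL needs no parametric model

      def el_log_ratio(mu, x):
          # Empirical likelihood for the mean: maximise prod(n*p_i) subject to
          # sum p_i = 1 and sum p_i*(x_i - mu) = 0.  The solution is
          # p_i = 1/(n*(1 + lam*(x_i - mu))) with lam solving the equation below.
          d = x - mu
          if d.min() >= 0 or d.max() <= 0:
              return np.inf                  # mu outside the convex hull of the data
          eps = 1e-8
          lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
          lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
          return 2.0 * np.sum(np.log1p(lam * d))       # -2 log R(mu)

      # 95% EL confidence interval: all mu with -2 log R(mu) <= chi-square cutoff.
      cutoff = chi2.ppf(0.95, df=1)
      grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
      inside = [mu for mu in grid if el_log_ratio(mu, x) <= cutoff]
      print("sample mean     :", round(x.mean(), 3))
      print("95% EL interval : (%.3f, %.3f)" % (min(inside), max(inside)))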

  17. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  18. Planck intermediate results: XVI. Profile likelihoods for cosmological parameters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.;

    2014-01-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...

  19. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.;

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  20. A New Speaker Verification Method with GlobalSpeaker Model and Likelihood Score Normalization

    Institute of Scientific and Technical Information of China (English)

    张怡颖; 朱小燕; 张钹

    2000-01-01

    In this paper a new text-independent speaker verification method, GSMSV, is proposed based on likelihood score normalization. In this novel method a global speaker model is established to represent the universal features of speech and normalize the likelihood score. Statistical analysis demonstrates that this normalization method can remove common factors of speech and bring the differences between speakers into prominence. As a result, the equal error rate is decreased significantly, the verification procedure is accelerated, and system adaptability to speaking speed is improved.
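
    The abstract does not give the GSMSV model details, but the normalization idea can be sketched with Gaussian stand-ins for the speaker and global models: the score is the average log-likelihood ratio of the claimed-speaker model to the global model, so factors common to all speech cancel. Everything below is an illustrative assumption, not the paper's system.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(0)

      # Hypothetical Gaussian stand-ins for the claimed-speaker model and a global
      # (background) model; a real system would train these on speech features.
      speaker_model = multivariate_normal(mean=[1.0, 0.5], cov=np.eye(2))
      global_model  = multivariate_normal(mean=[0.0, 0.0], cov=2.0 * np.eye(2))

      def normalized_score(frames):
          # Average log-likelihood ratio over frames: factors common to all speech
          # cancel, leaving the speaker-specific evidence.
          return float(np.mean(speaker_model.logpdf(frames) - global_model.logpdf(frames)))

      genuine  = rng.normal([1.0, 0.5], 1.0, size=(200, 2))   # frames from the claimed speaker
      impostor = rng.normal([-1.0, 0.0], 1.0, size=(200, 2))  # frames from another speaker

      for label, frames in (("genuine", genuine), ("impostor", impostor)):
          s = normalized_score(frames)
          print(f"{label:8s} score = {s:6.2f} ->", "accept" if s > 0.0 else "reject")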

  1. Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems

    Directory of Open Access Journals (Sweden)

    Li Bing

    2014-01-01

    Full Text Available A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  2. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    Science.gov (United States)

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  3. Inferring fixed effects in a mixed linear model from an integrated likelihood

    DEFF Research Database (Denmark)

    Gianola, Daniel; Sorensen, Daniel

    2008-01-01

    A new method for likelihood-based inference of fixed effects in mixed linear models, with variance components treated as nuisance parameters, is presented. The method uses uniform integration of the likelihood; the implementation employs the expectation-maximization (EM) algorithm for elimination of all nuisances, viewing random effects and variance components as missing data. In a simulation of a grazing trial, the procedure was compared with four widely used estimators of fixed effects in mixed models, and found to be competitive. An analysis of body weight in freshwater crayfish was conducted...

  4. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
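
    A numerical sketch of the central step, assuming Gaussian densities purely for illustration: the innovation is the sum of two independent terms with known densities, so its density (and hence the likelihood of an observed innovation) is obtained by convolving them.

      import numpy as np
      from scipy.stats import norm

      # Innovation = theoretical innovation (matched model) + model-distance term.
      # With both densities known, the density of their sum is their convolution.
      x  = np.linspace(-20.0, 20.0, 4001)
      dx = x[1] - x[0]

      pdf_theoretical = norm.pdf(x, loc=0.0, scale=1.5)   # assumed matched-model innovation
      pdf_distance    = norm.pdf(x, loc=2.0, scale=1.0)   # assumed model-distance term

      pdf_innovation = np.convolve(pdf_theoretical, pdf_distance, mode="same") * dx

      # Likelihood of an observed innovation under the mismatched model:
      observed = 3.1
      likelihood = np.interp(observed, x, pdf_innovation)
      print("numerical p(innovation = %.1f) = %.4f" % (observed, likelihood))
      # Sanity check: for Gaussians the convolution is N(2.0, sqrt(1.5**2 + 1**2)).
      print("closed-form value              = %.4f" %
            norm.pdf(observed, loc=2.0, scale=np.hypot(1.5, 1.0)))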

  5. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  6. Ing. Carlos M. Ochoa

    OpenAIRE

    Montesinos A, Fernando; Facultad de Farmacia y Bioquímica de la Universidad Nacional Mayor de San Marcos, Lima, Perú.

    2014-01-01

    This figure is an extraordinary researcher who dedicated many years to the study of the potato, a tuber of the genus Solanum, and to the countless species and varieties that cover the territories of Peru, Bolivia and Chile, and possibly other countries. Originally wild, today, as a result of scientific progress, it constitutes a food of great value throughout the world, from every point of view. Carlos M. Ochoa was born in Cusco; he moved to Bolivia, where he carried out his initial studies,...

  7. Incorporating Astrophysical Systematics into a Generalized Likelihood for Cosmology with Type Ia Supernovae

    CERN Document Server

    Ponder, Kara A; Zentner, Andrew R

    2015-01-01

    Traditional cosmological inference using Type Ia supernovae (SNeIa) has used stretch- and color-corrected fits of SN Ia light curves and assumed a fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. However, the recent literature has presented mounting evidence that SNeIa have different width-color-corrected luminosities, depending on the environment in which they are found. Such correlations suggest the existence of multiple populations of SNeIa and a non-Gaussian distribution of relative luminosity. We introduce a framework that provides a generalized full-likelihood approach to accommodate multiple populations with unknown population parameters. To illustrate this framework we use a simple model of two populations with a relative shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. We generate mock SN Ia data sets from an underlying two-population model and use a Markov Chain Monte Carlo algorithm ...

  8. Likelihood Approach to the First Dark Matter Results from XENON100

    CERN Document Server

    Aprile, E; Arneodo, F; Askin, A; Baudis, L; Behrens, A; Bokeloh, K; Brown, E; Bruch, T; Cardoso, J M R; Choi, B; Cline, D; Duchovni, E; Fattori, S; Ferella, A D; Giboni, K -L; Gross, E; Kish, A; Lam, C W; Lamblin, J; Lang, R F; Lim, K E; Lindemann, S; Lindner, M; Lopes, J A M; Undagoitia, T Marrodán; Mei, Y; Fernandez, A J Melgarejo; Ni, K; Oberlack, U; Orrigo, S E A; Pantic, E; Plante, G; Ribeiro, A C C; Santorelli, R; Santos, J M F dos; Schumann, M; Shagin, P; Teymourian, A; Thers, D; Tziaferi, E; Vitells, O; Wang, H; Weber, M; Weinheimer, C

    2011-01-01

    Many experiments that aim at the direct detection of Dark Matter are able to distinguish a dominant background from the expected feeble signals, based on some measured discrimination parameter. We develop a statistical model for such experiments using the Profile Likelihood ratio as a test statistic in a frequentist approach. We take data from calibrations as control measurements for signal and background, and the method allows the inclusion of data from Monte Carlo simulations. Systematic detector uncertainties, such as uncertainties in the energy scale, as well as astrophysical uncertainties, are included in the model. The statistical model can be used to either set an exclusion limit or to make a discovery claim, and the results are derived with a proper treatment of statistical and systematic uncertainties. We apply the model to the first data release of the XENON100 experiment, which allows additional information to be extracted from the data and stronger limits to be placed on the spin-independent elastic WIMP-nuc...

  9. Extracting Angular Observables without a Likelihood and Applications to Rare Decays

    CERN Document Server

    Beaujean, Frederik; Serra, Nicola; van Dyk, Danny

    2015-01-01

    Our goal is to obtain a complete set of angular observables arising in a generic multi-body process. We show how this can be achieved without the need to carry out a likelihood fit of the angular distribution to the measured events. Instead, we apply the method of moments that relies both on the orthogonality of angular functions and the estimation of integrals by Monte Carlo techniques. The big advantage of this method is that the joint distribution of all observables can be easily extracted, even for very few events. The method of moments is shown to be robust against mismodeling of the angular distribution. Our main result is an explicit algorithm that accounts for systematic uncertainties from detector-resolution and acceptance effects. Finally, we present the necessary process-dependent formulae needed for direct application of the method to several rare decays of interest.
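
    A toy illustration of the method of moments, using Legendre polynomials as the orthogonal angular basis; the actual observables, acceptance corrections and decays treated in the paper are not reproduced. Orthogonality turns each coefficient into a simple sample average over events, with no likelihood fit required.

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(3)

      # Toy angular distribution in c = cos(theta):
      #   p(c) = a0*P0(c) + a1*P1(c) + a2*P2(c),  a0 = 1/2 fixed by normalisation.
      a_true = np.array([0.5, 0.15, 0.10])        # illustrative coefficients (p stays positive)
      pdf = lambda c: legendre.legval(c, a_true)

      # Generate events by rejection sampling (these stand in for measured events).
      c = rng.uniform(-1.0, 1.0, 200_000)
      keep = rng.uniform(0.0, pdf(1.0), c.size) < pdf(c)
      events = c[keep]

      # Method of moments: orthogonality of the P_l gives a_l = (2l+1)/2 * E[P_l(c)],
      # estimated by the sample average of P_l over the events.
      a_hat = [(2 * l + 1) / 2.0 * legendre.legval(events, np.eye(3)[l]).mean()
               for l in range(3)]
      print("true coefficients     :", a_true)
      print("moment-based estimates:", np.round(a_hat, 4))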

  10. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2004-01-01

    Full Text Available Abstract A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.

  11. Keno-Nr a Monte Carlo Code Simulating the Californium -252-SOURCE-DRIVEN Noise Analysis Experimental Method for Determining Subcriticality

    Science.gov (United States)

    Ficaro, Edward Patrick

    The ^{252}Cf-source-driven noise analysis (CSDNA) requires the measurement of the cross power spectral density (CPSD) G_23(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G_12(ω) and G_13(ω) between the neutron detectors and an ionization chamber 1 containing ^{252}Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G*_12(ω)G_13(ω) / [G_11(ω)G_23(ω)], using a point kinetic model formulation which is independent of the detectors' properties and a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite ^6Li-glass-plastic scintillators at large subcriticalities for the tank of

  12. Monte Carlo simulation analysis of ceftobiprole, dalbavancin, daptomycin, tigecycline, linezolid and vancomycin pharmacodynamics against intensive care unit-isolated methicillin-resistant Staphylococcus aureus.

    Science.gov (United States)

    Salem, Ahmed Hamed; Zhanel, George G; Ibrahim, Safaa A; Noreddin, Ayman M

    2014-06-01

    The aim of the present study was to compare the potential of ceftobiprole, dalbavancin, daptomycin, tigecycline, linezolid and vancomycin to achieve their requisite pharmacokinetic/pharmacodynamic (PK/PD) targets against methicillin-resistant Staphylococcus aureus isolates collected from intensive care unit (ICU) settings. Monte Carlo simulations were carried out to simulate the PK/PD indices of the investigated antimicrobials. The probability of target attainment (PTA) was estimated at minimum inhibitory concentration values ranging from 0.03 to 32 μg/mL to define the PK/PD susceptibility breakpoints. The cumulative fraction of response (CFR) was computed using minimum inhibitory concentration data from the Canadian National Intensive Care Unit study. Analysis of the simulation results suggested the breakpoints of 4 μg/mL for ceftobiprole (500 mg/2 h t.i.d.), 0.25 μg/mL for dalbavancin (1000 mg), 0.12 μg/mL for daptomycin (4 mg/kg q.d. and 6 mg/kg q.d.) and tigecycline (50 mg b.i.d.), and 2 μg/mL for linezolid (600 mg b.i.d.) and vancomycin (1 g b.i.d. and 1.5 g b.i.d.). The estimated CFR were 100, 100, 70.6, 88.8, 96.5, 82.4, 89.4, and 98.3% for ceftobiprole, dalbavancin, daptomycin (4 mg/kg/day), daptomycin (6 mg/kg/day), linezolid, tigecycline, vancomycin (1 g b.i.d.) and vancomycin (1.5 g b.i.d.), respectively. In conclusion, ceftobiprole and dalbavancin have the highest probability of achieving their requisite PK/PD targets against methicillin-resistant Staphylococcus aureus isolated from ICU settings. The susceptibility predictions suggested a reduction of the vancomycin breakpoint to 1 μg/mL.
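
    A generic, hedged sketch of a probability-of-target-attainment calculation; the pharmacokinetic model, parameter values and fAUC/MIC target below are invented for illustration and are not those used for the six drugs in the study.

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical population PK model (illustrative values only):
      # daily dose 1000 mg, clearance log-normally distributed, fAUC = f * dose / CL.
      n_subjects = 10_000
      dose_mg    = 1000.0
      clearance  = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n_subjects)  # L/h
      f_unbound  = 0.6
      fauc       = f_unbound * dose_mg / clearance                              # mg*h/L

      target = 100.0   # assumed fAUC/MIC target associated with efficacy
      mics   = [0.25, 0.5, 1.0, 2.0, 4.0]                                       # mg/L

      for mic in mics:
          pta = np.mean(fauc / mic >= target)   # probability of target attainment
          print(f"MIC = {mic:>4} mg/L  ->  PTA = {100 * pta:5.1f} %")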

  13. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Representing Single-Subject Multivariate Time-Series

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    1998-01-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), w

  14. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    Science.gov (United States)

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  15. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  16. Analysis of the radiation shielding of the bunker of a 230MeV proton cyclotron therapy facility; comparison of analytical and Monte Carlo techniques.

    Science.gov (United States)

    Sunil, C

    2016-04-01

    The neutron ambient dose equivalent outside the radiation shield of a proton therapy cyclotron vault is estimated using the unshielded dose equivalent rates and the attenuation lengths obtained from the literature and by simulations carried out with the FLUKA Monte Carlo radiation transport code. The source terms derived from the literature and that obtained from the FLUKA calculations differ by a factor of 2-3, while the attenuation lengths obtained from the literature differ by 20-40%. The instantaneous dose equivalent rates outside the shield differ by a few orders of magnitude, not only in comparison with the Monte Carlo simulation results, but also with the results obtained by line of sight attenuation calculations with the different parameters obtained from the literature. The attenuation of neutrons caused by the presence of bulk iron, such as magnet yokes is expected to reduce the dose equivalent by as much as a couple of orders of magnitude outside the shield walls.
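
    The line-of-sight estimate referred to above combines an unshielded source term with inverse-square spreading and exponential attenuation in the shield. A sketch with purely illustrative numbers (the source term and attenuation length are assumptions, not values from the study):

      import numpy as np

      # Line-of-sight shielding estimate:
      #   H = H0 * exp(-t / lambda_att) / r**2
      # with H0 the unshielded source term at 1 m, t the shield thickness along the
      # line of sight and lambda_att the attenuation length.
      H0         = 5.0e3     # uSv*m^2/h, assumed unshielded neutron source term
      r          = 6.0       # m, distance from the loss point to the scoring position
      t          = 2.0       # m, concrete shield thickness along the line of sight
      lambda_att = 0.45      # m, assumed attenuation length in ordinary concrete

      H = H0 * np.exp(-t / lambda_att) / r**2
      print(f"estimated dose equivalent rate outside the shield: {H:.3e} uSv/h")

      # A 20-40% spread in the attenuation length (as quoted for literature values)
      # translates into a large spread of the shielded rate:
      for lam in (0.45 * 0.8, 0.45, 0.45 * 1.2):
          print(f"  lambda = {lam:.2f} m  ->  H = {H0 * np.exp(-t / lam) / r**2:.3e} uSv/h")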

  17. Application of the Monte Carlo method to the analysis of measurement geometries for the calibration of a HP Ge detector in an environmental radioactivity laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, Jose [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: jrodenas@iqn.upv.es; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Ortiz, Josefina [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)

    2007-10-15

    A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code based on the Monte Carlo method has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution, covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials - water, charcoal, sand - containing the source have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.

  18. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.

  19. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.

  20. to the Scientific Opinion on a quantitative pathway analysis of the likelihood of Tilletia indica M. introduction into EU with importation of US wheat of EFSA Panel on Plant Health (PLH)

    DEFF Research Database (Denmark)

    Baker, R.; Candresse, T.; Dormannsné Simon, E.

    2010-01-01

    The EFSA Plant Health Unit asked the Assessment Methodology Unit to recalculate and update the Quantitative Pathway Analysis of the exposure of the EU wheat production area with Tilletia indica M. teliospores provided by USDA APHIS. Simulations were computed, for importations of US wheat into EU ...... Panel on Plant Health (PLH) (EFSA 2010...

  1. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio-LR = p(E|H{sub 1})/p(E|H{sub 2}). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive values determined before (RI{sub b}) and after (RI{sub a}) the annealing process, in the form of dRI = log{sub 10}|RI{sub a} - RI{sub b}|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
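
    A simplified sketch of LR-based use-type classification with a single transformed variable and kernel density estimates standing in for p(E|H1) and p(E|H2). The model in the paper is multivariate and separates between- and within-object variability, which this toy version does not; the training values below are simulated for illustration.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(5)

      # Hypothetical training data: one transformed feature (e.g. dRI = log10|RIa - RIb|)
      # measured on reference glass objects of known use-type.
      windows    = rng.normal(-4.0, 0.3, 300)
      containers = rng.normal(-3.2, 0.4, 300)

      # Kernel density estimates play the role of p(E | H1) and p(E | H2).
      p_window    = gaussian_kde(windows)
      p_container = gaussian_kde(containers)

      def likelihood_ratio(e):
          # LR = p(E | window) / p(E | container)
          return p_window(e)[0] / p_container(e)[0]

      for e in (-4.1, -3.6, -3.0):
          lr = likelihood_ratio(e)
          verdict = "supports window origin" if lr > 1 else "supports container origin"
          print(f"dRI = {e:5.1f}  LR = {lr:8.2f}  ({verdict})")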

  2. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  3. Empirical Likelihood Ratio Confidence Interval for Positively Associated Series

    Institute of Scientific and Technical Information of China (English)

    Jun-jian Zhang

    2007-01-01

    Empirical likelihood is discussed by using the blockwise technique for strongly stationary, positively associated random variables. Our results show that the statistic is asymptotically chi-square distributed and that the corresponding confidence interval can be constructed.

  4. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2012-01-01

    We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters ...... likelihood estimators. To this end we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0>1/2, we prove that the limit distribution of (ß...

  5. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model-based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X1, ..., XT given the initial values X-n, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II...

  6. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model-based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X1, ..., XT given the initial values X-n, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II...

  7. Maximum Likelihood Factor Structure of the Family Environment Scale.

    Science.gov (United States)

    Fowler, Patrick C.

    1981-01-01

    Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)

  8. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
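
    A minimal sketch of maximum likelihood fitting of a two-component univariate normal mixture via the EM algorithm, on simulated data rather than the stock market and rubber price series analysed in the paper:

      import numpy as np

      rng = np.random.default_rng(11)

      # Simulated stand-in for two regimes in an economic series (not the paper's data):
      # a two-component univariate normal mixture.
      data = np.concatenate([rng.normal(-0.02, 0.01, 400), rng.normal(0.03, 0.02, 600)])

      # EM algorithm for maximum likelihood estimation of the mixture parameters.
      w, mu, sd = np.array([0.5, 0.5]), np.array([-0.05, 0.05]), np.array([0.05, 0.05])
      for _ in range(200):
          # E-step: posterior probability of each component for every observation.
          dens = np.array([wk / (sk * np.sqrt(2 * np.pi)) *
                           np.exp(-0.5 * ((data - mk) / sk) ** 2)
                           for wk, mk, sk in zip(w, mu, sd)])
          resp = dens / dens.sum(axis=0)
          # M-step: update weights, means and standard deviations.
          nk = resp.sum(axis=1)
          w  = nk / data.size
          mu = (resp * data).sum(axis=1) / nk
          sd = np.sqrt((resp * (data - mu[:, None]) ** 2).sum(axis=1) / nk)

      print("weights:", np.round(w, 3), " means:", np.round(mu, 4), " sds:", np.round(sd, 4))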

  9. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...

  10. A notion of graph likelihood and an infinite monkey theorem

    CERN Document Server

    Banerji, Christopher R S; Severini, Simone

    2013-01-01

    We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.

  11. A notion of graph likelihood and an infinite monkey theorem

    Science.gov (United States)

    Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone

    2014-01-01

    We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.

  12. Conditional likelihood inference in generalized linear mixed models.

    OpenAIRE

    Sartori, Nicola; Severini, T.A.

    2002-01-01

    Consider a generalized linear model with a canonical link function, containing both fixed and random effects. In this paper, we consider inference about the fixed effects based on a conditional likelihood function. It is shown that this conditional likelihood function is valid for any distribution of the random effects and, hence, the resulting inferences about the fixed effects are insensitive to misspecification of the random effects distribution. Inferences based on the conditional likelih...

  13. Sieve likelihood ratio inference on general parameter space

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiaotong; SHI Jian

    2005-01-01

    In this paper, a theory on sieve likelihood ratio inference on general parameter spaces (including infinite-dimensional ones) is studied. Under fairly general regularity conditions, the sieve log-likelihood ratio statistic is proved to be asymptotically χ2 distributed, which can be viewed as a generalization of the well-known Wilks' theorem. As an example, a semiparametric partial linear model is investigated.

  14. Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization

    OpenAIRE

    Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue

    2010-01-01

    This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...

  15. Employee Likelihood of Purchasing Health Insurance using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2012-01-01

    Full Text Available Many believe that employees' health and economic factors play an important role in their likelihood of purchasing health insurance. However, the decision to purchase health insurance is not a trivial matter, as many risk factors influence it. This paper presents a decision model using a fuzzy inference system to identify the likelihood of purchasing health insurance based on selected risk factors. To build the likelihoods, data from one hundred and twenty-eight employees at five organizations under the purview of Kota Star Municipality, Malaysia, were collected as input data. Three risk factors were considered as inputs to the system: age, salary and risk of illness. The likelihood of purchasing health insurance was the output of the system, defined in three linguistic terms: Low, Medium and High. Input and output data were governed by the Mamdani inference rules of the system to decide the best linguistic term. The linguistic terms describing the likelihood of purchasing health insurance were identified by the system based on the three risk factors. It was found that twenty-seven employees were likely to purchase health insurance at the Low level and fifty-six employees at the High level. The use of a fuzzy inference system offers a possible new approach to identifying prospective health insurance purchasers.

  16. Parametric likelihood inference for interval censored competing risks data.

    Science.gov (United States)

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  17. Empirical Likelihood-Based ANOVA for Trimmed Means

    Science.gov (United States)

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063

  18. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
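
    A hedged sketch of the Poisson maximum likelihood fit for an acute (single-dose) design, where the expected yield per cell reduces to αd + βd²; the data and parameter values are simulated for illustration and are not from the report.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)

      # Simulated acute-exposure data: dicentric counts in n cells per dose point,
      # expected yield per cell lambda(d) = alpha*d + beta*d**2 (illustrative values).
      alpha_true, beta_true = 0.03, 0.06
      doses  = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])        # Gy
      ncells = np.full(doses.size, 2000)
      counts = rng.poisson(ncells * (alpha_true * doses + beta_true * doses**2))

      def neg_log_likelihood(params):
          alpha, beta = params
          lam = ncells * (alpha * doses + beta * doses**2)
          # Poisson log-likelihood up to an additive constant: sum(y*log(lam) - lam)
          return -(counts * np.log(lam) - lam).sum()

      fit = minimize(neg_log_likelihood, x0=[0.01, 0.01], method="L-BFGS-B",
                     bounds=[(1e-6, None), (1e-6, None)])
      print("alpha, beta (MLE):", np.round(fit.x, 4), " true:", (alpha_true, beta_true))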

  19. Evidence for extra radiation? Profile likelihood versus Bayesian posterior

    CERN Document Server

    Hamann, Jan

    2011-01-01

    A number of recent analyses of cosmological data have reported hints for the presence of extra radiation beyond the standard model expectation. In order to test the robustness of these claims under different methods of constructing parameter constraints, we perform a Bayesian posterior-based and a likelihood profile-based analysis of current data. We confirm the presence of a slight discrepancy between posterior- and profile-based constraints, with the marginalised posterior preferring higher values of the effective number of neutrino species N_eff. This can be traced back to a volume effect occurring during the marginalisation process, and we demonstrate that the effect is related to the fact that cosmic microwave background (CMB) data constrain N_eff only indirectly via the redshift of matter-radiation equality. Once present CMB data are combined with external information about, e.g., the Hubble parameter, the difference between the methods becomes small compared to the uncertainty of N_eff. We conclude tha...
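
    The volume effect can be reproduced in a toy two-parameter problem that is unrelated to the actual cosmological likelihood: the data constrain the parameter of interest directly, but the allowed range of a nuisance parameter grows with it, so marginalisation pulls the peak away from the profile-likelihood maximum.

      import numpy as np

      # Toy 2-parameter problem illustrating the "volume effect":
      # the data constrain x directly, L(x, y) = exp(-0.5*(x - 1)**2), but the nuisance
      # parameter y is only allowed in 0 <= y < x, so the prior volume grows with x.
      x = np.linspace(0.05, 4.0, 800)
      y = np.linspace(0.0, 4.0, 800)
      X, Y = np.meshgrid(x, y, indexing="ij")

      L = np.exp(-0.5 * (X - 1.0) ** 2) * (Y < X)   # likelihood, zero outside the allowed band

      profile  = L.max(axis=1)                      # maximise over the nuisance parameter
      marginal = L.sum(axis=1)                      # integrate over it (flat prior)

      print("profile-likelihood peak    :", round(x[np.argmax(profile)], 3))   # ~1.0
      print("marginalised-posterior peak:", round(x[np.argmax(marginal)], 3))  # ~1.62, volume pull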

  20. The Likelihood of Experiencing Relative Poverty over the Life Course.

    Science.gov (United States)

    Rank, Mark R; Hirschl, Thomas A

    2015-01-01

    Research on poverty in the United States has largely consisted of examining cross-sectional levels of absolute poverty. In this analysis, we focus on understanding relative poverty within a life course context. Specifically, we analyze the likelihood of individuals falling below the 20th percentile and the 10th percentile of the income distribution between the ages of 25 and 60. A series of life tables are constructed using the nationally representative Panel Study of Income Dynamics data set. This includes panel data from 1968 through 2011. Results indicate that the prevalence of relative poverty is quite high. Consequently, between the ages of 25 and 60, 61.8 percent of the population will experience a year below the 20th percentile, and 42.1 percent will experience a year below the 10th percentile. Those more likely to experience these levels of poverty include individuals who are younger, nonwhite, female, not married, have 12 years or less of education, or have a work disability.