WorldWideScience

Sample records for level statistical simulation

  1. Quantum level statistics of pseudointegrable billiards

    International Nuclear Information System (INIS)

    Cheon, T.; Cohen, T.D.

    1989-01-01

    We study the spectral statistics of two-dimensional pseudointegrable billiards. These systems are classically nonergodic, but nonseparable. It is found that such systems possess quantum spectra that are closely simulated by the Gaussian orthogonal ensemble. We discuss the implications of these results for the conjectured relation between classical chaos and quantum level statistics, and emphasize the importance of the semiclassical nature of any such relation.
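
    Below is a minimal sketch, assuming NumPy, of the standard comparison behind such statements: sample a Gaussian orthogonal ensemble (GOE) matrix, keep the bulk of its spectrum, and compare the normalized nearest-neighbor spacings with the Wigner surmise. It illustrates the reference statistics, not the billiard calculation of the paper.

```python
import numpy as np

def goe_spacings(n=1000, seed=0):
    """Nearest-neighbor level spacings of an n x n GOE matrix."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(n, n))
    h = (a + a.T) / 2.0                      # real symmetric (GOE) matrix
    levels = np.sort(np.linalg.eigvalsh(h))
    bulk = levels[n // 4: 3 * n // 4]        # keep the bulk to avoid edge effects
    s = np.diff(bulk)
    return s / s.mean()                      # crude unfolding: mean spacing -> 1

s = goe_spacings()
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
# Wigner surmise for the GOE: p(s) = (pi/2) s exp(-pi s^2 / 4)
wigner = 0.5 * np.pi * centers * np.exp(-np.pi * centers ** 2 / 4)
print(np.round(np.c_[centers, hist, wigner], 3)[:5])
```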

  2. Electron Energy Level Statistics in Graphene Quantum Dots

    NARCIS (Netherlands)

    De Raedt, H.; Katsnelson, M. I.

    2008-01-01

    Motivated by recent experimental observations of size quantization of electron energy levels in graphene quantum dots [7], we investigate the level statistics in the simplest tight-binding model for different dot shapes by computer simulation. The results are in reasonable agreement with the

  3. Managing Macroeconomic Risks by Using Statistical Simulation

    Directory of Open Access Journals (Sweden)

    Merkaš Zvonko

    2017-06-01

    The paper analyzes the possibilities of using statistical simulation in the measurement of macroeconomic risks. At the global level, macroeconomic risks have increased significantly due to excessive imbalances. Using analytical statistical methods and Monte Carlo simulation, the authors interpret the collected data sets and compare and analyze them in order to mitigate potential risks. The empirical part of the study is a qualitative case study that uses statistical methods and Monte Carlo simulation for managing macroeconomic risks, which is the central theme of this work. Statistical simulation is necessary because the system for which a model must be specified is too complex for an analytical approach. The objective of the paper is to highlight the need to consider significant macroeconomic risks, particularly the number of unemployed in society, the movement of gross domestic product and the country's credit rating, and to use data previously processed by statistical methods, through statistical simulation, to analyze the existing model of managing macroeconomic risks and to suggest elements for developing a management model that keeps the probability and consequences of emerging macroeconomic risks as low as possible. The stochastic characteristics of the system, defined by random variables as input values with given probability distributions, require a large number of iterations over which the model output is recorded and the mathematical expectations are calculated. The paper expounds the basic procedures and techniques of discrete statistical simulation applied to systems that can be characterized by a number of events representing a set of circumstances that have caused a change in the system's state, and the possibility of its application in the assessment of macroeconomic risks. The method has no
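
    The following is a minimal sketch, assuming NumPy, of the kind of Monte Carlo risk simulation the abstract describes: macroeconomic inputs drawn from probability distributions, many iterations, and mathematical expectations computed from the recorded outputs. The distributions and the composite risk indicator are hypothetical placeholders, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 100_000                                   # large number of iterations

# Hypothetical input distributions (illustrative only, not the authors' data)
unemployment = rng.normal(9.0, 1.5, n_iter)        # unemployment rate, %
gdp_growth   = rng.normal(2.0, 1.8, n_iter)        # GDP growth, %
rating_shock = rng.binomial(1, 0.10, n_iter)       # 1 = credit-rating downgrade

# A toy composite risk indicator combining the three inputs
risk = 0.5 * unemployment - 0.8 * gdp_growth + 3.0 * rating_shock

print("E[risk]    :", risk.mean())                 # mathematical expectation
print("P(risk > 6):", (risk > 6).mean())           # probability of a high-risk outcome
```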

  4. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
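
    A minimal sketch, assuming NumPy/SciPy, of the permutation procedure referred to above: the global null is assessed by permuting sample units and recomputing the largest test statistic. The conditioning on histogram spread proposed by the authors is not implemented here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=(40, 200))          # 40 samples, 200 test variables
y = np.repeat([0, 1], 20)               # two groups of 20 samples

def max_abs_t(data, labels):
    """Largest |t| statistic over all variables for a given labeling."""
    t, _ = stats.ttest_ind(data[labels == 0], data[labels == 1])
    return np.max(np.abs(t))

obs = max_abs_t(x, y)
null = np.array([max_abs_t(x, rng.permutation(y)) for _ in range(2000)])
p_global = (1 + np.sum(null >= obs)) / (1 + null.size)   # permutation p-value, global null
print(p_global)
```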

  5. Cognitive Transfer Outcomes for a Simulation-Based Introductory Statistics Curriculum

    Science.gov (United States)

    Backman, Matthew D.; Delmas, Robert C.; Garfield, Joan

    2017-01-01

    Cognitive transfer is the ability to apply learned skills and knowledge to new applications and contexts. This investigation evaluates cognitive transfer outcomes for a tertiary-level introductory statistics course using the CATALST curriculum, which exclusively used simulation-based methods to develop foundations of statistical inference. A…

  6. Topics in computer simulations of statistical systems

    International Nuclear Information System (INIS)

    Salvador, R.S.

    1987-01-01

    Several computer simulations studying a variety of topics in statistical mechanics and lattice gauge theories are performed. The first study describes a Monte Carlo simulation performed on Ising systems defined on Sierpinski carpets of dimensions between one and four. The critical coupling and the exponent γ are measured as a function of dimension. The Ising gauge theory in d = 4 − ε, for ε → 0⁺, is then studied by performing a Monte Carlo simulation for the theory defined on fractals. A high-statistics Monte Carlo simulation for the three-dimensional Ising model is presented for lattices of sizes 8³ to 44³. All the data obtained agree completely, within statistical errors, with the forms predicted by finite-size scaling. Finally, a method to estimate numerically the partition function of statistical systems is developed.
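
    A minimal sketch of a standard Metropolis Monte Carlo update for the two-dimensional Ising model, the basic ingredient of the simulations described above; lattice size, temperature and sweep count are illustrative only.

```python
import numpy as np

def metropolis_ising(L=16, T=2.27, sweeps=200, seed=0):
    """Plain Metropolis simulation of the 2D Ising model on an L x L lattice."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb            # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return abs(spins.mean())                     # |magnetization| per spin

print(metropolis_ising())
```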

  7. 7th International Workshop on Statistical Simulation

    CERN Document Server

    Mignani, Stefania; Monari, Paola; Salmaso, Luigi

    2014-01-01

    The Department of Statistical Sciences of the University of Bologna in collaboration with the Department of Management and Engineering of the University of Padova, the Department of Statistical Modelling of Saint Petersburg State University, and INFORMS Simulation Society sponsored the Seventh Workshop on Simulation. This international conference was devoted to statistical techniques in stochastic simulation, data collection, analysis of scientific experiments, and studies representing broad areas of interest. The previous workshops took place in St. Petersburg, Russia in 1994, 1996, 1998, 2001, 2005, and 2009. The Seventh Workshop took place in the Rimini Campus of the University of Bologna, which is in Rimini’s historical center.

  8. Simulating metabolism with statistical thermodynamics.

    Science.gov (United States)

    Cannon, William R

    2014-01-01

    New methods are needed for large-scale modeling of metabolism that predict metabolite levels and characterize the thermodynamics of individual reactions and pathways. Current approaches use either kinetic simulations, which are difficult to extend to large networks of reactions because of the need for rate constants, or flux-based methods, which have a large number of feasible solutions because they are unconstrained by the law of mass action. This report presents an alternative modeling approach based on statistical thermodynamics. The principles of this approach are demonstrated using a simple set of coupled reactions, and the system is then characterized with respect to the changes in energy, entropy, free energy, and entropy production. The physical and biochemical insights that this approach can provide for metabolism are then demonstrated by application to the tricarboxylic acid (TCA) cycle of Escherichia coli. The reaction and pathway thermodynamics are evaluated and predictions are made regarding changes in the concentrations of TCA cycle intermediates due to 10- and 100-fold changes in the ratio of NAD+:NADH concentrations. Finally, the assumptions and caveats regarding the use of statistical thermodynamics to model non-equilibrium reactions are discussed.

  9. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    Science.gov (United States)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells in the lower Mississippi valley, after removing the seasonal cycle, through temporal scaling and spatial statistical analysis. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior that changes with time and location. Hence we can distinguish a potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which is better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can depict the transient dynamics (i.e., fractal non-Gaussian properties) of groundwater level well, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
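
    A minimal sketch of the scaling idea underlying a (global) Hurst exponent estimate, of which the paper's time-scale local Hurst exponent is a local refinement. The series below is synthetic Brownian motion, not groundwater data, so the expected exponent is 0.5.

```python
import numpy as np

rng = np.random.default_rng(3)
level = np.cumsum(rng.normal(size=20000))        # synthetic "water level" series (Brownian, H = 0.5)

lags = np.array([1, 2, 4, 8, 16, 32, 64, 128])
# For (multi)fractional motions, the std of increments scales as lag**H
sds = np.array([np.std(level[lag:] - level[:-lag]) for lag in lags])
H = np.polyfit(np.log(lags), np.log(sds), 1)[0]  # slope of the log-log fit = Hurst exponent
print(f"Estimated Hurst exponent: {H:.2f} (expected ~0.5 for Brownian motion)")
```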

  10. A spatial scan statistic for nonisotropic two-level risk cluster.

    Science.gov (United States)

    Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie

    2012-01-30

    Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic can model a centralized high-risk kernel in the cluster. Because variations in disease risk are anisotropic owing to different social, economic, or transport factors, the real high-risk kernel will not necessarily occupy the central place in the whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which can detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision for two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. The proposed method is illustrated using hand-foot-mouth disease data from Pingdu City, Shandong, China, in May 2009, and compared with the two other methods. In this practical study, the nonisotropic two-level method was the only one able to precisely detect a high-risk area within a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.

  11. Simulation of statistical γ-spectra of highly excited rare earth nuclei

    International Nuclear Information System (INIS)

    Schiller, A.; Munos, G.; Guttormsen, M.; Bergholt, L.; Melby, E.; Rekstad, J.; Siem, S.; Tveter, T.S.

    1997-05-01

    The statistical γ-spectra of highly excited even-even rare earth nuclei are simulated, applying an appropriate level density and strength function for a given nucleus. Hindrance effects due to K-conservation are taken into account. Simulations are compared to experimental data from the ¹⁶³Dy(³He,α)¹⁶²Dy and ¹⁷³Yb(³He,α)¹⁷²Yb reactions. The influence of the K quantum number at higher energies is discussed. 21 refs., 7 figs., 2 tabs

  12. The Communicability of Graphical Alternatives to Tabular Displays of Statistical Simulation Studies

    Science.gov (United States)

    Cook, Alex R.; Teo, Shanice W. L.

    2011-01-01

    Simulation studies are often used to assess the frequency properties and optimality of statistical methods. They are typically reported in tables, which may contain hundreds of figures to be contrasted over multiple dimensions. To assess the degree to which these tables are fit for purpose, we performed a randomised cross-over experiment in which statisticians were asked to extract information from (i) such a table sourced from the literature and (ii) a graphical adaptation designed by the authors, and were timed and assessed for accuracy. We developed hierarchical models accounting for differences between individuals of different experience levels (under- and post-graduate), within experience levels, and between different table-graph pairs. In our experiment, information could be extracted quicker and, for less experienced participants, more accurately from graphical presentations than tabular displays. We also performed a literature review to assess the prevalence of hard-to-interpret design features in tables of simulation studies in three popular statistics journals, finding that many are presented innumerately. We recommend simulation studies be presented in graphical form. PMID:22132184

  13. Statistical Emulator for Expensive Classification Simulators

    Science.gov (United States)

    Ross, Jerret; Samareh, Jamshid A.

    2016-01-01

    Expensive simulators prevent any meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Present-day simulators have become more and more complex, and as a result running a single example on them is very expensive and can take days, weeks, or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to run on the simulator. These criteria allow the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.

  14. Energy-level statistics and time relaxation in quantum systems

    International Nuclear Information System (INIS)

    Gruver, J.L.; Cerdeira, H.A.; Aliaga, J.; Mello, P.A.; Proto, A.N.

    1997-05-01

    We study a quantum-mechanical system, prepared at t = 0 in a model state, that subsequently decays into a sea of other states whose energy levels form a discrete spectrum with given statistical properties. An important quantity is the survival probability P(t), defined as the probability, at time t, of finding the system in the original model state. Our main purpose is to analyze the influence of the discreteness and statistical properties of the spectrum on the behavior of P(t). Since P(t) is itself a statistical quantity, we restrict our attention to its ensemble average ⟨P(t)⟩, which is calculated analytically using random-matrix techniques, within certain approximations discussed in the text. We find, for ⟨P(t)⟩, an exponential decay followed by a revival, governed by the two-point structure of the statistical spectrum, thus giving a nonzero asymptotic value for large t. The analytic result compares well with a number of computer simulations over a time range discussed in the text. (author). 17 refs, 1 fig
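
    A minimal numerical sketch, assuming NumPy, of the quantity studied above: a model basis state is coupled to a GOE "sea" and the survival probability is averaged over realizations. This is a toy ensemble simulation, not the authors' analytical random-matrix calculation, and all parameters are illustrative.

```python
import numpy as np

def survival_probability(times, n=200, coupling=0.3, realizations=50, seed=0):
    """Ensemble-averaged survival probability of a model state coupled to a GOE sea."""
    rng = np.random.default_rng(seed)
    p = np.zeros_like(times)
    for _ in range(realizations):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / np.sqrt(2 * n)                 # GOE "sea" of background states
        h[0, 1:] = h[1:, 0] = coupling * rng.normal(size=n - 1) / np.sqrt(n)
        vals, vecs = np.linalg.eigh(h)
        c = vecs[0, :]                                  # overlaps of the model state (basis state 0)
        for k, t in enumerate(times):
            amp = np.sum(np.abs(c) ** 2 * np.exp(-1j * vals * t))   # survival amplitude
            p[k] += np.abs(amp) ** 2
    return p / realizations

t = np.linspace(0, 50, 200)
print(survival_probability(t)[:5])
```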

  15. Level and width statistics for a decaying chaotic system

    International Nuclear Information System (INIS)

    Mizutori, S.; Zelevinsky, V.G.

    1993-01-01

    The random matrix ensemble of discretized effective non-Hermitian Hamiltonians is used for studying local correlations and fluctuations of energies and widths in a quantum system where intrinsic levels are coupled to the continuum via a common decay channel. With the use of analytical estimates and numerical simulations, generic properties of statistical observables are obtained for the regimes of weak and strong continuum coupling as well as for the transitional region. Typical signals of the transition (width collectivization, disappearance of level repulsion at small spacings and violation of uniformity along the energy axis) are discussed quantitatively. (orig.)

  16. Statistical inference of level densities from resolved resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-08-01

    Level densities are most directly obtained by counting the resonances observed in the resolved resonance range. In practice, however, weak levels are invariably missed in the measurements, so one has to estimate their number and add it to the raw count. The main categories of missing-level estimators are discussed in the present review, viz. (I) ladder methods, including those based on the theory of Hamiltonian matrix ensembles (Dyson-Mehta statistics); (II) methods based on comparison with artificial cross section curves (Monte Carlo simulation, Garrison's autocorrelation method); (III) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The language of mathematical statistics is employed to clarify the basis of, and the relationships between, the various techniques. Recent progress in the treatment of resolution effects, detection thresholds and p-wave admixture is described. (orig.)
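
    A minimal sketch of a category (III) style correction: if reduced neutron widths follow the Porter-Thomas distribution (a chi-squared distribution with one degree of freedom), the fraction of levels missed below a detection threshold can be estimated and the raw count corrected. The threshold and count below are illustrative, not from the review.

```python
import numpy as np
from scipy import stats

# Porter-Thomas: reduced widths x = Gamma / <Gamma> follow a chi-squared distribution
# with one degree of freedom.  A detection threshold x_th then implies a missed fraction.
x_th = 0.05                                    # illustrative threshold (units of the mean width)
missed_fraction = stats.chi2.cdf(x_th, df=1)

observed = 150                                 # illustrative raw resonance count
corrected = observed / (1 - missed_fraction)   # estimate of the true number of levels
print(f"Missed fraction below threshold: {missed_fraction:.3f}")
print(f"Corrected level count: {corrected:.0f}")
```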

  17. Stochastic simulations for the time evolution of systems which obey generalized statistics: fractional exclusion statistics and Gentile's statistics

    International Nuclear Information System (INIS)

    Nemnes, G A; Anghel, D V

    2010-01-01

    We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems in non-equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size

  18. IGESS: a statistical approach to integrating individual-level genotype data and summary statistics in genome-wide association studies.

    Science.gov (United States)

    Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben

    2017-09-15

    Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power for identifying these variants with small effects. However, it is often the case that a research group can only get approval for access to individual-level genotype data with a limited sample size (e.g. a few hundred or thousand). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available, and the sample sizes associated with these summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increase the statistical power of identifying risk variants and improve the accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over methods that take either individual-level data or summary statistics data as input. We applied IGESS to perform an integrative analysis of Crohn's disease data from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS . zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  19. A statistical-dynamical downscaling procedure for global climate simulations

    International Nuclear Information System (INIS)

    Frey-Buness, A.; Heimann, D.; Sausen, R.; Schumann, U.

    1994-01-01

    A statistical-dynamical downscaling procedure for global climate simulations is described. The procedure is based on the assumption that any regional climate is associated with a specific frequency distribution of classified large-scale weather situations. The frequency distributions are derived from multi-year episodes of low-resolution global climate simulations. Highly resolved regional distributions of wind and temperature are calculated with a regional model for each class of large-scale weather situation. They are statistically evaluated by weighting them with the corresponding climate-specific frequency. As an example, the procedure is applied to the Alpine region for a global climate simulation of the present climate. (orig.)
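
    A minimal sketch of the weighting step of such a statistical-dynamical procedure: regional fields computed for each large-scale weather class are combined using the class frequencies from the global simulation. All frequencies and fields below are hypothetical.

```python
import numpy as np

# Hypothetical class frequencies from a coarse global simulation (must sum to 1)
freq = np.array([0.25, 0.40, 0.20, 0.15])

# Hypothetical regional temperature fields (3 x 3 grids), one per large-scale weather class,
# as produced by the regional model
regional_T = np.stack([
    np.full((3, 3), 268.0),
    np.full((3, 3), 272.0),
    np.full((3, 3), 275.0),
    np.full((3, 3), 280.0),
])

# Climate-specific regional field = frequency-weighted mean over weather classes
downscaled_T = np.tensordot(freq, regional_T, axes=1)
print(downscaled_T)
```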

  20. Employment of Lithuanian Statistical Data Into Tax-Benefit Micro-Simulation Models

    Directory of Open Access Journals (Sweden)

    Viginta Ivaškaitė-Tamošiūnė

    2013-01-01

    In this study, we aim to assess the "best fit" of the existing Lithuanian micro-datasets for constructing a national micro-simulation model. Specifically, we compare and evaluate the potential of two representative (state-level) micro-data surveys in terms of their potential to simulate Lithuanian (direct) taxes, social contributions and social benefits. Both selected datasets contain rich information on the socio-economic and demographic conditions of the country: the Household Budget Survey (HBS) for the years 2004 and 2005 and the European Community Statistics on Income and Living Conditions (EU-SILC) in Lithuania for the year 2005. The selected databases offer the most comprehensive range of income and other socio-demographic attributes needed for simulation of tax and contribution payers/amounts and benefit recipients/amounts. The evaluation of the datasets' capacity to simulate these measures is done by a comparative statistical analysis. Among the comparative categories are definitions (of households, incomes), survey collection modes, the level of aggregation of various variables, demographic and income variables and corresponding numbers (amounts). The comparative analysis of the HBS and EU-SILC datasets shows that, despite embedded differences and shortcomings regarding the simulation capacities of both surveys, these datasets contain valuable and sufficient information for the purpose of simulating Lithuanian tax-benefit policies. In general, the conclusion can be drawn that the HBS offers better possibilities for simulating the Lithuanian tax-benefit system. This dataset contains more detailed national income categories (i.e. recipients of maternity/paternity insurance, diverse pensions, etc.), information on which is not available in the EU-SILC. The latter dataset does not contain national policy system specific components, but offers information on income aggregates, such as old-age pensions, social exclusion benefits, etc.

  1. Statistics of high-level scene context.

    Science.gov (United States)

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as to influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag-of-words level, where scenes are described by the list of objects contained within them; and the structural level, where the spatial distribution of and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag-of-words classifier had performance similar to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics

  2. The semi-empirical low-level background statistics

    International Nuclear Information System (INIS)

    Tran Manh Toan; Nguyen Trieu Tu

    1992-01-01

    A semi-empirical low-level background statistics model is proposed. It can be applied to evaluate the sensitivity of low-background systems and to analyse the statistical error and the 'Rejection' and 'Accordance' criteria for the processing of low-level experimental data. (author). 5 refs, 1 fig

  3. Evaluation of clustering statistics with N-body simulations

    International Nuclear Information System (INIS)

    Quinn, T.R.

    1986-01-01

    Two series of N-body simulations are used to determine the effectiveness of various clustering statistics in revealing initial conditions from evolved models. All the simulations contained 16384 particles and were integrated with the PPPM code. One series is a family of models with power at only one wavelength; the family contains five models with the wavelength of the power separated by factors of √2. The second series is a family of all equal-power combinations of two wavelengths taken from the first series. The clustering statistics examined are the two-point correlation function, the multiplicity function, the nearest neighbor distribution, the void probability distribution, the distribution of counts in cells, and the peculiar velocity distribution. It is found that the covariance function, the nearest neighbor distribution, and the void probability distribution are relatively insensitive to the initial conditions. The distribution of counts in cells shows a little more sensitivity, but the multiplicity function is the best of the statistics considered for revealing the initial conditions.
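
    A minimal sketch, assuming NumPy, of one of the statistics listed above, the distribution of counts in cells, here computed for a synthetic unclustered particle catalogue (for which the variance of the counts should be close to their mean).

```python
import numpy as np

rng = np.random.default_rng(7)
pos = rng.random((16384, 3))                 # synthetic particle positions in a unit box

n_cells = 16                                 # 16^3 cubic cells
idx = np.floor(pos * n_cells).astype(int)
flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), (n_cells,) * 3)
counts = np.bincount(flat, minlength=n_cells ** 3)

# For an unclustered (Poisson) field the variance of counts ~ the mean count
print("mean count per cell:", counts.mean())
print("variance of counts :", counts.var())
```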

  4. Effects of statistical models and items difficulties on making trait-level inferences: A simulation study

    Directory of Open Access Journals (Sweden)

    Nelson Hauck Filho

    2014-12-01

    Researchers dealing with the task of estimating the locations of individuals on continuous latent variables may rely on several statistical models described in the literature. However, weighing the costs and benefits of using one specific model over alternative models depends on empirical information that is not always clearly available. Therefore, the aim of this simulation study was to compare the performance of seven popular statistical models in providing adequate latent trait estimates when item difficulties are targeted at the sample mean or at the tails of the latent trait distribution. Results suggested an overall tendency of models to provide more accurate estimates of true latent scores when using items targeted at the sample mean of the latent trait distribution. The Rating Scale Model, the Graded Response Model, and Weighted Least Squares Mean- and Variance-adjusted Confirmatory Factor Analysis yielded the most reliable latent trait estimates, even when applied to items inadequate for the sample distribution of the latent variable. These findings have important implications concerning some popular methodological practices in Psychology and related areas.

  5. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  6. Molecular-Level Simulations of the Turbulent Taylor-Green Flow

    Science.gov (United States)

    Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; Plimpton, S. J.; Torczynski, J. R.; Papadakis, G.

    2017-11-01

    The Direct Simulation Monte Carlo (DSMC) method, a statistical, molecular-level technique that provides accurate solutions to the Boltzmann equation, is applied to the turbulent Taylor-Green vortex flow. The goal of this work is to investigate whether DSMC can accurately simulate energy decay in a turbulent flow. If so, then simulating turbulent flows at the molecular level can provide new insights because the energy decay can be examined in detail from molecular to macroscopic length scales, thereby directly linking molecular relaxation processes to macroscopic transport processes. The DSMC simulations are performed on half a million cores of Sequoia, the 17 Pflop platform at Lawrence Livermore National Laboratory, and the kinetic-energy dissipation rate and the energy spectrum are computed directly from the molecular velocities. The DSMC simulations are found to reproduce the Kolmogorov -5/3 law and to agree with corresponding Navier-Stokes simulations obtained using a spectral method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  7. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success probably lie in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
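
    A minimal sketch, in Python rather than the BUGS/WinBUGS framework named in the abstract, of a probabilistic Markov cohort model: transition probabilities are drawn from Dirichlet distributions and costs are accumulated over cycles. The states, priors and costs are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sim, n_cycles = 5000, 20
cost_per_cycle = np.array([100.0, 1500.0, 0.0])       # healthy, sick, dead (hypothetical)

total_costs = np.empty(n_sim)
for s in range(n_sim):
    # Probabilistic sensitivity analysis: draw transition rows from Dirichlet priors
    P = np.vstack([
        rng.dirichlet([85, 10, 5]),     # from healthy
        rng.dirichlet([5, 80, 15]),     # from sick
        [0.0, 0.0, 1.0],                # dead is absorbing
    ])
    state = np.array([1.0, 0.0, 0.0])   # whole cohort starts healthy
    cost = 0.0
    for _ in range(n_cycles):
        state = state @ P               # one Markov cycle
        cost += state @ cost_per_cycle
    total_costs[s] = cost

print("mean cost:", total_costs.mean(),
      " 95% interval:", np.percentile(total_costs, [2.5, 97.5]))
```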

  8. Employment of Lithuanian Statistical Data Into Tax-Benefit Micro-Simulation Models

    Directory of Open Access Journals (Sweden)

    Viginta Ivaškaitė-Tamošiūnė

    2012-07-01

    In this study, we aim to assess the "best fit" of the existing Lithuanian micro-datasets for constructing a national micro-simulation model. Specifically, we compare and evaluate the potential of two representative (state-level) micro-data surveys in terms of their potential to simulate Lithuanian (direct) taxes, social contributions and social benefits. Both selected datasets contain rich information on the socio-economic and demographic conditions of the country: the Household Budget Survey (HBS) for the years 2004 and 2005 and the European Community Statistics on Income and Living Conditions (EU-SILC) in Lithuania for the year 2005. The selected databases offer the most comprehensive range of income and other socio-demographic attributes needed for simulation of tax and contribution payers/amounts and benefit recipients/amounts. The evaluation of the datasets' capacity to simulate these measures is done by a comparative statistical analysis. Among the comparative categories are definitions (of households, incomes), survey collection modes, the level of aggregation of various variables, demographic and income variables and corresponding numbers (amounts). The comparative analysis of the HBS and EU-SILC datasets shows that, despite embedded differences and shortcomings regarding the simulation capacities of both surveys, these datasets contain valuable and sufficient information for the purpose of simulating Lithuanian tax-benefit policies. In general, the conclusion can be drawn that the HBS offers better possibilities for simulating the Lithuanian tax-benefit system. This dataset contains more detailed national income categories (i.e. recipients of maternity/paternity insurance, diverse pensions, etc.), information on which is not available in the EU-SILC. The latter dataset does not contain national policy system specific components, but offers information on income aggregates, such as old-age pensions, social exclusion benefits, etc. Additionally

  9. Direct numerical simulation and statistical analysis of turbulent convection in lead-bismuth

    Energy Technology Data Exchange (ETDEWEB)

    Otic, I.; Grotzbach, G. [Forschungszentrum Karlsruhe GmbH, Institut fuer Kern-und Energietechnik (Germany)

    2003-07-01

    Improved turbulent heat flux models are required to develop and analyze the reactor concept of a lead-bismuth cooled Accelerator-Driven System. Because of the specific properties of many liquid metals, we still have no sensors for accurate measurements of the high-frequency velocity fluctuations. The development of the turbulent heat transfer models required in our CFD (computational fluid dynamics) tools therefore also needs data from direct numerical simulations of turbulent flows. We use new simulation results for the model problem of Rayleigh-Benard convection to show some peculiarities of the turbulent natural convection in lead-bismuth (Pr = 0.025). Simulations of this flow at sufficiently large turbulence levels have become feasible only recently, because the flow requires the resolution of very small velocity scales while long-wave structures must be recorded to capture the slow changes in the convective temperature field. The results are analyzed with regard to the principal convection and heat transfer features. They are also used to perform statistical analyses showing that the currently available modeling is indeed not adequate for these fluids. Based on knowledge of the details of the statistical features of turbulence in this convection type, and using the two-point correlation technique, a proposal for an improved statistical turbulence model is developed which is expected to account better for the peculiarities of heat transfer in turbulent convection in low Prandtl number fluids. (authors)

  10. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met
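
    A minimal example of the kind of sampling-based technique such a text introduces: Monte Carlo estimation of an integral together with its standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.random(n)                       # uniform samples on [0, 1]
f = np.exp(-x ** 2)                     # integrand evaluated at the samples

estimate = f.mean()                     # Monte Carlo estimate of the integral over [0, 1]
std_err = f.std(ddof=1) / np.sqrt(n)    # its standard error
print(f"integral ≈ {estimate:.5f} ± {std_err:.5f}")
```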

  11. Parametric Level Statistics in Random Matrix Theory: Exact Solution

    International Nuclear Information System (INIS)

    Kanzieper, E.

    1999-01-01

    Over the past several years, the theory of non-Gaussian random matrix ensembles has experienced sound progress motivated by new ideas in quantum chromodynamics (QCD) and mesoscopic physics. Invariant non-Gaussian random matrix models appear to describe universal features of the low-energy part of the spectrum of the Dirac operator in QCD, and electron level statistics in normal-conducting-superconducting hybrid structures. They also serve as a basis for constructing toy models of the universal spectral statistics expected at the edge of the metal-insulator transition. While conventional spectral statistics has received a detailed study in the context of RMT, quite a bit less is known about parametric level statistics in non-Gaussian random matrix models. In this communication we report an exact solution to the problem of parametric level statistics in unitary invariant, U(N), non-Gaussian ensembles of N x N Hermitian random matrices with either soft or strong level confinement. The solution is formulated within the framework of the orthogonal polynomial technique and is shown to depend on both the unfolded two-point scalar kernel and the level confinement through a double integral transformation which, in turn, provides a constructive tool for the description of parametric level correlations in non-Gaussian RMT. In the case of soft level confinement, the formalism developed is potentially applicable to the study of parametric level statistics in an important class of random matrix models with finite level compressibility expected to describe a disorder-induced metal-insulator transition. In random matrix ensembles with strong level confinement, the solution presented takes a particularly simple form in the thermodynamic limit: in this case, a new intriguing connection relation between the parametric level statistics and the scalar two-point kernel of an unperturbed ensemble is demonstrated to emerge. Extension of the results obtained to higher-order parametric level statistics is

  12. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  13. Statistical 3D damage accumulation model for ion implant simulators

    International Nuclear Information System (INIS)

    Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided

  14. Multiple point statistical simulation using uncertain (soft) conditional data

    Science.gov (United States)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information so that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though it is difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. We then suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where better-informed model parameters are visited before less-informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly on uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
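
    A minimal sketch of the first suggested idea, a preferential simulation path: grid nodes are visited in order of how informative their soft data are, here ranked by the entropy of the local soft probability. The entropy ranking is one plausible criterion for "better informed", not necessarily the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes = 100
# Soft (uncertain) probability of facies "1" at each grid node; 0.5 = least informative
p_soft = rng.random(n_nodes)

# Entropy of the local soft distribution: low entropy = well-informed node
entropy = -(p_soft * np.log(p_soft) + (1 - p_soft) * np.log(1 - p_soft))

# Preferential path: visit the most informed (lowest-entropy) nodes first,
# instead of the fully random path used in standard sequential simulation
path = np.argsort(entropy)
print(path[:10])
```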

  15. Impact of muscular uptake and statistical noise on tumor quantification based on simulated FDG-PET studies

    International Nuclear Information System (INIS)

    Silva-Rodríguez, Jesús; Domínguez-Prado, Inés; Pardo-Montero, Juan; Ruibal, Álvaro

    2017-01-01

    Purpose: The aim of this work is to study the effect of physiological muscular uptake variations and statistical noise on tumor quantification in FDG-PET studies. Methods: We designed a realistic framework based on simulated FDG-PET acquisitions from an anthropomorphic phantom that included different muscular uptake levels and three spherical lung lesions with diameters of 31, 21 and 9 mm. A distribution of muscular uptake levels was obtained from 136 patients referred to our center for whole-body FDG-PET. Simulated FDG-PET acquisitions were obtained using the Simulation System for Emission Tomography (SimSET) Monte Carlo package. Simulated data were reconstructed using an iterative Ordered Subset Expectation Maximization (OSEM) algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Tumor quantification was carried out using estimates of SUVmax, SUV50 and SUVmean from different noise realizations, lung lesions and multiple muscular uptakes. Results: Our analysis provided quantification variability values of 17–22% (SUVmax), 11–19% (SUV50) and 8–10% (SUVmean) when muscular uptake variations and statistical noise were included. Meanwhile, quantification variability due only to statistical noise was 7–8% (SUVmax), 3–7% (SUV50) and 1–2% (SUVmean) for large tumors (>20 mm) and 13% (SUVmax), 16% (SUV50) and 8% (SUVmean) for small tumors (<10 mm), thus showing that the variability in tumor quantification is mainly affected by muscular uptake variations when large enough tumors are considered. In addition, our results showed that quantification variability is strongly dominated by statistical noise when the injected dose decreases below 222 MBq. Conclusions: Our study revealed that muscular uptake variations between patients who are totally relaxed should be considered as an uncertainty source of tumor quantification values. - Highlights: • Distribution of muscular uptake from 136 PET

  16. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    Science.gov (United States)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in the ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
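
    The basic point can be checked with a minimal sketch: for a Poisson count of N simulated diffusion events, the relative statistical error scales as 1/sqrt(N). This is an illustration of that scaling, not the paper's full error expression.

```python
import numpy as np

rng = np.random.default_rng(8)
rate, t_sim, n_runs = 50.0, 10.0, 2000          # hop rate, simulated time, independent KMC runs

counts = rng.poisson(rate * t_sim, size=n_runs) # number of diffusion events per run
rel_err_empirical = counts.std(ddof=1) / counts.mean()
rel_err_theory = 1.0 / np.sqrt(rate * t_sim)    # 1/sqrt(N) scaling for a Poisson process
print(rel_err_empirical, rel_err_theory)
```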

  18. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  19. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from the first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both the per-redshift and the per-Gyr dependence. For the latter we find that it scales as the well-known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. A visualization of the simulation is presented as well. At redshifts z = 0−7 the halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 − 10^14 M⊙/h. [Project 176021, 'Visible and invisible matter in nearby galaxies: theory and observations']
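
    A minimal sketch of fitting the quoted power law (1 + z)^n to a merger-rate curve with SciPy; the data points below are synthetic and purely illustrative, not the simulation's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic mean major-merger rate per Gyr versus redshift (illustrative numbers only)
z = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
rate = 0.05 * (1 + z) ** 2.4 * np.exp(np.random.default_rng(2).normal(0, 0.05, z.size))

def power_law(z, a, n):
    return a * (1 + z) ** n

(a_fit, n_fit), _ = curve_fit(power_law, z, rate, p0=(0.05, 2.0))
print(f"fitted exponent n = {n_fit:.2f}")     # the paper reports n = 2.4
```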

  20. Statistical Analysis of Detailed 3-D CFD LES Simulations with Regard to CCV Modeling

    Directory of Open Access Journals (Sweden)

    Vítek Oldřich

    2016-06-01

    The paper deals with the statistical analysis of a large amount of detailed 3-D CFD data in terms of cycle-to-cycle variations (CCVs). These data were obtained by means of LES calculations of many consecutive cycles. Due to the non-linear nature of the Navier-Stokes equation set, there is relatively significant CCV; hence every cycle is slightly different. This leads to the requirement to perform a statistical analysis based on an ensemble-averaging procedure, which enables a better understanding of CCV in ICE, including its quantification. The data obtained from the averaging procedure provide results at different levels of spatial resolution. The procedure is applied locally, i.e., in every cell of the mesh, so there is detailed CCV information at the local level; such information can be compared with RANS simulations. Next, volume/mass averaging provides information at specific locations, e.g., the gap between the electrodes of a spark plug. Finally, volume/mass averaging over the whole combustion chamber leads to global information which can be compared with experimental data or with the results of system simulation tools (which are based on a 0-D/1-D approach).
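
    A minimal sketch of the local ensemble-averaging step described above: per-cycle fields are stacked and the cell-wise mean and cycle-to-cycle fluctuation are computed. Array shapes and values are hypothetical, not the paper's LES data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cycles, n_cells = 40, 100_000
# Hypothetical per-cell velocity magnitude from each simulated LES cycle
u = rng.normal(loc=10.0, scale=1.5, size=(n_cycles, n_cells))

u_mean = u.mean(axis=0)                     # local ensemble average (cell-wise, over cycles)
u_rms  = u.std(axis=0, ddof=1)              # local cycle-to-cycle fluctuation (CCV)

# Global CCV indicator: volume-averaged relative fluctuation
print("mean relative CCV:", (u_rms / u_mean).mean())
```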

  1. Long-Term Propagation Statistics and Availability Performance Assessment for Simulated Terrestrial Hybrid FSO/RF System

    Directory of Open Access Journals (Sweden)

    Fiser Ondrej

    2011-01-01

    Long-term monthly and annual statistics of the attenuation of electromagnetic waves, obtained from six years of measurements on a free-space optical path 853 meters long with a wavelength of 850 nm and on a precisely parallel radio path with a frequency of 58 GHz, are presented. All the attenuation events observed are systematically classified according to the hydrometeor type causing the particular event. Monthly and yearly propagation statistics for the free-space optical path and the radio path are obtained, and the influence of individual hydrometeors on attenuation is analysed. The obtained propagation statistics are compared with statistics calculated using ITU-R models. The calculated attenuation statistics at both 850 nm and 58 GHz underestimate the measured statistics at higher attenuation levels. The availability performance of a simulated hybrid FSO/RF system is analysed based on the measured data.

  2. Physics-based statistical model and simulation method of RF propagation in urban environments

    Science.gov (United States)

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient close-formed parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  3. Statistical analysis of global surface temperature and sea level using cointegration methods

    DEFF Research Database (Denmark)

    Schmith, Torben; Johansen, Søren; Thejll, Peter

    2012-01-01

    Global sea levels are rising, which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Because present-day physically-based climate models lack a representation of ice-sheet dynamics and are unable to simulate observed sea level trends, semi-empirical models have been applied as an alternative for projecting future sea levels. There are in this, however, potential pitfalls due to the trending nature of the time series. We apply a statistical method called cointegration analysis to observed global sea level and land-ocean surface air temperature, capable of handling such peculiarities. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s...
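
    As a rough illustration of the kind of cointegration test involved (the published analysis uses a full multivariate cointegration/error-correction framework), the following sketch applies an Engle-Granger style test from statsmodels to synthetic series that stand in for the observed temperature and sea-level records; all numbers are made up for the example.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
n = 150  # synthetic annual observations

# Synthetic stand-ins: a stochastic-trend "temperature" series and a "sea level"
# series sharing that trend, so the two are cointegrated by construction.
trend = np.cumsum(rng.normal(0.0, 0.1, n))
temperature = trend + rng.normal(0.0, 0.05, n)
sea_level = 2.5 * trend + rng.normal(0.0, 0.1, n)

# Engle-Granger cointegration test: the null hypothesis is "no cointegration".
t_stat, p_value, crit = coint(sea_level, temperature)
print(f"test statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
print("5% critical value:", crit[1])
```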

  4. Visualizing and Understanding Probability and Statistics: Graphical Simulations Using Excel

    Science.gov (United States)

    Gordon, Sheldon P.; Gordon, Florence S.

    2009-01-01

    The authors describe a collection of dynamic interactive simulations for teaching and learning most of the important ideas and techniques of introductory statistics and probability. The modules cover such topics as randomness, simulations of probability experiments such as coin flipping, dice rolling and general binomial experiments, a simulation…

  5. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    Science.gov (United States)

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
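
    A minimal sketch of the contrast between the naive group-level t-test on subject means and one simple precision-weighted (inverse-variance) combination of subject-level summary statistics. This is a generic fixed-effects style variant for illustration, not necessarily the exact sufficient-summary-statistic estimator reviewed in the paper, and the simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 20
true_effect = 0.3

# Simulate nested data: each subject contributes a different number of trials,
# so the subject-level means have unequal within-subject variances.
n_trials = rng.integers(10, 200, size=n_subjects)
subj_means, subj_vars = [], []
for n in n_trials:
    trials = rng.normal(true_effect, 2.0, size=n)
    subj_means.append(trials.mean())
    subj_vars.append(trials.var(ddof=1) / n)   # variance of the subject mean
subj_means = np.array(subj_means)
subj_vars = np.array(subj_vars)

# "Naive" group-level one-sample t-test on the subject means.
t_naive, p_naive = stats.ttest_1samp(subj_means, 0.0)

# Precision-weighted (inverse-variance) combination of the subject-level means.
w = 1.0 / subj_vars
effect = np.sum(w * subj_means) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
z = effect / se
p_weighted = 2 * stats.norm.sf(abs(z))

print(f"naive t-test:    t = {t_naive:.2f}, p = {p_naive:.4f}")
print(f"weighted z-test: z = {z:.2f}, p = {p_weighted:.4f}")
```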

  6. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists

    Directory of Open Access Journals (Sweden)

    Laura Badenes-Ribera

    2018-06-01

    Full Text Available Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women, mean age of 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequently mentioned ES statistics were Cohen's d and R2/η2, which can be affected by outliers, non-normality, or violations of statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plot and funnel plot) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not

  7. Introduction to probability and statistics for ecosystem managers simulation and resampling

    CERN Document Server

    Haas, Timothy C

    2013-01-01

    Explores computer-intensive probability and statistics for ecosystem management decision making. Simulation is an accessible way to explain probability and stochastic model behavior to beginners. This book introduces probability and statistics to future and practicing ecosystem managers by providing a comprehensive treatment of these two areas. The author presents a self-contained introduction for individuals involved in monitoring, assessing, and managing ecosystems and features intuitive, simulation-based explanations of probabilistic and statistical concepts. Mathematical programming details are provided for estimating ecosystem model parameters with Minimum Distance, a robust and computer-intensive method. The majority of examples illustrate how probability and statistics can be applied to ecosystem management challenges. There are over 50 exercises - making this book suitable for a lecture course in a natural resource and/or wildlife management department, or as the main text in a program of self-stud...

  8. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    Science.gov (United States)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and also introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of the students' basic statistical literacy and is generally accepted as a measure of statistical literacy. We also introduce a new teaching method in the elementary statistics class: in contrast to the traditional elementary statistics course, hypothesis testing is conducted with a simulation-based inference method. The literature has shown that this teaching method works very well in increasing students' understanding of statistics.
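
    A sketch of the kind of simulation-based inference referred to here: a randomization (permutation) test for a difference in group means, with purely illustrative made-up scores.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical exam scores for two teaching methods (illustrative numbers only).
group_a = np.array([62, 71, 68, 75, 80, 66, 73, 69, 77, 70], dtype=float)
group_b = np.array([58, 65, 63, 70, 61, 67, 60, 66, 64, 62], dtype=float)
observed = group_a.mean() - group_b.mean()

# Randomization test: repeatedly shuffle the group labels and recompute the
# difference in means under the null hypothesis of "no difference".
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
n_resamples = 10_000
diffs = np.empty(n_resamples)
for i in range(n_resamples):
    perm = rng.permutation(pooled)
    diffs[i] = perm[:n_a].mean() - perm[n_a:].mean()

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, permutation p-value = {p_value:.4f}")
```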

  9. Monte Carlo simulation in statistical physics an introduction

    CERN Document Server

    Binder, Kurt

    1992-01-01

    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.
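
    As a concrete example of the kind of importance-sampling simulation the book covers, here is a minimal Metropolis single-spin-flip Monte Carlo sketch for the 2-D Ising model (small lattice and short run, purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(4)
L, T, n_sweeps = 16, 2.5, 1000   # lattice size, temperature (in units of J/k_B), MC sweeps
spins = rng.choice([-1, 1], size=(L, L))

def local_field(s, i, j):
    # Sum of the four nearest neighbours with periodic boundary conditions.
    return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

mags = []
for sweep in range(n_sweeps):
    for _ in range(L * L):                       # one sweep = L*L attempted flips
        i, j = rng.integers(0, L, size=2)
        dE = 2.0 * spins[i, j] * local_field(spins, i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1                    # Metropolis acceptance
    if sweep >= n_sweeps // 2:                   # discard the first half as equilibration
        mags.append(abs(spins.mean()))

print(f"<|m|> at T = {T}: {np.mean(mags):.3f}")
```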

  10. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Science.gov (United States)

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  11. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    Science.gov (United States)

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  12. INCREASE OF QUEUING SYSTEM EFFECTIVENESS OF TRADING ENTERPRISE BY MEANS OF NUMERICAL STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Knyazheva Yu. V.

    2014-06-01

    Full Text Available A market economy makes the development of economic analysis necessary first of all at the micro level, that is, at the level of individual enterprises, since enterprises are the basis of a market economy. Improvement of the queuing system of a trading enterprise is therefore an important economic problem. Analytical solutions of the queuing problems described in the theory do not correspond to the real operating conditions of queuing systems. In this article, optimization of the customer service process and improvement of the settlement and cash service system of a trading enterprise are therefore carried out by means of numerical statistical simulation of the enterprise's queuing system. The article describes an integrated statistical numerical simulation model of the queuing systems of a trading enterprise working in non-stationary conditions with reference to different distribution laws of the incoming customer stream. This model takes into account various behaviors of the outgoing customer stream; it includes a checkout service model which takes the cashier's rate of work into account, as well as a staff motivation model and profit-earning and profit-optimization models that take possible revenue and costs into account. The created statistical numerical simulation model of the queuing systems of a trading enterprise, when implemented in a suitable software environment, allows optimization of the most important parameters of the system. With a convenient user interface, this model can become a component of a decision-support system for rationalizing the organizational structure and optimizing the management of a trading enterprise.
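
    The core queueing ingredient of such a model can be sketched very compactly. The single-server FIFO simulation below (Poisson arrivals, exponential service, Lindley recursion) is only a toy stand-in for the paper's integrated, non-stationary multi-checkout model, and the arrival and service rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n_customers = 100_000
arrival_rate = 0.9      # customers per minute (hypothetical)
service_rate = 1.0      # customers per minute for one cashier (hypothetical)

interarrival = rng.exponential(1.0 / arrival_rate, n_customers)
service = rng.exponential(1.0 / service_rate, n_customers)

# Lindley recursion: waiting time of customer k in a single-server FIFO queue.
wait = np.empty(n_customers)
wait[0] = 0.0
for k in range(1, n_customers):
    wait[k] = max(0.0, wait[k - 1] + service[k - 1] - interarrival[k])

rho = arrival_rate / service_rate
print(f"simulated mean wait: {wait.mean():.2f} min")
print(f"M/M/1 theory Wq:     {rho / (service_rate - arrival_rate):.2f} min")
```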

  13. A Simulational approach to teaching statistical mechanics and kinetic theory

    International Nuclear Information System (INIS)

    Karabulut, H.

    2005-01-01

    A computer simulation demonstrating how the Maxwell-Boltzmann distribution is reached in gases from a nonequilibrium distribution is presented. The algorithm can be generalized to gas particles (atoms or molecules) with internal degrees of freedom, such as electronic excitations and vibrational-rotational energy levels. Another generalization of the algorithm is the case of a mixture of two different gases. By choosing the collision cross sections properly, one can create quasi-equilibrium distributions. For example, by making the same-atom cross sections large and the different-atom cross sections very small, one can create a mixture of two gases with different temperatures in which the two gases interact slowly and come to equilibrium only after a long time. Similarly, for the case of one kind of atom with internal degrees of freedom, one can create situations in which the internal degrees of freedom come to equilibrium much later than the translational degrees of freedom. In all these cases the equilibrium distribution that the algorithm gives is the same as expected from statistical mechanics. The algorithm can also be extended to cover the case of chemical equilibrium where species A and B react to form AB molecules. The laws of chemical equilibrium can be observed from this simulation. The chemical equilibrium simulation can also help to teach the elusive concept of chemical potential.
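
    A much-simplified toy version of such a collisional relaxation simulation (not the article's algorithm, which handles cross sections and internal degrees of freedom): random pairwise collisions that redistribute each pair's energy drive a monoenergetic initial state toward the Boltzmann (exponential) energy distribution, the two-dimensional analogue of the Maxwell-Boltzmann result.

```python
import numpy as np

rng = np.random.default_rng(6)
n_particles, n_collisions = 10_000, 200_000

# Start far from equilibrium: every particle has exactly the same energy.
energy = np.ones(n_particles)

# Random pairwise "collisions": the pair's total energy is redistributed with a
# uniformly random split (a crude stand-in for 2-D elastic collisions).
for _ in range(n_collisions):
    i, j = rng.integers(0, n_particles, size=2)
    if i == j:
        continue
    total = energy[i] + energy[j]
    r = rng.random()
    energy[i], energy[j] = r * total, (1.0 - r) * total

# Compare the resulting energy histogram with the Boltzmann density
# exp(-E/kT)/kT, where kT equals the (conserved) mean energy.
kT = energy.mean()
hist, edges = np.histogram(energy, bins=30, range=(0, 6 * kT), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
boltzmann = np.exp(-centers / kT) / kT
print("max |simulated - Boltzmann| density:", np.abs(hist - boltzmann).max())
```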

  14. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    Science.gov (United States)

    Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Christian; et al.

    2015-01-01

    Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex

  15. Characteristics of level-spacing statistics in chaotic graphene billiards.

    Science.gov (United States)

    Huang, Liang; Lai, Ying-Cheng; Grebogi, Celso

    2011-03-01

    A fundamental result in nonrelativistic quantum nonlinear dynamics is that the spectral statistics of quantum systems that possess no geometric symmetry, but whose classical dynamics are chaotic, are described by those of the Gaussian orthogonal ensemble (GOE) or the Gaussian unitary ensemble (GUE), in the presence or absence of time-reversal symmetry, respectively. For massless spin-half particles such as neutrinos in relativistic quantum mechanics in a chaotic billiard, the seminal work of Berry and Mondragon established the GUE nature of the level-spacing statistics, due to the combination of the chirality of Dirac particles and the confinement, which breaks the time-reversal symmetry. A question is whether the GOE or the GUE statistics can be observed in experimentally accessible, relativistic quantum systems. We demonstrate, using graphene confinements in which the quasiparticle motions are governed by the Dirac equation in the low-energy regime, that the level-spacing statistics are persistently those of GOE random matrices. We present extensive numerical evidence obtained from the tight-binding approach and a physical explanation for the GOE statistics. We also find that the presence of a weak magnetic field switches the statistics to those of GUE. For a strong magnetic field, Landau levels become influential, causing the level-spacing distribution to deviate markedly from the random-matrix predictions. Issues addressed also include the effects of a number of realistic factors on level-spacing statistics such as next nearest-neighbor interactions, different lattice orientations, enhanced hopping energy for atoms on the boundary, and staggered potential due to graphene-substrate interactions.
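
    The nearest-neighbor level-spacing analysis at the heart of such studies can be sketched generically, here for eigenvalues of random GOE matrices rather than for a graphene tight-binding Hamiltonian, and with only a crude spectral unfolding: the empirical spacing histogram is compared against the GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4).

```python
import numpy as np

rng = np.random.default_rng(7)
N, n_matrices = 400, 50

spacings = []
for _ in range(n_matrices):
    # One GOE member: a symmetrized real Gaussian matrix.
    a = rng.normal(size=(N, N))
    h = (a + a.T) / 2.0
    e = np.linalg.eigvalsh(h)
    # Crude unfolding: keep the central half of the spectrum and
    # normalize the spacings by their mean.
    s = np.diff(e[N // 4: 3 * N // 4])
    spacings.append(s / s.mean())
s = np.concatenate(spacings)

# Compare with the GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4).
hist, edges = np.histogram(s, bins=25, range=(0, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
wigner = (np.pi / 2) * centers * np.exp(-np.pi * centers**2 / 4)
print("max |empirical - Wigner GOE| density:", np.abs(hist - wigner).max())
```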

  16. Statistical analysis of global surface air temperature and sea level using cointegration methods

    DEFF Research Database (Denmark)

    Schmith, Torben; Johansen, Søren; Thejll, Peter

    Global sea levels are rising, which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Because physically-based models are unable to simulate observed sea level trends, semi-empirical models have been applied as an alternative for projecting future sea levels. There are in this, however, potential pitfalls due to the trending nature of the time series. We apply a statistical method called cointegration analysis to observed global sea level and surface air temperature, capable of handling such peculiarities. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s is exceptional in the sense that sea level and warming deviate from the expected...

  17. Simulation Experiments in Practice: Statistical Design and Regression Analysis

    OpenAIRE

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...

  18. Information Geometry, Inference Methods and Chaotic Energy Levels Statistics

    OpenAIRE

    Cafaro, Carlo

    2008-01-01

    In this Letter, we propose a novel information-geometric characterization of chaotic (integrable) energy level statistics of a quantum antiferromagnetic Ising spin chain in a tilted (transverse) external magnetic field. Finally, we conjecture our results might find some potential physical applications in quantum energy level statistics.

  19. Shnirelman peak in the level spacing statistics

    International Nuclear Information System (INIS)

    Chirikov, B.V.; Shepelyanskij, D.L.

    1994-01-01

    The first results on the statistical properties of quantum quasidegeneracy are presented. A physical interpretation of the Shnirelman theorem, which predicts bulk quasidegeneracy, is given. The conditions for the strong impact of the degeneracy on the quantum level statistics are formulated, which allows the application of the Shnirelman theorem to be extended to a broad class of quantum systems. 14 refs., 3 figs

  20. Improving Statistics Education through Simulations: The Case of the Sampling Distribution.

    Science.gov (United States)

    Earley, Mark A.

    This paper presents a summary of action research investigating statistics students' understandings of the sampling distribution of the mean. With four sections of an introductory Statistics in Education course (n=98 students), a computer simulation activity (R. delMas, J. Garfield, and B. Chance, 1999) was implemented and evaluated to show…
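
    A minimal sketch of the kind of sampling-distribution simulation such an activity is built around (a synthetic skewed population; not the delMas, Garfield, and Chance software): drawing many samples of increasing size and watching the simulated standard error approach sigma/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(8)
population = rng.exponential(scale=2.0, size=1_000_000)  # a skewed "population"

for n in (2, 10, 50):
    # Draw many samples of size n and record each sample mean.
    means = population[rng.integers(0, population.size, size=(20_000, n))].mean(axis=1)
    print(f"n = {n:3d}: mean of sample means = {means.mean():.3f}, "
          f"SE (simulated) = {means.std(ddof=1):.3f}, "
          f"SE (theory sigma/sqrt(n)) = {population.std() / np.sqrt(n):.3f}")
```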

  1. Statistical analysis of simulation calculation of sputtering for two interaction potentials

    International Nuclear Information System (INIS)

    Shao Qiyun

    1992-01-01

    The effects of two interaction potentials (the Moliere potential and the Universal potential) on computer simulation results for sputtering are presented, the results being obtained via Monte Carlo simulation based on the binary collision approximation. By means of the Wilcoxon two-sample paired signed-rank test, a statistically significant difference between the above results is established.
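
    A sketch of how such a paired comparison could be run with SciPy; the paired sputtering yields below are purely hypothetical illustrative numbers, not the paper's simulation results.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired sputtering yields (atoms/ion) for the same set of
# simulated conditions, run once with the Moliere potential and once with
# the Universal potential.  Numbers are illustrative only.
yield_moliere   = np.array([2.10, 1.85, 2.40, 1.95, 2.75, 2.20, 1.60, 2.05, 2.30, 1.90])
yield_universal = np.array([2.00, 1.80, 2.28, 1.88, 2.60, 2.15, 1.58, 1.97, 2.21, 1.86])

# Wilcoxon paired-sample signed-rank test of the null hypothesis that the
# median of the paired differences is zero.
stat, p_value = wilcoxon(yield_moliere, yield_universal)
print(f"W = {stat:.1f}, p-value = {p_value:.4f}")
```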

  2. Atmospheric forcing of decadal Baltic Sea level variability in the last 200 years. A statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Huenicke, B. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung

    2008-11-06

    This study aims at the estimation of the impact of different atmospheric factors on past sea-level variations (up to 200 years) in the Baltic Sea by statistically analysing the relationship between Baltic Sea level records and observational and proxy-based reconstructed climatic data sets. The focus lies on the identification and possible quantification of the contribution of sea-level pressure (wind), air temperature and precipitation to the low-frequency (decadal and multi-decadal) variability of Baltic Sea level. It is known that wind forcing is the main factor explaining average Baltic Sea level variability at inter-annual to decadal timescales, especially in wintertime. In this thesis it is statistically estimated to what extent other regional climate factors contribute to the spatially heterogeneous Baltic Sea level variations around the isostatic trend at multi-decadal timescales. Although the statistical analysis cannot be completely conclusive, as the potential climate drivers are all statistically interrelated to some degree, the results indicate that precipitation should be taken into account as an explanatory variable for sea-level variations. On the one hand, it has been detected that the amplitude of the annual cycle of Baltic Sea level has increased throughout the 20th century, and precipitation seems to be the only factor among those analysed (wind through the SLP field, barometric effect, temperature and precipitation) that can account for this evolution. On the other hand, precipitation increases the ability to hindcast inter-annual variations of sea level in some regions and seasons, especially in the southern Baltic in summertime. The mechanism by which precipitation exerts its influence on Baltic Sea level is not ascertained in this statistical analysis due to the lack of long salinity time series. This result, however, represents a working hypothesis that can be confirmed or disproved by long simulations of the Baltic Sea system - ocean

  3. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic

  4. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)

    2005-09-26

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained expressions for the mean square displacement and the effective exponent as functions of time and percolation probability by statistical analysis, and made a comparison with simulations. We obtained a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.

  5. Atomic-level computer simulation

    International Nuclear Information System (INIS)

    Adams, J.B.; Rockett, Angus; Kieffer, John; Xu Wei; Nomura, Miki; Kilian, K.A.; Richards, D.F.; Ramprasad, R.

    1994-01-01

    This paper provides a broad overview of the methods of atomic-level computer simulation. It discusses methods of modelling atomic bonding, and computer simulation methods such as energy minimization, molecular dynamics, Monte Carlo, and lattice Monte Carlo. ((orig.))

  6. Monte Carlo Simulation for Statistical Decay of Compound Nucleus

    Directory of Open Access Journals (Sweden)

    Chadwick M.B.

    2012-02-01

    Full Text Available We perform Monte Carlo simulations for neutron and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between emitted particles and γ-rays, and it will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code, CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models that are used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and angular momentum and parity are conserved. Some calculations for the fission fragment 140Xe are shown as examples of the MCHF method, and the correlation between the neutrons and γ-rays is discussed.

  7. Universality of correlations of levels with discrete statistics

    OpenAIRE

    Brezin, Edouard; Kazakov, Vladimir

    1999-01-01

    We study the statistics of a system of N random levels with integer values, in the presence of a logarithmic repulsive potential of Dyson type. This problem arises in sums over representations (Young tableaux) of GL(N) in various matrix problems and in the study of statistics of partitions for the permutation group. The model is generalized to include an external source and its correlators are found in closed form for any N. We reproduce the density of levels in the large N and double scalin...

  8. Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study

    Science.gov (United States)

    2014-01-01

    Background: Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods: 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results: Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA at certain imbalances is achieved with respect to a biased treatment effect. Conclusions: Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
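
    A minimal sketch of this kind of simulation, for the balanced-randomization case only (the study also varies the direction and magnitude of baseline imbalance): repeatedly simulate a two-arm trial with correlated pre/post scores and compare the bias and spread of the treatment-effect estimates from a post-test-only analysis, a change-score analysis, and ANCOVA. Sample sizes, correlation, and effect size are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
n_per_arm, n_sim = 50, 2000
rho, effect = 0.6, 0.5          # pretest-posttest correlation, true treatment effect

estimates = {"ANOVA": [], "CSA": [], "ANCOVA": []}
for _ in range(n_sim):
    # Correlated pre/post scores; treatment adds `effect` to post-test scores.
    pre = rng.normal(0, 1, 2 * n_per_arm)
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, 2 * n_per_arm)
    group = np.repeat([0, 1], n_per_arm)          # 0 = control, 1 = treatment
    post = post + effect * group

    # ANOVA on post-test only: difference in post-test means.
    estimates["ANOVA"].append(post[group == 1].mean() - post[group == 0].mean())
    # Change-score analysis: difference in mean (post - pre).
    change = post - pre
    estimates["CSA"].append(change[group == 1].mean() - change[group == 0].mean())
    # ANCOVA: OLS of post on group and pre; the group coefficient is the estimate.
    X = np.column_stack([np.ones_like(pre), group, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    estimates["ANCOVA"].append(beta[1])

for name, est in estimates.items():
    est = np.asarray(est)
    print(f"{name:6s}: bias = {est.mean() - effect:+.3f}, SD = {est.std(ddof=1):.3f}")
```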

  9. A statistical-dynamical modeling approach for the simulation of local paleo proxy records using GCM output

    Energy Technology Data Exchange (ETDEWEB)

    Reichert, B.K.; Bengtsson, L. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany); Aakesson, O. [Sveriges Meteorologiska och Hydrologiska Inst., Norrkoeping (Sweden)

    1998-08-01

    Recent proxy data obtained from ice core measurements, dendrochronology and valley glaciers provide important information on the evolution of the regional or local climate. General circulation models integrated over a long period of time could help to understand the (external and internal) forcing mechanisms of natural climate variability. For a systematic interpretation of in situ paleo proxy records, a combined method of dynamical and statistical modeling is proposed. Local 'paleo records' can be simulated from GCM output by first undertaking a model-consistent statistical downscaling and then using a process-based forward modeling approach to obtain the behavior of valley glaciers and the growth of trees under specific conditions. The simulated records can be compared to actual proxy records in order to investigate whether e.g. the response of glaciers to climatic change can be reproduced by models and to what extent climate variability obtained from proxy records (with the main focus on the last millennium) can be represented. For statistical downscaling to local weather conditions, a multiple linear forward regression model is used. Daily sets of observed weather station data and various large-scale predictors at 7 pressure levels obtained from ECMWF reanalyses are used for development of the model. Daily data give the closest and most robust relationships due to the strong dependence on individual synoptic-scale patterns. For some local variables, the performance of the model can be further increased by developing seasonal specific statistical relationships. The model is validated using both independent and restricted predictor data sets. The model is applied to a long integration of a mixed layer GCM experiment simulating pre-industrial climate variability. The dynamical-statistical local GCM output within a region around Nigardsbreen glacier, Norway is compared to nearby observed station data for the period 1868-1993. Patterns of observed

  10. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is

  11. Quasiparticle features and level statistics of odd-odd nucleus

    International Nuclear Information System (INIS)

    Cheng Nanpu; Zheng Renrong; Zhu Shunquan

    2001-01-01

    The energy levels of the odd-odd nucleus 84Y are calculated by using the axially symmetric rotor plus quasiparticles model. Two standard statistical tests of Random-Matrix Theory, the distribution function p(s) of the nearest-neighbor level spacings (NNS) and the spectral rigidity Δ3, are used to explore the statistical properties of the energy levels. By analyzing the properties of p(s) and Δ3 under various conditions, the authors find that the quasiparticle features mainly affect the statistical properties of the odd-odd nucleus 84Y through the recoil term and the Coriolis force in this theoretical model, and that the chaotic degree of the energy levels decreases with decreasing Fermi energy and energy-gap parameters. The effect of the recoil term is small, while the Coriolis force plays a major role in the spectral structure of 84Y.

  12. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  13. Testing a statistical method of global mean paleotemperature estimations in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods the global mean temperature of the last 600 years has recently been estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows the errors of the estimations to be quantified as a function of the number of proxy data and of the time scale at which the estimations are probably reliable. (orig.)

  14. Teaching Statistical Principles with a Roulette Simulation

    Directory of Open Access Journals (Sweden)

    Graham D Barr

    2013-03-01

    Full Text Available This paper uses the game of roulette in a simulation setting to teach students in an introductory Stats course some basic issues in theoretical and empirical probability. Using an Excel spreadsheet with embedded VBA (Visual Basic for Applications), one can simulate the empirical return and empirical standard deviation for a range of bets in Roulette over some predetermined number of plays. In particular, the paper illustrates the difference between different playing strategies by contrasting a low-payout bet (say, a bet on “red”) and a high-payout bet (say, a bet on a particular number) and considering the expected return and volatility associated with each. The paper includes an Excel VBA based simulation of the Roulette wheel where students can make bets and monitor the return on the bets for one play or multiple plays. In addition it includes a simulation of the casino house advantage for repeated multiple plays; that is, it allows students to see how casinos may derive a near-certain return equal to the house advantage by entertaining large numbers of bets, which will systematically drive the volatility of the house advantage down to zero. This simulation has been shown to be especially effective at the University of Cape Town for teaching first-year Statistics students the subtler points of probability, as well as encouraging discussions around the risk-return trade-off facing gamblers. The program has also been shown to be useful for teaching students the principles of theoretical and empirical probabilities as well as an understanding of volatility.
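
    The same demonstration can be reproduced outside Excel. The short Python sketch below (European single-zero wheel assumed, illustrative bet choices) contrasts the simulated mean return and volatility of an even-money bet on red with a 35:1 straight-up bet on a single number.

```python
import numpy as np

rng = np.random.default_rng(10)
n_spins = 100_000

# European roulette: numbers 0-36, 18 of which are red.
spins = rng.integers(0, 37, size=n_spins)
red_numbers = {1, 3, 5, 7, 9, 12, 14, 16, 18, 19, 21, 23, 25, 27, 30, 32, 34, 36}
is_red = np.isin(spins, list(red_numbers))

# Bet 1 unit on red (pays 1:1) versus 1 unit on a single number (pays 35:1).
payoff_red = np.where(is_red, 1.0, -1.0)
payoff_straight = np.where(spins == 17, 35.0, -1.0)

for name, payoff in [("red", payoff_red), ("number 17", payoff_straight)]:
    print(f"bet on {name:9s}: mean return = {payoff.mean():+.4f}, "
          f"std dev = {payoff.std(ddof=1):.2f}  (theory mean = {-1/37:+.4f})")
```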

  15. Statistical simulations of machine errors for LINAC4

    CERN Document Server

    Baylac, M.; Froidefond, E.; Sargsyan, E.

    2006-01-01

    LINAC 4 is a normal conducting H- linac proposed at CERN to provide a higher proton flux to the CERN accelerator chain. It should replace the existing LINAC 2 as injector to the Proton Synchrotron Booster and can also operate in the future as the front end of the SPL, a 3.5 GeV Superconducting Proton Linac. LINAC 4 consists of a Radio-Frequency Quadrupole, a chopper line, a Drift Tube Linac (DTL) and a Cell Coupled DTL, all operating at 352 MHz, and finally a Side Coupled Linac at 704 MHz. Beam dynamics was studied and optimized by performing end-to-end simulations. This paper presents statistical simulations of machine errors which were performed in order to validate the proposed design.

  16. Introduction to statistical physics and to computer simulations

    CERN Document Server

    Casquilho, João Paulo

    2015-01-01

    Rigorous and comprehensive, this textbook introduces undergraduate students to simulation methods in statistical physics. The book covers a number of topics, including the thermodynamics of magnetic and electric systems; the quantum-mechanical basis of magnetism; ferrimagnetism, antiferromagnetism, spin waves and magnons; liquid crystals as a non-ideal system of technological relevance; and diffusion in an external potential. It also covers hot topics such as cosmic microwave background, magnetic cooling and Bose-Einstein condensation. The book provides an elementary introduction to simulation methods through algorithms in pseudocode for random walks, the 2D Ising model, and a model liquid crystal. Any formalism is kept simple and derivations are worked out in detail to ensure the material is accessible to students from subjects other than physics.

  17. Improved score statistics for meta-analysis in single-variant and gene-level association studies.

    Science.gov (United States)

    Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo

    2018-06-01

    Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.

  18. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from

  19. Simulation and Statistical Inference of Stochastic Reaction Networks with Applications to Epidemic Models

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference
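
    The simplest building block behind such work is the exact stochastic simulation algorithm (SSA, or Gillespie algorithm) for a Markovian epidemic model. The sketch below runs plain SSA trajectories of a stochastic SIR model with made-up rates and estimates an expectation by brute-force averaging, whereas the thesis develops far more efficient multilevel hybrid estimators for this kind of task.

```python
import numpy as np

rng = np.random.default_rng(11)

def gillespie_sir(S, I, R, beta, gamma, t_max):
    """Exact stochastic simulation (SSA) of one SIR epidemic trajectory."""
    t, N = 0.0, S + I + R
    while t < t_max and I > 0:
        rate_infect = beta * S * I / N      # reaction S + I -> 2I
        rate_recover = gamma * I            # reaction I -> R
        total = rate_infect + rate_recover
        t += rng.exponential(1.0 / total)   # waiting time to the next reaction
        if rng.random() < rate_infect / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
    return S, I, R

# Estimate the expected final epidemic size from independent trajectories.
finals = [gillespie_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, t_max=200.0)[2]
          for _ in range(200)]
print(f"mean final size: {np.mean(finals):.1f}, std: {np.std(finals, ddof=1):.1f}")
```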

  20. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high-resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data

  1. Technology for enhancing statistical reasoning at the school level

    NARCIS (Netherlands)

    Biehler, R.; Ben-Zvi, D.; Bakker, A.|info:eu-repo/dai/nl/272605778; Makar, K.

    2013-01-01

    The purpose of this chapter is to provide an updated overview of digital technologies relevant to statistics education, and to summarize what is currently known about how these new technologies can support the development of students’ statistical reasoning at the school level. A brief literature

  2. Simulating Durum Wheat (Triticum turgidum L.) Response to Root Zone Salinity based on Statistics and Macroscopic Models

    Directory of Open Access Journals (Sweden)

    Vahid Reza Jalali

    2017-10-01

    Full Text Available Introduction: Salinity, as an abiotic stress, can cause excessive disturbance to seed germination and sustainable plant production. Through three different mechanisms (reduction of osmotic potential, ionic toxicity, and disturbance of the plant's nutritional balance), salinity can reduce the final yield. Planning for the optimal use of available water, including saline water of poor quality, in agricultural activities is therefore of great importance. Wheat is one of the eight main food sources (together with rice, corn, sugar beet, cattle, sorghum, millet and cassava) which provide 70-90% of all calories and 66-90% of the protein consumed in developing countries. Durum wheat (Triticum turgidum L.) is an important crop grown in some arid and semi-arid areas of the world such as the Middle East and North Africa. In these regions, in addition to soil salinity, the sharp decline in rainfall and the sharp drop in groundwater levels in recent years have emphasized the need for efficient use of limited soil and water resources. Consequently, in order to use brackish water for agricultural production, the crop's quantitative response to salinity stress must be analyzed with simulation models in those regions. The objective of this study is to assess the capability of statistical and macroscopic simulation models of yield under saline conditions. Materials and methods: In this study, two general simulation approaches were investigated: process-physical models and statistical-experimental models. To quantify the effect of salinity on the relative seed yield of durum wheat (Behrang variety) at different levels of soil salinity, the process-physical models of Maas & Hoffman, van Genuchten & Hoffman, Dirksen et al. and Homaee et al. were used, together with the statistical-experimental models of the modified Gompertz function, the bi-exponential function and the modified Weibull function. In order to get closer to the real growth conditions in saline soils, a natural saline
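
    The simplest of the macroscopic response models mentioned, the Maas & Hoffman threshold-slope function, can be written in a few lines. The threshold and slope values below are commonly quoted generic values for durum wheat, used here only for illustration; they are not the parameters fitted for the Behrang variety in this study.

```python
import numpy as np

def maas_hoffman(ece, threshold, slope):
    """Relative yield Yr = 1 below the salinity threshold, then declining linearly:
    Yr = 1 - slope * (ECe - threshold), truncated to the range [0, 1]."""
    yr = 1.0 - slope * (np.asarray(ece, dtype=float) - threshold)
    return np.clip(yr, 0.0, 1.0)

# Illustrative parameters: threshold 5.9 dS/m, slope 0.038 (i.e., 3.8% yield loss
# per dS/m of soil saturation-extract salinity above the threshold).
ece = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 14.0])        # soil salinity, dS/m
print(np.round(maas_hoffman(ece, threshold=5.9, slope=0.038), 3))
```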

  3. Why is the Groundwater Level Rising? A Case Study Using HARTT to Simulate Groundwater Level Dynamic.

    Science.gov (United States)

    Yihdego, Yohannes; Danis, Cara; Paffard, Andrew

    2017-12-01

    Groundwater from a shallow unconfined aquifer at a site in coastal New South Wales has been causing recent waterlogging issues. A trend of rising groundwater level has been anecdotally observed over the last 10 years. It was not clear whether the changes in groundwater levels were solely natural variations within the groundwater system or whether human interference was driving the level up. Time series topographic images revealed significant surrounding land use changes and human modification to the environment of the groundwater catchment. A statistical model utilising HARTT (multiple linear regression hydrograph analysis method) simulated the groundwater level dynamics at five key monitoring locations and successfully showed a trend of rising groundwater level. Utilising hydrogeological input from field investigations, the model successfully simulated the rise in the water table over time to the present day levels, whilst taking into consideration rainfall and land changes. The underlying geological/land conditions were found to be just as significant as the impact of climate variation. The correlation coefficients for the monitoring bores (MB), excluding MB4, show that the groundwater level fluctuation can be explained by the climate variable (rainfall), with the lag time between the atypical rainfall and groundwater level ranging from 4 to 7 months. The low R2 value for MB4 indicates that there are factors missing in the model which are primarily related to human interference. The elevated groundwater levels in the affected area are the result of long term cumulative land use changes, instigated by humans, which have directly resulted in detrimental changes to the groundwater aquifer properties.

  4. A study of statistics anxiety levels of graduate dental hygiene students.

    Science.gov (United States)

    Welch, Paul S; Jacks, Mary E; Smiley, Lynn A; Walden, Carolyn E; Clark, William D; Nguyen, Carol A

    2015-02-01

    In light of increased emphasis on evidence-based practice in the profession of dental hygiene, it is important that today's dental hygienist comprehend statistical measures to fully understand research articles, and thereby apply scientific evidence to practice. Therefore, the purpose of this study was to investigate statistics anxiety among graduate dental hygiene students in the U.S. A web-based self-report, anonymous survey was emailed to directors of 17 MSDH programs in the U.S. with a request to distribute to graduate students. The survey collected data on statistics anxiety, sociodemographic characteristics and evidence-based practice. Statistics anxiety was assessed using the Statistical Anxiety Rating Scale. The study significance level was α=0.05. Only 8 of the 17 invited programs participated in the study. Statistical Anxiety Rating Scale data revealed graduate dental hygiene students experience low to moderate levels of statistics anxiety. Specifically, the level of anxiety on the Interpretation Anxiety factor indicated this population could struggle with making sense of scientific research. A decisive majority (92%) of students indicated statistics is essential for evidence-based practice and should be a required course for all dental hygienists. This study served to identify statistics anxiety in a previously unexplored population. The findings should be useful in both theory building and in practical applications. Furthermore, the results can be used to direct future research. Copyright © 2015 The American Dental Hygienists’ Association.

  5. Statistical cluster analysis and diagnosis of nuclear system level performance

    International Nuclear Information System (INIS)

    Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

    1985-01-01

    The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations makes it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data)

  6. Parametric Statistics of Individual Energy Levels in Random Hamiltonians

    OpenAIRE

    Smolyarenko, I. E.; Simons, B. D.

    2002-01-01

    We establish a general framework to explore parametric statistics of individual energy levels in disordered and chaotic quantum systems of unitary symmetry. The method is applied to the calculation of the universal intra-level parametric velocity correlation function and the distribution of level shifts under the influence of an arbitrary external perturbation.

  7. Optimal allocation of testing resources for statistical simulations

    Science.gov (United States)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  8. Simulation approaches to probabilistic structural design at the component level

    International Nuclear Information System (INIS)

    Stancampiano, P.A.

    1978-01-01

    In this paper, structural failure of large nuclear components is viewed as a random process with a low probability of occurrence. Therefore, a statistical interpretation of probability does not apply and statistical inferences cannot be made due to the sparsity of actual structural failure data. In such cases, analytical estimates of the failure probabilities may be obtained from stress-strength interference theory. Since the majority of real design applications are complex, numerical methods are required to obtain solutions. Monte Carlo simulation appears to be the best general numerical approach. However, meaningful applications of simulation methods suggest research activities in three categories: methods development, failure mode models development, and statistical data models development. (Auth.)
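
    A minimal stress-strength interference sketch of the kind of calculation referred to: the failure probability P(stress > strength) is estimated by crude Monte Carlo sampling from hypothetical stress and strength distributions (all parameter values are illustrative, not those of any real component).

```python
import numpy as np

rng = np.random.default_rng(12)
n_samples = 2_000_000

# Hypothetical distributions: applied stress and material strength (MPa).
stress = rng.normal(loc=300.0, scale=30.0, size=n_samples)
strength = rng.lognormal(mean=np.log(480.0), sigma=0.08, size=n_samples)

# Failure occurs when stress exceeds strength; estimate P_f by Monte Carlo.
failures = np.count_nonzero(stress > strength)
p_f = failures / n_samples
se = np.sqrt(p_f * (1.0 - p_f) / n_samples)
print(f"estimated failure probability: {p_f:.2e} +/- {se:.1e}")
```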

  9. Algorithm for statistical noise reduction in three-dimensional ion implant simulations

    International Nuclear Information System (INIS)

    Hernandez-Mangas, J.M.; Arias, J.; Jaraiz, M.; Bailon, L.; Barbolla, J.

    2001-01-01

    As integrated circuit devices scale into the deep sub-micron regime, ion implantation will continue to be the primary means of introducing dopant atoms into silicon. Different types of impurity profiles such as ultra-shallow profiles and retrograde profiles are necessary for deep submicron devices in order to realize the desired device performance. A new algorithm to reduce the statistical noise in three-dimensional ion implant simulations both in the lateral and shallow/deep regions of the profile is presented. The computational effort in BCA Monte Carlo ion implant simulation is also reduced

  10. PNS and statistical experiments simulation in subcritical systems using Monte-Carlo method on example of Yalina-Thermal assembly

    International Nuclear Information System (INIS)

    Sadovich, S.; Burnos, V.; Kiyavitskaya, H.; Fokov, Y.; Talamo, A.

    2013-01-01

    In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and on statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure, and several improvements to the neutronic calculations have been made for simulations in Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. Afterwards, the PNS and statistical methods can be simulated with a special post-processing code. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above have been tested on the subcritical assembly Yalina-Thermal, located at the Joint Institute for Power and Nuclear Research SOSNY in Minsk (Belarus). A good agreement between experiment and simulation was shown. (authors)
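
    The post-processing step can be illustrated with a small sketch that skips the MCNP/PTRAC parsing entirely and starts from an assumed array of detector event times relative to each source pulse; the decay constant, event counts, and background level below are synthetic, not data from Yalina-Thermal.

```python
# Post-processing sketch: build the pulsed-neutron-source decay histogram from event
# times and fit the prompt decay constant alpha. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic event times (seconds after the pulse): exponential prompt decay plus a
# flat delayed-neutron background, mimicking a PNS measurement.
alpha_true, n_events = 250.0, 50000
t = np.concatenate([rng.exponential(1.0 / alpha_true, n_events),
                    rng.uniform(0.0, 0.04, n_events // 20)])

# Histogram the events and fit log(counts) vs. time on the prompt-dominated part.
edges = np.linspace(0.0, 0.02, 101)
counts, _ = np.histogram(t, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = (centers > 0.001) & (centers < 0.01) & (counts > 0)
slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print(f"fitted prompt decay constant alpha ~ {-slope:.1f} 1/s (true value {alpha_true})")
```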

  11. Usage of link-level performance indicators for HSDPA network-level simulations in E-UMTS

    NARCIS (Netherlands)

    Brouwer, Frank; de Bruin, I.C.C.; Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Correia, Américo

    2004-01-01

    The paper describes integration of HSDPA (high-speed downlink packet access) link-level simulation results into network-level simulations for enhanced UMTS. The link-level simulations model all physical layer features depicted in the 3GPP standards. These include: generation of transport blocks;

  12. Comparing Student Success and Understanding in Introductory Statistics under Consensus and Simulation-Based Curricula

    Science.gov (United States)

    Hldreth, Laura A.; Robison-Cox, Jim; Schmidt, Jade

    2018-01-01

    This study examines the transferability of results from previous studies of simulation-based curriculum in introductory statistics using data from 3,500 students enrolled in an introductory statistics course at Montana State University from fall 2013 through spring 2016. During this time, four different curricula, a traditional curriculum and…

  13. A New Approach to Monte Carlo Simulations in Statistical Physics

    Science.gov (United States)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
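
    The random walk in energy space referred to here is the Wang-Landau scheme of reference [2]. The following is a deliberately small sketch for a 2D Ising model with assumed lattice size, flatness criterion, and a loose stopping threshold; production runs reduce the modification factor much further.

```python
# Wang-Landau sketch: a random walk in energy space builds the density of states g(E)
# directly, from which thermodynamic averages can later be computed at any temperature.
import numpy as np

rng = np.random.default_rng(3)
L = 8
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries; each bond counted once.
    return -int(np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

def e_index(E):
    # Map E in {-2N, -2N+4, ..., 2N} onto an array index.
    return (E + 2 * N) // 4

log_g = np.zeros(N + 1)          # log of the density of states, up to a constant
hist = np.zeros(N + 1, dtype=int)
ln_f = 1.0                       # modification factor (reduced to ~1e-8 in production runs)
E = total_energy(spins)

while ln_f > 1e-3:               # loose stopping criterion to keep the sketch short
    for _ in range(20000):
        i, j = rng.integers(L), rng.integers(L)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn
        # Accept the flip with probability min(1, g(E_old) / g(E_new)).
        delta = log_g[e_index(E)] - log_g[e_index(E + dE)]
        if delta >= 0 or rng.random() < np.exp(delta):
            spins[i, j] *= -1
            E += dE
        log_g[e_index(E)] += ln_f
        hist[e_index(E)] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():   # "flat enough" histogram
        hist[:] = 0
        ln_f *= 0.5              # refine the modification factor and continue

print("log g(E) relative to its maximum, at the ground state E = -2N:",
      log_g[e_index(-2 * N)] - log_g.max())
```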

  14. Statistical analysis for discrimination of prompt gamma ray peak induced by high energy neutron: Monte Carlo simulation study

    International Nuclear Information System (INIS)

    Do-Kun Yoon; Joo-Young Jung; Tae Suk Suh; Seong-Min Han

    2015-01-01

    The purpose of this research is a statistical analysis for the discrimination of prompt gamma-ray peaks induced by 14.1 MeV neutrons in spectra obtained from Monte Carlo simulation. For the simulation, the information for 18 detector materials was used to simulate spectra from the neutron capture reaction. The discrimination of nine prompt gamma-ray peaks from the simulation of each detector material was performed. We present several comparison indexes of energy-resolution performance depending on the detector material, using the simulation and statistics, for prompt gamma activation analysis. (author)

  15. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables through a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated MPS simulation using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
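
    The compression idea can be conveyed with a much simpler stand-in than the paper's tree-structured VQ: the sketch below extracts template patterns from a synthetic binary training image and replaces the pattern database with a flat k-means codebook. The training image, template size, and codebook size are all assumptions for illustration only.

```python
# Simplified vector-quantization sketch: compress an MPS-style pattern database into
# a small codebook so that data events are matched against codewords, not all patterns.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(4)

# Hypothetical binary training image (e.g. channel / no-channel facies).
ti = (rng.random((100, 100)) < 0.3).astype(float)

# Extract 5x5 patterns (vectorized categorical variables, as in the first step of VQ-MPS).
w = 5
patterns = np.array([ti[i:i + w, j:j + w].ravel()
                     for i in range(ti.shape[0] - w)
                     for j in range(ti.shape[1] - w)])

# Compress: replace the full database by 64 codewords plus one index per pattern.
codebook, labels = kmeans2(patterns, k=64, minit='++')
print("database size:", patterns.shape, "-> codebook size:", codebook.shape)

# During simulation, a conditioning data event is matched against the small codebook.
query = patterns[rng.integers(len(patterns))]
best = np.argmin(((codebook - query) ** 2).sum(axis=1))
print("nearest codeword index for an example data event:", best)
```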

  16. Simulation of decreasing reactor power level with BWR simulator

    International Nuclear Information System (INIS)

    Suwoto; Zuhair; Rivai, Abu Khalid

    2002-01-01

    The characteristics of a BWR were studied using a desktop PC-based simulator program. This simulator is more efficient and cheaper than a full-scope simulator for analysing the characteristics and dynamic response of a BWR during power-level reduction. The dynamic response of the BWR was investigated during the power-level reduction from 100% FP (Full Power), which is 3926 MWth, to 0% FP in 25% steps at a rate of 1% FP/s. The overall results for core flow rate, reactor steam flow, feed-water flow and turbine-generator power show a tendency proportional to the reduction of reactor power. These results show that reactor power control in a BWR can be achieved by controlling the recirculation flow, which alters the density of the water used as coolant and moderator. Decreasing the recirculation flow rate reduces the water density through increased void formation, which introduces negative reactivity, and it also affects the position of the control rods.

  17. Statistical selection of tide gauges for Arctic sea-level reconstruction

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2015-01-01

    In this paper, we seek an appropriate selection of tide gauges for Arctic Ocean sea-level reconstruction based on a combination of empirical criteria and statistical properties (leverages). Tide gauges provide the only in situ observations of sea level prior to the altimetry era. However, tide...... the "influence" of each Arctic tide gauge on the EOF-based reconstruction through the use of statistical leverage and use this as an indication in selecting appropriate tide gauges, in order to procedurally identify poor-quality data while still including as much data as possible. To accommodate sparse...

  18. Low-Level Radioactive Waste siting simulation information package

    International Nuclear Information System (INIS)

    1985-12-01

    The Department of Energy's National Low-Level Radioactive Waste Management Program has developed a simulation exercise designed to facilitate the process of siting and licensing disposal facilities for low-level radioactive waste. The siting simulation can be conducted at a workshop or conference, can involve 14-70 participants (or more), and requires approximately eight hours to complete. The exercise is available for use by states, regional compacts, or other organizations for use as part of the planning process for low-level waste disposal facilities. This information package describes the development, content, and use of the Low-Level Radioactive Waste Siting Simulation. Information is provided on how to organize a workshop for conducting the simulation. 1 ref., 1 fig

  19. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance characteristics of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.

  20. Random matrix theory of the energy-level statistics of disordered systems at the Anderson transition

    Energy Technology Data Exchange (ETDEWEB)

    Canali, C M

    1995-09-01

    We consider a family of random matrix ensembles (RME) invariant under similarity transformations and described by the probability density P(H) ∝ exp[-Tr V(H)]. Dyson's mean field theory (MFT) of the corresponding plasma model of eigenvalues is generalized to the case of a weak confining potential, V(ε) ≈ (A/2) ln²(ε). The eigenvalue statistics derived from MFT are shown to deviate substantially from the classical Wigner-Dyson statistics when A < 1. By performing systematic Monte Carlo simulations on the plasma model, we compute all the relevant statistical properties of the RME with weak confinement. For A_c ≈ 0.4 the distribution function of the level spacings (LSDF) coincides, in a large energy window, with the energy LSDF of the three-dimensional Anderson model at the metal-insulator transition. For the same A = A_c, the RME eigenvalue-number variance is linear and its slope is equal to 0.32 ± 0.02, which is consistent with the value found for the Anderson model at the critical point. (author). 51 refs, 10 figs.
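
    For orientation, the classical Wigner-Dyson baseline that the weakly confined ensemble deviates from can be reproduced numerically in a few lines. The sketch below samples GUE matrices and compares the nearest-neighbour spacing distribution with the Wigner surmise; it is a generic illustration with a crude unfolding, not a simulation of the weak-confinement plasma model of the paper.

```python
# Level-spacing statistics sketch: GUE eigenvalue spacings versus the Wigner surmise.
import numpy as np

rng = np.random.default_rng(6)
n, spacings = 400, []

for _ in range(50):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2                      # GUE matrix
    e = np.sort(np.linalg.eigvalsh(h))
    bulk = e[n // 4: 3 * n // 4]                  # keep the bulk of the spectrum
    s = np.diff(bulk)
    spacings.append(s / s.mean())                 # crude local unfolding
spacings = np.concatenate(spacings)

# Wigner surmise for the unitary class: P(s) = (32 / pi^2) s^2 exp(-4 s^2 / pi).
hist, edges = np.histogram(spacings, bins=30, range=(0, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
surmise = 32 / np.pi**2 * centers**2 * np.exp(-4 * centers**2 / np.pi)
print("max deviation from the GUE Wigner surmise:", np.abs(hist - surmise).max())
```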

  1. Statistical analysis of the acceleration of Baltic mean sea-level rise, 1900-2012

    Directory of Open Access Journals (Sweden)

    Birgit Hünicke

    2016-07-01

    Full Text Available We analyse annual mean sea-level records from tide gauges located in the Baltic Sea and parts of the North Sea with the aim of detecting an acceleration of sea-level rise over the 20th and 21st centuries. The acceleration is estimated as (1) a fit to a polynomial of order two in time, (2) a long-term linear increase in the rates computed over gliding, overlapping decadal time segments, and (3) a long-term increase of the annual increments of sea level. Estimation methods (1) and (2) prove to be more powerful in detecting acceleration when tested with sea-level records produced in global climate model simulations. Applied to the Baltic Sea tide gauges, however, these methods are not powerful enough to detect a significant acceleration in most individual records, although most estimated accelerations are positive. This lack of detection of statistically significant acceleration at the individual tide-gauge level can be due to the high level of local noise and not necessarily to the absence of acceleration. The estimated accelerations tend to be stronger in the north and east of the Baltic Sea. Two hypotheses to explain this spatial pattern have been explored. One is that the pattern reflects the slow-down of the Glacial Isostatic Adjustment; however, a simple estimation of this effect suggests that this slow-down cannot explain the estimated acceleration. The second hypothesis is related to the diminishing sea-ice cover over the 20th century. The melting of less saline and colder sea ice can lead to changes in sea level. Also, the melting of sea ice can reduce the number of missing values in the tide-gauge records in winter, potentially influencing the estimated trends and acceleration of seasonal mean sea level. This hypothesis cannot be ascertained either, since the spatial patterns of acceleration computed for winter and summer separately are very similar. The all-station-average record displays an

  2. Microarray data and gene expression statistics for Saccharomyces cerevisiae exposed to simulated asbestos mine drainage

    Directory of Open Access Journals (Sweden)

    Heather E. Driscoll

    2017-08-01

    Full Text Available Here we describe microarray expression data (raw and normalized), experimental metadata, and gene-level data with expression statistics from Saccharomyces cerevisiae exposed to simulated asbestos mine drainage from the Vermont Asbestos Group (VAG) Mine on Belvidere Mountain in northern Vermont, USA. For nearly 100 years (between the late 1890s and 1993), chrysotile asbestos fibers were extracted from serpentinized ultramafic rock at the VAG Mine for use in the construction and manufacturing industries. Studies have shown that water courses and streambeds nearby have become contaminated with asbestos mine tailings runoff, including elevated levels of magnesium, nickel, chromium, and arsenic, elevated pH, and chrysotile asbestos-laden mine tailings, due to leaching and gradual erosion of massive piles of mine waste covering approximately 9 km². We exposed yeast to simulated VAG Mine tailings leachate to help gain insight into how eukaryotic cells exposed to VAG Mine drainage may respond in the mine environment. Affymetrix GeneChip® Yeast Genome 2.0 Arrays were utilized to assess gene expression after 24-h exposure to simulated VAG Mine tailings runoff. The chemistry of the mine-tailings leachate, the mine-tailings leachate plus yeast extract peptone dextrose media, and the control yeast extract peptone dextrose media is also reported. To our knowledge this is the first dataset to assess global gene expression patterns in a eukaryotic model system simulating asbestos mine tailings runoff exposure. Raw and normalized gene expression data are accessible through the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO) database, Series GSE89875 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE89875).

  3. Relation between Statistics of Radiowave Reception at South Pole Station and Auroral Oval Characteristics: Data and Monte Carlo Simulations

    Science.gov (United States)

    Labelle, J.; Noonan, K.

    2006-12-01

    Despite their remote location, radio receivers at South Pole Station regularly detect AM broadcast band signals propagating from transmitters thousands of kilometers away. Statistical analysis of received radiowave power at South Pole during 2004 and 2005, integrated over the frequency range of AM broadcast stations, reveals a distinctive time-of-day (UT) dependence: a broad maximum in received power centered at 1500 UT corresponds to magnetic daytime, while signal levels are lower during magnetic nighttime apart from a weak peak near magnetic midnight. Expected signal levels were calculated based on two contributions: daytime D-region absorption and auroral absorption. The latter varies with day of year and magnetic local time in a complex fashion due to the asymmetric shape and varying size of the auroral oval and the offset of South Pole from the geomagnetic pole. The Monte Carlo simulations confirm that the enhanced absorption of AM broadcast signals during magnetic nighttime results from auroral absorption. Furthermore, the simulations predict that a weak (<0.5 dB) peak near magnetic midnight, similar to that observed in the data, arises from including in the statistical database intervals when the auroral oval is contracted. These results suggest that ground-based radio observations at a sufficiently remote high-latitude site such as South Pole may effectively monitor auroral oval characteristics, at least on a statistical basis.

  4. A study of the feasibility of statistical analysis of airport performance simulation

    Science.gov (United States)

    Myers, R. H.

    1982-01-01

    The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis-of-variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they were designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte Carlo techniques.
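
    A Monte Carlo power computation of the kind described can be sketched briefly; the skewed capacity distribution, sample sizes, and shifts below are assumptions for illustration, not the study's values, and a plain two-sample t-test stands in for whatever detection procedure the study used.

```python
# Sketch of a Monte Carlo power calculation for detecting a shift in mean capacity
# between two operating conditions when the capacity distribution is non-Gaussian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(shift, n_runs_per_condition=30, n_rep=2000, alpha=0.05):
    hits = 0
    for _ in range(n_rep):
        # Skewed (gamma) capacity distributions for the two conditions.
        a = rng.gamma(shape=20.0, scale=3.0, size=n_runs_per_condition)
        b = rng.gamma(shape=20.0, scale=3.0, size=n_runs_per_condition) + shift
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_rep

for shift in (0.0, 2.0, 4.0):      # capacity change in operations/hour (illustrative)
    print(f"shift = {shift:4.1f}  estimated power = {power(shift):.2f}")
```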

  5. Detecting rater bias using a person-fit statistic: a Monte Carlo simulation study.

    Science.gov (United States)

    Aubin, André-Sébastien; St-Onge, Christina; Renaud, Jean-Sébastien

    2018-04-01

    With the Standards voicing concern for the appropriateness of response processes, we need to explore strategies that would allow us to identify inappropriate rater response processes. Although certain statistics can be used to help detect rater bias, their use is complicated either by a lack of data about their actual power to detect rater bias or by the difficulty of applying them in the context of health professions education. This exploratory study aimed to establish the worthiness of pursuing the use of lz to detect rater bias. We conducted a Monte Carlo simulation study to investigate the power of a specific detection statistic: the standardized likelihood lz person-fit statistic (PFS). Our primary outcome was the detection rate of biased raters, namely raters whom we manipulated into being either stringent (giving lower scores) or lenient (giving higher scores), using the lz statistic while controlling for the number of biased raters in a sample (6 levels) and the rate of bias per rater (6 levels). Overall, stringent raters (M = 0.84, SD = 0.23) were easier to detect than lenient raters (M = 0.31, SD = 0.28). More biased raters were easier to detect than less biased raters (60% bias: M = 0.62, SD = 0.37; 10% bias: M = 0.43, SD = 0.36). The PFS lz seems to offer interesting potential for identifying biased raters. We observed detection rates as high as 90% for stringent raters for whom we manipulated more than half of their checklist. Although we observed very interesting results, we cannot generalize them to the use of PFS with estimated item/station parameters or real data. Such studies should be conducted to assess the feasibility of using PFS to identify rater bias.
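
    For readers unfamiliar with lz, the sketch below shows the statistic under a simple Rasch-type model with assumed item difficulties and a hypothetical "stringent rater" pattern; it is an illustration of the statistic itself, not a reproduction of the study's checklist design or parameters.

```python
# Standardized person-fit statistic lz: large negative values flag misfitting patterns.
import numpy as np

def lz(responses, theta, b):
    """responses: 0/1 vector; theta: person ability; b: item difficulties."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))                 # Rasch success probabilities
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

rng = np.random.default_rng(8)
b = rng.normal(size=30)                                    # 30 hypothetical items

# A model-consistent pattern versus a "stringent rater" pattern in which easy items
# are scored 0 regardless of the model probabilities.
theta = 0.5
fitting = (rng.random(30) < 1 / (1 + np.exp(-(theta - b)))).astype(int)
biased = fitting.copy()
biased[b < -0.5] = 0                                       # lower scores on easy items

print("lz (fitting pattern):", round(lz(fitting, theta, b), 2))
print("lz (biased pattern): ", round(lz(biased, theta, b), 2))
```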

  6. The use of statistics in real and simulated investigations performed by undergraduate health sciences' students

    OpenAIRE

    Pimenta, Rui; Nascimento, Ana; Vieira, Margarida; Costa, Elísio

    2010-01-01

    In previous works, we evaluated the statistical reasoning ability acquired by health sciences' students carrying out their final undergraduate project. We found that these students achieved a good level of statistical literacy and reasoning in descriptive statistics. However, concerning inferential statistics the students did not reach a similar level. Statistics educators therefore call for more effective ways to learn statistics, such as project-based investigations. These can be simulat...

  7. Statistical Methods for Unusual Count Data

    DEFF Research Database (Denmark)

    Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads

    2016-01-01

    microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative...... microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per...
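
    The modeling choice mentioned here (Poisson versus negative binomial for skewed, zero-heavy counts) can be illustrated with simulated data; the covariate, effect size, and dispersion below are assumptions, and statsmodels' GLM interface is used only as one convenient way to fit the two families, not as the authors' analysis code.

```python
# Illustrative comparison: Poisson vs. negative binomial regression on simulated
# overdispersed counts with excess zeros, compared by AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 500
group = rng.integers(0, 2, size=n)                    # covariate (e.g. case/control)
mu = np.exp(-1.0 + 0.8 * group)                       # group-dependent mean rate

# Negative binomial counts (gamma-Poisson mixture) produce skew and many zeros.
y = rng.poisson(rng.gamma(shape=0.5, scale=mu / 0.5))

X = sm.add_constant(group.astype(float))
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=2.0)).fit()

print("Poisson AIC:", round(pois.aic, 1), " negative binomial AIC:", round(nb.aic, 1))
print("group rate ratio (NB):", round(float(np.exp(nb.params[1])), 2))
```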

  8. Hidden Statistics Approach to Quantum Simulations

    Science.gov (United States)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and the classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the

  9. Siting simulation for low-level waste disposal facilities

    International Nuclear Information System (INIS)

    Roop, R.D.; Rope, R.C.

    1985-01-01

    The Mock Site Licensing Demonstration Project has developed the Low-Level Radioactive Waste Siting Simulation, a role-playing exercise designed to facilitate the process of siting and licensing disposal facilities for low-level waste (LLW). This paper describes the development, content, and usefulness of the siting simulation. The simulation can be conducted at a workshop or conference, involves 14 or more participants, and requires about eight hours to complete. The simulation consists of two sessions; in the first, participants negotiate the selection of siting criteria, and in the second, a preferred disposal site is chosen from three candidate sites. The project has sponsored two workshops (in Boston, Massachusetts and Richmond, Virginia) in which the simulation has been conducted for persons concerned with LLW management issues. It is concluded that the simulation can be valuable as a tool for disseminating information about LLW management; a vehicle that can foster communication; and a step toward consensus building and conflict resolution. The DOE National Low-Level Waste Management Program is now making the siting simulation available for use by states, regional compacts, and other organizations involved in development of LLW disposal facilities

  10. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium - Part 1: Theory

    Science.gov (United States)

    Sundberg, R.; Moberg, A.; Hind, A.

    2012-08-01

    A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.

  11. Statistical Methods for Assessments in Simulations and Serious Games. Research Report. ETS RR-14-12

    Science.gov (United States)

    Fu, Jianbin; Zapata, Diego; Mavronikolas, Elia

    2014-01-01

    Simulation or game-based assessments produce outcome data and process data. In this article, some statistical models that can potentially be used to analyze data from simulation or game-based assessments are introduced. Specifically, cognitive diagnostic models that can be used to estimate latent skills from outcome data so as to scale these…

  12. Statistical properties of dynamical systems – Simulation and abstract computation

    International Nuclear Information System (INIS)

    Galatolo, Stefano; Hoyrup, Mathieu; Rojas, Cristóbal

    2012-01-01

    Highlights: ► A survey on results about computation and computability on the statistical properties of dynamical systems. ► Computability and non-computability results for invariant measures. ► A short proof for the computability of the convergence speed of ergodic averages. ► A kind of “constructive” version of the pointwise ergodic theorem. - Abstract: We survey an area of recent development, relating dynamics to theoretical computer science. We discuss some aspects of the theoretical simulation and computation of the long term behavior of dynamical systems. We will focus on the statistical limiting behavior and invariant measures. We present a general method allowing the algorithmic approximation at any given accuracy of invariant measures. The method can be applied in many interesting cases, as we shall explain. On the other hand, we exhibit some examples where the algorithmic approximation of invariant measures is not possible. We also explain how it is possible to compute the speed of convergence of ergodic averages (when the system is known exactly) and how this entails the computation of arbitrarily good approximations of points of the space having typical statistical behaviour (a sort of constructive version of the pointwise ergodic theorem).

  13. Research on cloud background infrared radiation simulation based on fractal and statistical data

    Science.gov (United States)

    Liu, Xingrun; Xu, Qingshan; Li, Xia; Wu, Kaifeng; Dong, Yanbing

    2018-02-01

    Clouds are an important natural phenomenon, and their radiation causes serious interference to infrared detectors. Based on fractal methods and statistical data, a method is proposed to realize cloud-background simulation, in which the cloud infrared radiation data field is assigned using satellite radiation data of clouds. A cloud infrared radiation simulation model is established in MATLAB, and it can generate cloud-background infrared images for different cloud types (low, middle, and high cloud) in different months, bands and sensor zenith angles.
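
    A minimal stand-in for the fractal part of such a simulation is spectral synthesis of a fractional-Brownian-motion-like field; the spectral slope, grid size, and radiance range below are assumptions, and the mapping to satellite-derived cloud radiance statistics is only indicated by a linear rescaling.

```python
# Sketch: generate a fractal cloud-background field by spectral synthesis and rescale
# it to an assumed radiance range.
import numpy as np

rng = np.random.default_rng(10)
n, beta = 256, 3.0                       # grid size and spectral slope (assumed)

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1.0                            # avoid division by zero at the DC component

amplitude = k ** (-beta / 2.0)
phase = np.exp(2j * np.pi * rng.random((n, n)))
field = np.real(np.fft.ifft2(amplitude * phase))

# Rescale to an assumed radiance range taken from cloud statistics (illustrative units).
lo, hi = 2.0, 9.0
radiance = lo + (hi - lo) * (field - field.min()) / (field.max() - field.min())
print("simulated cloud radiance field:", radiance.shape,
      round(radiance.min(), 2), round(radiance.max(), 2))
```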

  14. Confidence Level Computation for Combining Searches with Small Statistics

    OpenAIRE

    Junk, Thomas

    1999-01-01

    This article describes an efficient procedure for computing approximate confidence levels for searches for new particles where the expected signal and background levels are small enough to require the use of Poisson statistics. The results of many independent searches for the same particle may be combined easily, regardless of the discriminating variables which may be measured for the candidate events. The effects of systematic uncertainty in the signal and background models are incorporated ...
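
    The essence of the construction can be shown for a single Poisson counting channel; the paper generalizes this to many combined channels with discriminating variables and systematic uncertainties, none of which are reproduced in this minimal sketch with assumed signal, background, and observed counts.

```python
# Minimal confidence-level sketch for one Poisson counting channel:
# CL_s = CL_{s+b} / CL_b for expected signal s, background b, and observed count n.
from scipy.stats import poisson

def cls(n_obs, s, b):
    cl_sb = poisson.cdf(n_obs, s + b)   # probability of n <= n_obs under signal + background
    cl_b = poisson.cdf(n_obs, b)        # same probability under background only
    return cl_sb / cl_b

s, b, n_obs = 4.0, 3.0, 2               # illustrative values
print(f"CL_s = {cls(n_obs, s, b):.3f}  (signal excluded at 95% CL if CL_s < 0.05)")
```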

  15. Predicting Statistical Distributions of Footbridge Vibrations

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2009-01-01

    The paper considers vibration response of footbridges to pedestrian loading. Employing Newmark and Monte Carlo simulation methods, a statistical distribution of bridge vibration levels is calculated modelling walking parameters such as step frequency and stride length as random variables...
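
    A compact sketch of this Monte Carlo plus Newmark workflow is given below; the single-mode bridge properties, pedestrian load model, and walking-parameter distributions are assumptions for illustration, not the paper's values.

```python
# Monte Carlo over random walking parameters with Newmark (average acceleration)
# time stepping of a single-mode footbridge model; outputs a distribution of peaks.
import numpy as np

rng = np.random.default_rng(11)
m, f_n, zeta, span = 40e3, 2.0, 0.005, 40.0          # modal mass, frequency, damping, length
k = m * (2 * np.pi * f_n) ** 2
c = 2 * zeta * np.sqrt(k * m)
W, alpha1 = 750.0, 0.4                               # pedestrian weight and load factor

def peak_acceleration(f_s, stride, dt=0.005):
    T = span / (f_s * stride)                        # crossing time
    t = np.arange(0.0, T, dt)
    p = alpha1 * W * np.sin(2 * np.pi * f_s * t)     # walking force at mid-span (simplified)
    u = v = 0.0
    a = (p[0] - c * v - k * u) / m
    amax = abs(a)
    for i in range(1, len(t)):                       # Newmark average-acceleration scheme
        du_pred = u + dt * v + 0.25 * dt**2 * a
        dv_pred = v + 0.5 * dt * a
        a_new = (p[i] - c * dv_pred - k * du_pred) / (m + 0.5 * dt * c + 0.25 * dt**2 * k)
        u = du_pred + 0.25 * dt**2 * a_new
        v = dv_pred + 0.5 * dt * a_new
        a = a_new
        amax = max(amax, abs(a))
    return amax

peaks = [peak_acceleration(rng.normal(1.8, 0.1), rng.normal(0.75, 0.05)) for _ in range(200)]
print("95th percentile of peak acceleration [m/s^2]:", round(np.percentile(peaks, 95), 3))
```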

  16. Single photon laser altimeter simulator and statistical signal processing

    Science.gov (United States)

    Vacek, Michael; Prochazka, Ivan

    2013-05-01

    Spaceborne altimeters are common instruments onboard the deep space rendezvous spacecrafts. They provide range and topographic measurements critical in spacecraft navigation. Simultaneously, the receiver part may be utilized for Earth-to-satellite link, one way time transfer, and precise optical radiometry. The main advantage of single photon counting approach is the ability of processing signals with very low signal-to-noise ratio eliminating the need of large telescopes and high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The high interest and promising single photon altimeter applications are low-orbit (˜10 km) and low-radial velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km) where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
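
    The statistical signal processing idea, accumulating photon time tags over many shots and accepting only bins that exceed the Poisson noise background, can be sketched as follows; pulse rate, noise rate, signal strength, and gate length are illustrative assumptions, not the simulator's mission parameters.

```python
# Sketch: histogram single-photon time tags over many shots and test the peak bin
# against a Poisson background threshold before reporting a range.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(12)
shots, gate, bin_w = 1000, 67e-6, 1e-9          # ~10 km gate, 1 ns bins (~15 cm)
t_range = 40e-6                                 # true two-way time of flight (~6 km)
p_signal, noise_rate = 0.05, 1e5                # 0.05 signal photons/shot, 100 kcps noise

signal = t_range + rng.normal(0, 0.5e-9, rng.poisson(p_signal * shots))
noise = rng.uniform(0, gate, rng.poisson(noise_rate * gate * shots))
counts, edges = np.histogram(np.concatenate([signal, noise]),
                             bins=round(gate / bin_w), range=(0, gate))

# Detection threshold: counts exceeding the background with false-alarm prob ~1e-6 per bin.
bg = noise_rate * bin_w * shots
threshold = poisson.ppf(1 - 1e-6, bg)
hit = np.argmax(counts)
if counts[hit] > threshold:
    print("range =", 0.5 * 3e8 * edges[hit], "m with", counts[hit], "counts")
else:
    print("no statistically significant surface return")
```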

  17. The Longitudinal Study of Computer Simulation in Learning Statistics for Hospitality College Students

    Science.gov (United States)

    Huang, Ching-Hsu

    2014-01-01

    The class quasi-experiment was conducted to determine whether using computer simulation teaching strategy enhanced student understanding of statistics concepts for students enrolled in an introductory course. One hundred and ninety-three sophomores in hospitality management department were invited as participants in this two-year longitudinal…

  18. Thermo-dynamical contours of electronic-vibrational spectra simulated using the statistical quantum-mechanical methods

    DEFF Research Database (Denmark)

    Pomogaev, Vladimir; Pomogaeva, Anna; Avramov, Pavel

    2011-01-01

    Three polycyclic organic molecules in various solvents focused on thermo-dynamical aspects were theoretically investigated using the recently developed statistical quantum mechanical/classical molecular dynamics method for simulating electronic-vibrational spectra. The absorption bands of estradiol...

  19. Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Yi Wu

    2010-02-01

    Full Text Available By comparing a hard real-time system and a soft real-time system, this article elicits the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when using classical mathematical methods. Thus, a simulation methodology to optimize the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.

  20. Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software

    Science.gov (United States)

    Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph

    1995-06-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of Nine Book/Software packages that Wiley will publish in FY `95 and `96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The Simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.

  1. Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic

    NARCIS (Netherlands)

    Emons, W.H.M.; Meijer, R.R.; Sijtsma, K.

    2002-01-01

    The accuracy with which the theoretical sampling distribution of van der Flier's person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I

  2. A neighborhood statistics model for predicting stream pathogen indicator levels.

    Science.gov (United States)

    Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S

    2015-03-01

    Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66 % of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1 % higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.

  3. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  4. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
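
    The data structure whose estimation is being compared can be made concrete with a small data-generating sketch for a two-level logistic model with correlated random intercepts and slopes; the covariance matrix, fixed effects, and group sizes are assumptions, and no particular GLMM package is invoked here.

```python
# Simulate a two-level logistic dataset with correlated random effects, the kind of
# structure fit by penalized quasi-likelihood, Laplace, or Gauss-Hermite estimators.
import numpy as np

rng = np.random.default_rng(13)
n_groups, n_per = 100, 20

# Correlated random intercept and slope per group (e.g. per community).
G = np.array([[1.0, 0.3], [0.3, 0.5]])
re = rng.multivariate_normal([0.0, 0.0], G, size=n_groups)

rows = []
for g in range(n_groups):
    x = rng.normal(size=n_per)                       # individual-level covariate
    eta = (-0.5 + re[g, 0]) + (0.8 + re[g, 1]) * x   # fixed effects plus random effects
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    rows.append(np.column_stack([np.full(n_per, g), x, y]))
data = np.vstack(rows)

print("simulated dataset (group, x, y):", data.shape,
      " overall event rate:", round(data[:, 2].mean(), 3))
```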

  5. Evaluating Computer-Based Simulations, Multimedia and Animations that Help Integrate Blended Learning with Lectures in First Year Statistics

    Science.gov (United States)

    Neumann, David L.; Neumann, Michelle M.; Hood, Michelle

    2011-01-01

    The discipline of statistics seems well suited to the integration of technology in a lecture as a means to enhance student learning and engagement. Technology can be used to simulate statistical concepts, create interactive learning exercises, and illustrate real world applications of statistics. The present study aimed to better understand the…

  6. Numerical simulation of swirling flow in complex hydroturbine draft tube using unsteady statistical turbulence models

    Energy Technology Data Exchange (ETDEWEB)

    Paik, Joongcheol [University of Minnesota; Sotiropoulos, Fotis [University of Minnesota; Sale, Michael J [ORNL

    2005-06-01

    A numerical method is developed for carrying out unsteady Reynolds-averaged Navier-Stokes (URANS) simulations and detached-eddy simulations (DESs) in complex 3D geometries. The method is applied to simulate incompressible swirling flow in a typical hydroturbine draft tube, which consists of a strongly curved 90 degree elbow and two piers. The governing equations are solved with a second-order-accurate, finite-volume, dual-time-stepping artificial compressibility approach for a Reynolds number of 1.1 million on a mesh with 1.8 million nodes. The geometrical complexities of the draft tube are handled using domain decomposition with overset (chimera) grids. Numerical simulations show that unsteady statistical turbulence models can capture very complex 3D flow phenomena dominated by geometry-induced, large-scale instabilities and unsteady coherent structures such as the onset of vortex breakdown and the formation of the unsteady rope vortex downstream of the turbine runner. Both URANS and DES appear to yield the general shape and magnitude of mean velocity profiles in reasonable agreement with measurements. Significant discrepancies among the DES and URANS predictions of the turbulence statistics are also observed in the straight downstream diffuser.

  7. Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic

    NARCIS (Netherlands)

    Emons, Wilco H.M.; Meijer, R.R.; Sijtsma, Klaas

    2002-01-01

    The accuracy with which the theoretical sampling distribution of van der Flier’s person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I

  8. Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches

    Science.gov (United States)

    Anagnostopoulou, Christina

    2017-07-01

    Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days mainly characterize the duration and the intensity of Greek droughts, and an increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. Moreover, the most recent regional climate models (RCMs) present discrepancies compared with observed precipitation, while they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) at different time scales (3, 6, and 12 months). A statistical downscaling technique based on an artificial neural network (ANN) is employed for the estimation of SPI over Greece, while this drought index is also estimated using the RCM precipitation for the period 1961-1990. Overall, it was found that the drought characteristics (intensity, duration, and spatial extent) were well reproduced by the regional climate models for the long-term drought index (SPI12), while the ANN simulations are better for the short-term drought indices (SPI3).
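
    For readers unfamiliar with the index, the standard SPI construction (accumulate precipitation over k months, fit a gamma distribution per calendar month, map cumulative probabilities to standard normal quantiles) is sketched below on synthetic data; the precipitation series, record length, and the omission of zero-precipitation handling are simplifying assumptions.

```python
# Sketch of an SPI-3 calculation from a synthetic monthly precipitation series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
years, k = 30, 3                                      # 30 years of data, SPI-3
monthly = rng.gamma(shape=2.0, scale=30.0, size=years * 12)   # mm/month, synthetic

# k-month running accumulation.
acc = np.convolve(monthly, np.ones(k), mode='valid')
spi = np.full(acc.size, np.nan)

for m in range(12):                                   # fit separately per calendar month
    idx = np.arange(m, acc.size, 12)
    sample = acc[idx]
    a, loc, scale = stats.gamma.fit(sample, floc=0)   # two-parameter gamma fit
    cdf = stats.gamma.cdf(sample, a, loc=loc, scale=scale)
    spi[idx] = stats.norm.ppf(cdf)                    # map to standard normal quantiles

print("fraction of months in moderate-or-worse drought (SPI <= -1):",
      round(float(np.mean(spi <= -1)), 3))
```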

  9. Statistical and thermal physics with computer applications

    CERN Document Server

    Gould, Harvey

    2010-01-01

    This textbook carefully develops the main ideas and techniques of statistical and thermal physics and is intended for upper-level undergraduate courses. The authors each have more than thirty years' experience in teaching, curriculum development, and research in statistical and computational physics. Statistical and Thermal Physics begins with a qualitative discussion of the relation between the macroscopic and microscopic worlds and incorporates computer simulations throughout the book to provide concrete examples of important conceptual ideas. Unlike many contemporary texts on the

  10. Variogram based and Multiple - Point Statistical simulation of shallow aquifer structures in the Upper Salzach valley, Austria

    Science.gov (United States)

    Jandrisevits, Carmen; Marschallinger, Robert

    2014-05-01

    Quaternary sediments in overdeepened alpine valleys and basins in the Eastern Alps bear substantial groundwater resources. The associated aquifer systems are generally geometrically complex, with highly variable hydraulic properties. 3D geological models provide predictions of both the geometry and the properties of the subsurface required for subsequent modelling of groundwater flow and transport. In hydrology, geostatistical Kriging and Kriging-based conditional simulations are widely used to predict the spatial distribution of hydrofacies. In the course of investigating the shallow aquifer structures in the Zell basin in the Upper Salzach valley (Salzburg, Austria), a benchmark of available geostatistical modelling and simulation methods was performed: traditional variogram-based geostatistical methods, i.e. Indicator Kriging, Sequential Indicator Simulation and Sequential Indicator Co-Simulation, were used as well as Multiple-Point Statistics. The ~6 km2 investigation area is sampled by 56 drillings with depths of 5 to 50 m; in addition, there are 2 geophysical sections with lengths of 2 km and depths of 50 m. Due to clustered drilling sites, Indicator Kriging models failed to consistently model the spatial variability of hydrofacies. Using classical variogram-based geostatistical simulation (SIS), equally probable realizations were generated, with differences among the realizations providing an uncertainty measure. The resulting models are unstructured from a geological point of view: they do not portray the shapes and lateral extensions of the associated sedimentary units. Since variograms consider only two-point spatial correlations, they are unable to capture the spatial variability of complex geological structures. The Multiple-Point Statistics approach overcomes these limitations of two-point statistics as it uses a training image instead of variograms. The 3D training image can be seen as a reference facies model where geological knowledge about depositional

  11. Development of a two-level modular simulation tool for dysim

    International Nuclear Information System (INIS)

    Kofoed, J.E.

    1987-07-01

    A simulation tool to assist the user in constructing continuous simulation models is described. The simulation tool can be used for constructing simulation programmes that are executed with the runtime executive DYSIM86, which applies a modular approach. This approach makes it possible to split a model into several modules. The simulation tool introduces one more level of modularity: at this level a module is constructed from submodules taken from a library. A submodule consists of a submodel for a component in the complete model. The simulation tool consists of two precompilers working on the two different levels of modularity. The library is completely open to the user, so that it is possible to extend it. This is done by a routine which is also part of the simulation tool. The simulation tool is demonstrated by simulating part of a power plant and part of a sugar factory. This illustrates that the precompilers can be used for simulating different types of process plants. 69 ill., 13 tabs., 41 refs. (author)

  12. Learning Object Names at Different Hierarchical Levels Using Cross-Situational Statistics.

    Science.gov (United States)

    Chen, Chi-Hsin; Zhang, Yayun; Yu, Chen

    2018-05-01

    Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input. Copyright © 2017 Cognitive Science Society, Inc.

  13. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    Directory of Open Access Journals (Sweden)

    Marco Aldinucci

    2014-01-01

    Full Text Available The paper's arguments are on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for effectiveness of the online analysis in capturing biological systems behavior, on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.

  14. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    Science.gov (United States)

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    The paper's arguments are on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for effectiveness of the online analysis in capturing biological systems behavior, on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
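
    The "online analysis of streamed trajectories" idea can be conveyed independently of FastFlow with a streaming mean/variance accumulator (Welford's algorithm); the trajectory generator and dimensions below are placeholders for an actual stochastic simulator.

```python
# Sketch of an online analysis stage: running mean and variance per time point are
# updated as each simulation trajectory arrives, with no need to store all trajectories.
import numpy as np

class OnlineStats:
    def __init__(self, n_timepoints):
        self.n = 0
        self.mean = np.zeros(n_timepoints)
        self.m2 = np.zeros(n_timepoints)

    def update(self, trajectory):            # called as each trajectory is streamed in
        self.n += 1
        delta = trajectory - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (trajectory - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else np.full_like(self.mean, np.nan)

rng = np.random.default_rng(15)
stats_acc = OnlineStats(n_timepoints=100)
for _ in range(1000):                        # stand-in for stochastic simulation runs
    traj = np.cumsum(rng.normal(size=100))   # e.g. a species-count trajectory
    stats_acc.update(traj)                   # partial results are available at any time

print("mean and variance at the final time point:",
      round(stats_acc.mean[-1], 2), round(stats_acc.variance()[-1], 2))
```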

  15. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In nuclear power plants, all safety-related equipment, including cables, that must operate under harsh environments should undergo equipment qualification (EQ) according to IEEE Std 323. There are three types of qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than the analysis or operating experience methods, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these tests, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including the specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation ageing. Because most DBE conditions involve 100% humidity, high-temperature steam must be used to follow the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rapidly rise above the target values. Therefore, the temperature and pressure in the test chamber keep fluctuating during the DBE simulation test around the target temperature and pressure. We should ensure the fairness and accuracy of the test results by confirming the performance of the DBE environment simulation test facility. In this paper, statistical methods are used in order to verify the reliability of the DBE environment simulation test facility.

  16. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Estimation of parameters with confidence intervals and the testing of statistical hypotheses are used in statistical analysis to obtain conclusions about a population from a sample extracted from it. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. Whereas statistical hypothesis testing only gives a "yes" or "no" answer to a question, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).

  17. Designing Solutions by a Student Centred Approach: Integration of Chemical Process Simulation with Statistical Tools to Improve Distillation Systems

    Directory of Open Access Journals (Sweden)

    Isabel M. Joao

    2017-09-01

    Full Text Available Projects thematically focused on simulation and statistical techniques for designing and optimizing chemical processes can be helpful in chemical engineering education in order to meet the needs of engineers. We argue for the relevance of such projects to improve a student-centred approach and boost higher-order thinking skills. This paper addresses the use of Aspen HYSYS by Portuguese chemical engineering master students to model distillation systems, together with statistical experimental design techniques, in order to optimize the systems, highlighting the value of applying problem-specific knowledge, simulation tools and sound statistical techniques. The paper summarizes the work developed by the students to model steady-state and dynamic processes and to optimize the distillation systems, emphasizing the benefits of the simulation tools and statistical techniques in helping the students learn how to learn. Students strengthened their domain-specific knowledge and became motivated to rethink and improve chemical processes in their future chemical engineering profession. We discuss the main advantages of the methodology from the students' and teachers' perspectives.

  18. Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study

    Directory of Open Access Journals (Sweden)

    In Sung Cho

    2017-08-01

    Full Text Available Abstract Background Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach, such as guarantee-time bias, resulting in an overestimation of the drug effect. To overcome such limitations, alternative approaches, such as the time-dependent Cox model and landmark methods, have been proposed. This study aimed to compare the performance of three methods: Cox regression, the time-dependent Cox model and the landmark method with different landmark times, in order to address the problem of guarantee-time bias. Methods Through statistical modeling and simulation studies, the performance of the above three methods was assessed in terms of type I error, bias, power, and mean squared error (MSE). In addition, the three statistical approaches were applied to a real data example from the Korean National Health Insurance Database. The effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error, but the type I error rates were similar. The results from the real-data example showed the same patterns as the simulation findings. Conclusions While both the time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
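
    The guarantee-time (immortal-time) bias itself can be reproduced with a simplified numpy sketch rather than the paper's Cox/landmark comparison: under a true null effect, exposure starts at a random time, and a naive "ever user" classification from baseline manufactures a spurious protective effect that disappears once person-time is split by exposure status. All parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
hazard = 0.05                                     # identical true event hazard for everyone
t_event = rng.exponential(1 / hazard, n)
t_start = rng.uniform(0, 10, n)                   # time at which a subject would start the drug
ever_user = t_start < t_event                     # only those surviving long enough ever start

# Naive (guarantee-time biased) analysis: classify from baseline as "ever user"
naive_ratio = (ever_user.sum() / t_event[ever_user].sum()) / \
              ((~ever_user).sum() / t_event[~ever_user].sum())
print("naive rate ratio:", naive_ratio)           # spuriously well below 1

# Time-dependent analysis: split each subject's follow-up by exposure status
unexposed_time = np.minimum(t_event, t_start).sum()
exposed_time = np.clip(t_event - t_start, 0, None).sum()
events_unexposed = (t_event <= t_start).sum()
events_exposed = (t_event > t_start).sum()
td_ratio = (events_exposed / exposed_time) / (events_unexposed / unexposed_time)
print("time-dependent rate ratio:", td_ratio)     # close to 1, the true null effect
```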

  19. Simulations and cosmological inference: A statistical model for power spectra means and covariances

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles

    2008-01-01

    We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.
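
    As a rough illustration of the emulator idea (not the authors' calibrated framework), a Gaussian process can be trained on a handful of "simulation" runs over a parameter hypercube and then queried, with uncertainty, at untried parameter values; the scikit-learn kernel choice and the toy band-power function below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Stand-in "simulations": a band-power-like summary as a function of two parameters
theta = rng.uniform(0.0, 1.0, size=(40, 2))
band_power = 2.0 * theta[:, 0] ** 1.5 + 0.5 * theta[:, 1] + rng.normal(0, 0.02, 40)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[0.3, 0.3]) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
gp.fit(theta, band_power)

mean, std = gp.predict(np.array([[0.5, 0.5]]), return_std=True)
print(mean, std)        # emulator prediction and uncertainty at an untried parameter point
```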

  20. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, its calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
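
    A bare-bones ensemble smoother (ES) update of the kind used to condition parameters on production data might look like the following numpy sketch; the array shapes, perturbed-observation scheme and toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, obs_err_var, seed=0):
    """One-shot ensemble smoother update.
    M: (n_param, n_ens) parameter ensemble (e.g. log-permeability at pilot points)
    D: (n_obs, n_ens) simulated observations per member; d_obs: (n_obs,) observed data."""
    n_ens = M.shape[1]
    Mp = M - M.mean(axis=1, keepdims=True)
    Dp = D - D.mean(axis=1, keepdims=True)
    C_md = Mp @ Dp.T / (n_ens - 1)                 # parameter-data cross-covariance
    C_dd = Dp @ Dp.T / (n_ens - 1)                 # data covariance
    R = obs_err_var * np.eye(len(d_obs))
    K = C_md @ np.linalg.inv(C_dd + R)             # Kalman-type gain
    rng = np.random.default_rng(seed)
    D_obs = d_obs[:, None] + rng.normal(0.0, obs_err_var ** 0.5, D.shape)  # perturbed observations
    return M + K @ (D_obs - D)

# Toy usage: 50 parameters, 100 ensemble members, 5 production observations
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 100))
D = rng.normal(size=(5, 100))
d_obs = rng.normal(size=5)
print(ensemble_smoother_update(M, D, d_obs, obs_err_var=0.1).shape)
```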

  1. Statistical gamma transitions in 174Hf

    Energy Technology Data Exchange (ETDEWEB)

    Farris, L P; Cizewski, J A; Brinkman, M J; Henry, R G; Lee, C S [Rutgers--the State Univ., New Brunswick, NJ (United States); Khoo, T L; Janssens, R V.F.; Moore, E F; Carpenter, M P; Ahmad, I; Lauritsen, T [Argonne National Lab., IL (United States); Kolata, J J; Beard, K B; Ye, B; Garg, U [Notre Dame Univ., IN (United States); Kaplan, M S; Saladin, J X; Winchell, D [Pittsburgh Univ., PA (United States)

    1992-08-01

    The statistical spectrum extracted from the 172Yb(α,2n)174Hf reaction was fit with Monte Carlo simulations using a modified GDR E1 strength function and several formulations of the level density. (author). 15 refs., 1 tab., 3 figs.

  2. The Vienna LTE-advanced simulators up and downlink, link and system level simulation

    CERN Document Server

    Rupp, Markus; Taranetz, Martin

    2016-01-01

    This book introduces the Vienna Simulator Suite for 3rd-Generation Partnership Project (3GPP)-compatible Long Term Evolution-Advanced (LTE-A) simulators and presents applications to demonstrate their uses for describing, designing, and optimizing wireless cellular LTE-A networks. Part One addresses LTE and LTE-A link level techniques. As there has been high demand for the downlink (DL) simulator, it constitutes the central focus of the majority of the chapters. This part of the book reports on relevant highlights, including single-user (SU), multi-user (MU) and single-input-single-output (SISO) as well as multiple-input-multiple-output (MIMO) transmissions. Furthermore, it summarizes the optimal pilot pattern for high-speed communications as well as different synchronization issues. One chapter is devoted to experiments that show how the link level simulator can provide input to a testbed. This section also uses measurements to present and validate fundamental results on orthogonal frequency division multiple...

  3. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
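
    For reference, Gillespie's direct method mentioned above is easy to sketch for a single immigration-death system; the rate constants and the crude estimate of a summary statistic from repeated sample paths below are illustrative only (the multi-level estimator itself is not shown).

```python
import numpy as np

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=0):
    """Gillespie direct method for 0 -> X (rate k_birth) and X -> 0 (rate k_death * X)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a_birth, a_death = k_birth, k_death * x    # reaction propensities
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)        # exponential waiting time to the next reaction
        if t > t_end:
            break
        x += 1 if rng.random() * a_total < a_birth else -1
    return x

# Crude estimate of one system statistic (copy number at t_end) from many exact sample paths
finals = [gillespie_birth_death(seed=s) for s in range(200)]
print(np.mean(finals))      # should approach the stationary mean k_birth / k_death = 100
```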

  4. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  5. Comparison of Artificial Neural Networks and ARIMA statistical models in simulations of target wind time series

    Science.gov (United States)

    Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas

    2015-04-01

    The wind is the result of complex interactions of numerous mechanisms taking place at small or large scales, so better knowledge of its behavior is essential in a variety of applications, especially in the field of power production from wind turbines. The literature contains a considerable number of models, either physical or statistical, dealing with the simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for wind forecasting and, in the great majority of cases, outperform conventional statistical models. In this study, a number of ANNs with different architectures, created and applied to a dataset of wind time series, are compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds from a wind farm in a hilly Greek region and cover a period of one year (2013). The main goal is to evaluate the models' ability to successfully simulate the wind speed at a significant point (target). Goodness-of-fit statistics are computed to compare the different methods. In general, the ANNs showed the best performance in the estimation of wind speed, prevailing over the ARIMA models.
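
    A benchmark of the ARIMA type used in such comparisons might be fitted as in the following sketch, assuming statsmodels and a synthetic stand-in for the hourly wind series; the (p, d, q) order is an arbitrary illustration, not the order selected in the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
# Placeholder for one year of mean hourly wind speeds (synthetic, persistent, non-negative)
wind = np.clip(6 + 0.05 * np.cumsum(rng.normal(0, 0.3, 8760)), 0, None)

train, test = wind[:-24], wind[-24:]
fitted = ARIMA(train, order=(2, 1, 1)).fit()       # (p, d, q) chosen purely for illustration
forecast = fitted.forecast(steps=24)               # next 24 hourly values

rmse = float(np.sqrt(np.mean((forecast - test) ** 2)))   # goodness-of-fit on held-out hours
print(rmse)
```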

  6. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Science.gov (United States)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and

  7. Scalar energy fluctuations in Large-Eddy Simulation of turbulent flames: Statistical budgets and mesh quality criterion

    Energy Technology Data Exchange (ETDEWEB)

    Vervisch, Luc; Domingo, Pascale; Lodato, Guido [CORIA - CNRS and INSA de Rouen, Technopole du Madrillet, BP 8, 76801 Saint-Etienne-du-Rouvray (France); Veynante, Denis [EM2C - CNRS and Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry (France)

    2010-04-15

    Large-Eddy Simulation (LES) provides space-filtered quantities to compare with measurements, which usually have been obtained using a different filtering operation; hence, numerical and experimental results can be examined side-by-side in a statistical sense only. Instantaneous, space-filtered and statistically time-averaged signals feature different characteristic length-scales, which can be combined in dimensionless ratios. From two canonical manufactured turbulent solutions, a turbulent flame and a passive scalar turbulent mixing layer, the critical values of these ratios under which measured and computed variances (resolved plus sub-grid scale) can be compared without resorting to additional residual terms are first determined. It is shown that actual Direct Numerical Simulation can hardly accommodate a sufficiently large range of length-scales to perform statistical studies of LES filtered reactive scalar-fields energy budget based on sub-grid scale variances; an estimation of the minimum Reynolds number allowing for such DNS studies is given. From these developments, a reliability mesh criterion emerges for scalar LES and scaling for scalar sub-grid scale energy is discussed. (author)

  8. The intermediates take it all: asymptotics of higher criticism statistics and a powerful alternative based on equal local levels.

    Science.gov (United States)

    Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut

    2015-01-01

    The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid seventies. Originally, HC statistics were used in connection with goodness of fit (GOF) tests but they recently gained some attention in the context of testing the global null hypothesis in high dimensional data. The continuing interest for HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails, hence it is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency in the asymptotic and finite behavior of the HC statistic prompts us to provide a new HC test that has better finite properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
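
    For concreteness, the original HC statistic (maximised over an intermediate range of ordered p-values) can be computed as in this sketch; the alpha0 = 0.5 truncation and the sparse-mixture example are assumptions for illustration, not the authors' modified equal-local-levels test.

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """Original-style HC statistic, maximised over the smallest alpha0-fraction of p-values."""
    p = np.sort(np.asarray(pvalues))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: max(1, int(alpha0 * n))].max()

# Global null (uniform p-values) vs. a sparse mixture with a few genuine signals
rng = np.random.default_rng(0)
p_null = rng.uniform(size=10_000)
p_sparse = p_null.copy()
p_sparse[:30] = rng.uniform(0, 1e-4, 30)
print(higher_criticism(p_null), higher_criticism(p_sparse))
```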

  9. Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.

    2010-12-22

    In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace (≈4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to

  10. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
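
    The dependence of a power comparison on the alpha level can be explored directly for chi-square tests with different degrees of freedom; the noncentrality value below is arbitrary, and the sketch simply evaluates power at α = 0.05 and α = 5 × 10⁻⁸ rather than reproducing the note's own calculations.

```python
from scipy.stats import chi2, ncx2

def chi2_power(alpha, df, noncentrality):
    """Power of a chi-square test with df degrees of freedom at significance level alpha."""
    critical_value = chi2.ppf(1 - alpha, df)
    return ncx2.sf(critical_value, df, noncentrality)

nc = 10.0                                          # same noncentrality for both tests (illustrative)
for alpha in (0.05, 5e-8):
    p1 = chi2_power(alpha, df=1, noncentrality=nc)
    p4 = chi2_power(alpha, df=4, noncentrality=nc)
    print(f"alpha={alpha:g}:  power(1 df)={p1:.4f}  power(4 df)={p4:.4f}  ratio={p1 / p4:.3f}")
# How the two tests compare is itself a function of alpha, which is the note's point.
```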

  11. Evaluating the Use of Random Distribution Theory to Introduce Statistical Inference Concepts to Business Students

    Science.gov (United States)

    Larwin, Karen H.; Larwin, David A.

    2011-01-01

    Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…
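
    A minimal example of the kind of bootstrap simulation such courses use: resample one observed sample with replacement to approximate the sampling distribution of the mean (the data and statistic are illustrative, not taken from the study).

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=2.0, size=50)       # one observed (skewed) sample

# Resample with replacement to approximate the sampling distribution of the mean
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])   # percentile bootstrap CI
print(f"sample mean = {sample.mean():.2f}, 95% bootstrap CI = ({ci_low:.2f}, {ci_high:.2f})")
```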

  12. Multidirectional testing of one- and two-level ProDisc-L versus simulated fusions.

    Science.gov (United States)

    Panjabi, Manohar; Henderson, Gweneth; Abjornson, Celeste; Yue, James

    2007-05-20

    An in vitro human cadaveric biomechanical study. To evaluate intervertebral rotation changes due to lumbar ProDisc-L compared with simulated fusion, using follower load and multidirectional testing. Artificial discs, as opposed to the fusions, are thought to decrease the long-term accelerated degeneration at adjacent levels. A biomechanical assessment can be helpful, as the long-term clinical evaluation is impractical. Six fresh human cadaveric lumbar specimens (T12-S1) underwent multidirectional testing in flexion-extension, bilateral lateral bending, and bilateral torsion using the Hybrid test method. First, intact specimen total range of rotation (T12-S1) was determined. Second, using pure moments again, this range of rotation was achieved in each of the 5 constructs: A) ProDisc-L at L5-S1; B) fusion at L5-S1; C) ProDisc-L at L4-L5 and fusion at L5-S1; D) ProDisc-L at L4-L5 and L5-S1; and E) 2-level fusion at L4-L5 to L5-S1. Significant changes in the intervertebral rotations due to each construct were determined at the operated and nonoperated levels using repeated measures single factor ANOVA and Bonferroni statistical tests (P < 0.05). Adjacent-level effects (ALEs) were defined as the percentage changes in intervertebral rotations at the nonoperated levels due to the constructs. One- and 2-level ProDisc-L constructs showed only small ALE in any of the 3 rotations. In contrast, 1- and 2-level fusions showed increased ALE in all 3 directions (average, 7.8% and 35.3%, respectively, for 1 and 2 levels). In the disc plus fusion combination (construct C), the ALEs were similar to the 1-level fusion alone. In general, ProDisc-L preserved physiologic motions at all spinal levels, while the fusion simulations resulted in significant ALE.

  13. Statistical and Computational Techniques in Manufacturing

    CERN Document Server

    2012-01-01

    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters used, conventional approaches are no longer sufficient. Therefore, in manufacturing, statistical and computational techniques have found several applications, namely modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu...

  14. Process simulation and statistical approaches for validating waste form qualification models

    International Nuclear Information System (INIS)

    Kuhn, W.L.; Toland, M.R.; Pulsipher, B.A.

    1989-05-01

    This report describes recent progress toward one of the principal objectives of the Nuclear Waste Treatment Program (NWTP) at the Pacific Northwest Laboratory (PNL): to establish relationships between vitrification process control and glass product quality. During testing of a vitrification system, it is important to show that departures affecting the product quality can be sufficiently detected through process measurements to prevent an unacceptable canister from being produced. Meeting this goal is a practical definition of a successful sampling, data analysis, and process control strategy. A simulation model has been developed and preliminarily tested by applying it to approximate operation of the West Valley Demonstration Project (WVDP) vitrification system at West Valley, New York. Multivariate statistical techniques have been identified and described that can be applied to analyze large sets of process measurements. Information on components, tanks, and time is then combined to create a single statistic through which all of the information can be used at once to determine whether the process has shifted away from a normal condition.
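
    One standard way to combine many correlated process measurements into a single monitoring statistic, in the spirit of the multivariate approach described, is Hotelling's T²; the sketch below, including the in-control data and the approximate F-based control limit, is a generic illustration rather than the NWTP/WVDP implementation.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
p, n = 4, 500                                      # 4 process measurements, 500 in-control samples
baseline = rng.multivariate_normal(np.zeros(p), np.eye(p) + 0.3, size=n)
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

# Approximate F-based control limit for a new observation (1% false-alarm rate)
alpha = 0.01
limit = p * (n - 1) / (n - p) * f.ppf(1 - alpha, p, n - p)

new_obs = np.array([0.2, -0.1, 4.0, 3.0])          # a clearly shifted set of measurements
d = new_obs - mu
t2 = float(d @ cov_inv @ d)                        # Hotelling's T^2: one number from all channels
print(t2, limit, "signal" if t2 > limit else "in control")
```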

  15. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    Science.gov (United States)

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
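
    The core data-generating step, counts with excess zeros, can be sketched as follows with numpy; the zero-inflated negative binomial parameterisation and the block/replicate sizes are illustrative assumptions, not the published simulation model (which is available as the paper's Supplementary Material).

```python
import numpy as np

def zero_inflated_negbin(rng, mean, dispersion, p_zero, size):
    """Counts from a negative binomial (mean/dispersion parameterisation) with extra zeros."""
    p = dispersion / (dispersion + mean)            # numpy's (n, p) parameterisation
    counts = rng.negative_binomial(dispersion, p, size=size)
    counts[rng.random(size) < p_zero] = 0           # structural (excess) zeros
    return counts

rng = np.random.default_rng(11)
blocks, reps = 8, 4                                 # a small randomized block layout
gm = zero_inflated_negbin(rng, mean=12.0, dispersion=2.0, p_zero=0.15, size=blocks * reps)
comparator = zero_inflated_negbin(rng, mean=15.0, dispersion=2.0, p_zero=0.15, size=blocks * reps)
print(gm.mean(), comparator.mean(), (gm == 0).mean())   # inputs for difference/equivalence tests
```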

  16. The effect of a graphical interpretation of a statistic trend indicator (Trigg's Tracking Variable) on the detection of simulated changes.

    Science.gov (United States)

    Kennedy, R R; Merry, A F

    2011-09-01

    Anaesthesia involves processing large amounts of information over time. One task of the anaesthetist is to detect substantive changes in physiological variables promptly and reliably. It has been previously demonstrated that a graphical trend display of historical data leads to more rapid detection of such changes. We examined the effect of a graphical indication of the magnitude of Trigg's Tracking Variable, a simple statistically based trend detection algorithm, on the accuracy and latency of the detection of changes in a micro-simulation. Ten anaesthetists each viewed 20 simulations with four variables displayed as the current value with a simple graphical trend display. Values for these variables were generated by a computer model, and updated every second; after a period of stability a change occurred to a new random value at least 10 units from baseline. In 50% of the simulations an indication of the rate of change was given by a five-level graphical representation of the value of Trigg's Tracking Variable. Participants were asked to indicate when they thought a change was occurring. Changes were detected 10.9% faster with the trend indicator present (mean 13.1 [SD 3.1] cycles vs 14.6 [SD 3.4] cycles, 95% confidence interval 0.4 to 2.5 cycles, P = 0.013). There was no difference in accuracy of detection (median with trend detection 97% [interquartile range 95 to 100%], without trend detection 100% [98 to 100%]; P = 0.8). We conclude that simple statistical trend detection may speed detection of changes during routine anaesthesia, even when a graphical trend display is present.
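
    Trigg's Tracking Variable is the ratio of the exponentially smoothed forecast error to the smoothed absolute error, so it approaches ±1 when errors become persistently one-sided; a minimal sketch applied to a simulated step change is shown below, with an arbitrary smoothing constant and warm-start value (not the study's micro-simulation).

```python
import numpy as np

def triggs_tracking_variable(values, alpha=0.2):
    """Trigg's tracking signal: smoothed error / smoothed absolute error, bounded in [-1, 1]."""
    forecast, smoothed_err, smoothed_abs_err = float(values[0]), 0.0, 1.0  # rough warm-start MAD
    signal = []
    for x in values:
        err = x - forecast
        smoothed_err = alpha * err + (1 - alpha) * smoothed_err
        smoothed_abs_err = alpha * abs(err) + (1 - alpha) * smoothed_abs_err
        signal.append(smoothed_err / smoothed_abs_err)
        forecast += alpha * err                     # simple exponential smoothing forecast
    return np.array(signal)

# A stable noisy variable with a step change at cycle 60
rng = np.random.default_rng(5)
series = np.concatenate([70 + rng.normal(0, 1, 60), 82 + rng.normal(0, 1, 40)])
tt = triggs_tracking_variable(series)
print(np.abs(tt[40:60]).mean(), np.abs(tt[65:85]).mean())   # moderate while stable, near 1 after the change
```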

  17. Wave optics simulation of statistically rough surface scatter

    Science.gov (United States)

    Lanari, Ann M.; Butler, Samuel D.; Marciniak, Michael; Spencer, Mark F.

    2017-09-01

    The bidirectional reflectance distribution function (BRDF) describes optical scatter from surfaces by relating the incident irradiance to the exiting radiance over the entire hemisphere. Laboratory verification of BRDF models and experimentally populated BRDF databases are hampered by sparsity of monochromatic sources and ability to statistically control the surface features. Numerical methods are able to control surface features, have wavelength agility, and via Fourier methods of wave propagation, may be used to fill the knowledge gap. Monte-Carlo techniques, adapted from turbulence simulations, generate Gaussian distributed and correlated surfaces with an area of 1 cm2 , RMS surface height of 2.5 μm, and correlation length of 100 μm. The surface is centered inside a Kirchhoff absorbing boundary with an area of 16 cm2 to prevent wrap around aliasing in the far field. These surfaces are uniformly illuminated at normal incidence with a unit amplitude plane-wave varying in wavelength from 3 μm to 5 μm. The resultant scatter is propagated to a detector in the far field utilizing multi-step Fresnel Convolution and observed at angles from -2 μrad to 2 μrad. The far field scatter is compared to both a physical wave optics BRDF model (Modified Beckmann Kirchhoff) and two microfacet BRDF Models (Priest, and Cook-Torrance). Modified Beckmann Kirchhoff, which accounts for diffraction, is consistent with simulated scatter for multiple wavelengths for RMS surface heights greater than λ/2. The microfacet models, which assume geometric optics, are less consistent across wavelengths. Both model types over predict far field scatter width for RMS surface heights less than λ/2.
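
    The surface-generation step, a Gaussian-height surface with a prescribed RMS height and Gaussian correlation length, can be sketched by filtering white noise, as below; the grid spacing, filtering route and final rescaling are assumptions, not the paper's exact Monte-Carlo procedure.

```python
import numpy as np

def gaussian_rough_surface(n=512, dx=20e-6, sigma_h=2.5e-6, corr_len=100e-6, seed=0):
    """Random surface with Gaussian height statistics and approximately Gaussian
    correlation C(r) ~ sigma_h^2 exp(-r^2 / corr_len^2), made by filtering white noise."""
    rng = np.random.default_rng(seed)
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    a = corr_len / np.sqrt(2.0)                    # kernel width that yields the target correlation
    kernel = np.exp(-(X ** 2 + Y ** 2) / a ** 2)
    noise = rng.normal(size=(n, n))
    h = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(np.fft.ifftshift(kernel))))
    return h * (sigma_h / h.std())                 # rescale to the requested RMS height

surface = gaussian_rough_surface()
print(surface.std(), surface.shape)                # 2.5 um RMS on a ~1 cm x 1 cm grid (512 x 20 um)
```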

  18. Particle acceleration in regions of magnetic flux emergence: a statistical approach using test-particle- and MHD-simulations

    Science.gov (United States)

    Vlahos, Loukas; Archontis, Vasilis; Isliker, Heinz

    We consider 3D nonlinear MHD simulations of an emerging flux tube, from the convection zone into the corona, focusing on the coronal part of the simulations. We first analyze the statistical nature and spatial structure of the electric field, calculating histograms and making use of iso-contour visualizations. Then test-particle simulations are performed for electrons, in order to study heating and acceleration phenomena, as well as to determine HXR emission. This study is done by comparatively exploring quiet, turbulent explosive, and mildly explosive phases of the MHD simulations. Also, the importance of collisional and relativistic effects is assessed, and the role of the integration time is investigated. A particular aim of this project is to verify the quasi-linear assumptions made in standard transport models, and to identify possible transport effects that cannot be captured with the latter. In order to determine the relation of our results to Fermi acceleration and Fokker-Planck modeling, we determine the standard transport coefficients. Overall, we find that the electric field of the MHD simulations must be downscaled in order to prevent an unphysically high degree of acceleration, and the value chosen for the scale factor strongly affects the results. In different MHD time-instances we find heating to take place, as well as acceleration that depends on the level of MHD turbulence. Also, acceleration appears to be a transient phenomenon, there is a kind of saturation effect, and the parallel dynamics clearly dominate the energetics. The HXR spectra are not yet fully compatible with observations; we still have to further explore the scaling of the electric field and the integration times used.

  19. Statistical Models to Assess the Health Effects and to Forecast Ground Level Ozone

    Czech Academy of Sciences Publication Activity Database

    Schlink, U.; Herbath, O.; Richter, M.; Dorling, S.; Nunnari, G.; Cawley, G.; Pelikán, Emil

    2006-01-01

    Vol. 21, No. 4 (2006), p. 547-558 ISSN 1364-8152 R&D Projects: GA AV ČR 1ET400300414 Institutional research plan: CEZ:AV0Z10300504 Keywords: statistical models * ground level ozone * health effects * logistic model * forecasting * prediction performance * neural network * generalised additive model * integrated assessment Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.992, year: 2006

  20. Simulation of statistical systems with not necessarily real and positive probabilities

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1991-01-01

    A new method to determine expectation values of observables in statistical systems with not necessarily real and positive probabilities is proposed. It is tested in a numerical study of the two-dimensional O(3)-symmetric nonlinear σ-model with Symanzik's one-loop improved lattice action. This model is simulated as a polymer system with field-dependent activities which can be made positive definite or indefinite by adjusting additive constants of the action. For a system with indefinite activities the new proposal is found to work. It is also verified that local observables are not affected by far-away polymers with indefinite activities when the system has no long-range order. (orig.)

  1. Alternative interpretations of statistics on health effects of low-level radiation

    International Nuclear Information System (INIS)

    Hamilton, L.D.

    1983-01-01

    Four examples of the interpretation of statistics of data on low-level radiation are reviewed: (a) genetic effects of the atomic bombs at Hiroshima and Nagasaki, (b) cancer at Rocky Flats, (c) childhood leukemia and fallout in Utah, and (d) cancer among workers at the Portsmouth Naval Shipyard. Aggregation of data, adjustment for age, and other problems related to the determination of health effects of low-level radiation are discussed. Troublesome issues related to post hoc analysis are considered.

  2. The effect of project-based learning on students' statistical literacy levels for data representation

    Science.gov (United States)

    Koparan, Timur; Güven, Bülent

    2015-07-01

    The aim of this study is to determine the effect of a project-based learning approach on 8th-grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th-grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before and once after the intervention. All raw scores were converted into linear points using the Winsteps 3.72 modelling program, which performs Rasch analysis, and t-tests and an ANCOVA analysis were carried out on the linear points. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the intervention were shown through the obtained person-item maps.

  3. Identification and simulation for steam generator water level based on Kalman Filter

    International Nuclear Information System (INIS)

    Deng Chen; Zhang Qinshun

    2008-01-01

    In order to effectively control the water level of the steam generator (SG), this paper draws on state-observer theory from modern control and puts forward a method to detect the 'false water level' based on the Kalman filter. The Kalman filter is an efficient tool for estimating state variables from measured values that include noise. Given the heavy measurement noise of the steam flow, constructing a 'false water level' observer with a Kalman filter can effectively recover the state variable of the 'false water level'. Simulations of the dynamic characteristics of the nuclear SG water level process under several typical operating power levels were carried out with the simulation model. The results show that the simulation model accurately identifies the 'false water level' produced by the reverse thermal-dynamic effects of the nuclear SG water level process. The simulation model enables precise analysis of the dynamic characteristics of the nuclear SG water level process, and it can provide a new approach to detecting the 'false water level' of the SG. (authors)
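
    For readers unfamiliar with the estimator, a generic linear Kalman filter applied to a noisy level-like measurement is sketched below; the two-state constant-rate model and the noise covariances are illustrative and much simpler than the coupled steam-generator dynamics treated in the paper.

```python
import numpy as np

def kalman_filter(z, F, H, Q, R, x0, P0):
    """Standard linear Kalman filter:  x_{k+1} = F x_k + w,   z_k = H x_k + v."""
    x, P = x0, P0
    estimates = []
    for zk in z:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # update with the new measurement
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (zk - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy example: a slowly rising level observed with heavy measurement noise
rng = np.random.default_rng(2)
n, dt = 200, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])              # state = [level, level rate]
H = np.array([[1.0, 0.0]])                         # only the level is measured
true_level = np.cumsum(0.05 + rng.normal(0, 0.01, n))
z = (true_level + rng.normal(0, 0.5, n)).reshape(-1, 1)

est = kalman_filter(z, F, H, Q=np.diag([1e-4, 1e-4]), R=np.array([[0.25]]),
                    x0=np.zeros(2), P0=np.eye(2))
print(np.abs(est[-50:, 0] - true_level[-50:]).mean())   # typically far below the 0.5 raw noise
```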

  4. Understanding advanced statistical methods

    CERN Document Server

    Westfall, Peter

    2013-01-01

    Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus-Derivatives and Least Squares; More Calculus-Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...

  5. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Directory of Open Access Journals (Sweden)

    A.-S. Høyer

    2017-12-01

    Full Text Available Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical

  6. Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories

    Directory of Open Access Journals (Sweden)

    Jenny Farmer

    2017-11-01

    Full Text Available Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for quantitative comparison of a pair of MD trajectories are made based on fluctuations for on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins as evident from extraordinarily low p-values.
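
    The RMSF and a distribution-level (Kolmogorov-Smirnov) comparison of fluctuations can be computed as in this sketch, with synthetic arrays standing in for real Cα coordinates; the nonparametric density-estimation step used in the paper is not reproduced.

```python
import numpy as np
from scipy.stats import ks_2samp

def rmsf(coords):
    """RMSF per residue from coordinates of shape (n_frames, n_residues, 3)."""
    mean_pos = coords.mean(axis=0)
    return np.sqrt(((coords - mean_pos) ** 2).sum(axis=-1).mean(axis=0))

# Synthetic stand-ins for a native/mutant trajectory pair (200 frames, 100 residues)
rng = np.random.default_rng(4)
native = rng.normal(scale=0.8, size=(200, 100, 3))
mutant = rng.normal(scale=0.9, size=(200, 100, 3))
print(rmsf(native)[:5], rmsf(mutant)[:5])

# Distribution-level comparison of one residue's fluctuation magnitudes (cf. the KS-based measure)
stat, p = ks_2samp(np.linalg.norm(native[:, 10], axis=1),
                   np.linalg.norm(mutant[:, 10], axis=1))
print(stat, p)
```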

  7. Large-eddy simulation in a mixing tee junction: High-order turbulent statistics analysis

    International Nuclear Information System (INIS)

    Howard, Richard J.A.; Serre, Eric

    2015-01-01

    Highlights: • Mixing and thermal fluctuations in a junction are studied using large eddy simulation. • Adiabatic and conducting steel wall boundaries are tested. • Wall thermal fluctuations are not the same between the flow and the solid. • Solid thermal fluctuations cannot be predicted from the fluid thermal fluctuations. • High-order turbulent statistics show that the turbulent transport term is important. - Abstract: This study analyses the mixing and thermal fluctuations induced in a mixing tee junction with circular cross-sections when cold water flowing in a pipe is joined by hot water from a branch pipe. This configuration is representative of industrial piping systems in which temperature fluctuations in the fluid may cause thermal fatigue damage on the walls. Implicit large-eddy simulations (LES) are performed for equal inflow rates corresponding to a bulk Reynolds number Re = 39,080. Two different thermal boundary conditions are studied for the pipe walls; an insulating adiabatic boundary and a conducting steel wall boundary. The predicted flow structures show a satisfactory agreement with the literature. The velocity and thermal fields (including high-order statistics) are not affected by the heat transfer with the steel walls. However, predicted thermal fluctuations at the boundary are not the same between the flow and the solid, showing that solid thermal fluctuations cannot be predicted by the knowledge of the fluid thermal fluctuations alone. The analysis of high-order turbulent statistics provides a better understanding of the turbulence features. In particular, the budgets of the turbulent kinetic energy and temperature variance allows a comparative analysis of dissipation, production and transport terms. It is found that the turbulent transport term is an important term that acts to balance the production. We therefore use a priori tests to evaluate three different models for the triple correlation

  8. Top-Level Simulation of a Smart-Bolometer Using VHDL Modeling

    Directory of Open Access Journals (Sweden)

    Matthieu DENOUAL

    2012-03-01

    Full Text Available An event-driven modeling technique in standard VHDL is presented in this paper for the high-level simulation of a resistive bolometer operating in closed-loop mode and implementing smart functions. The closed-loop operation is achieved by the capacitively coupled electrical substitution technique. The event-driven VHDL modeling technique is successfully applied to behavioral modeling and simulation of such a multi-physics system involving optical, thermal and electronic mechanisms. The modeling technique allows high-level simulations for the development and validation of the smart-function algorithms of the future integrated smart device.

  9. Multi-Accuracy-Level Burning Plasma Simulations

    International Nuclear Information System (INIS)

    Artaud, J. F.; Basiuk, V.; Garcia, J.; Giruzzi, G.; Huynh, P.; Huysmans, G.; Imbeaux, F.; Johner, J.; Scheider, M.

    2007-01-01

    The design of a reactor grade tokamak is based on a hierarchy of tools. We present here three codes that are presently used for the simulations of burning plasmas. At the first level there is a 0-dimensional code that allows a reasonable range of global parameters to be chosen; in our case the HELIOS code was used for this task. For the second level we have developed a mixed 0-D / 1-D code called METIS that allows the main properties of a burning plasma to be studied, including profiles and all heat and current sources, but always under the constraint of energy and other empirical scaling laws. METIS is a fast code that permits a large number of runs to be performed (a run takes about one minute) and allows the main features of a scenario to be designed, or the results of the 0-D code to be validated on a full time evolution. At the top level, we used the full 1D1/2 suite of codes CRONOS, which gives access to a detailed study of the plasma profile evolution. CRONOS can use a variety of modules for source terms and transport coefficient computation with different levels of complexity and accuracy: from simple estimators to highly sophisticated physics calculations. Thus it is possible to vary the accuracy of burning plasma simulations, as a trade-off with computation time. A wide range of scenario studies can thus be made with CRONOS and then validated with post-processing tools like MHD stability analysis. We will present in this paper results of this multi-level analysis applied to the ITER hybrid scenario. This specific example will illustrate the importance of having several tools for the study of burning plasma scenarios, especially in a domain that present devices cannot access experimentally. (Author)

  10. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    International Nuclear Information System (INIS)

    Rosa, B.; Parishani, H.; Ayala, O.; Wang, L.-P.

    2015-01-01

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate

  11. Mesoscale model simulation of low level equatorial winds over Borneo during the haze episode of September 1997

    Science.gov (United States)

    Mahmud, Mastura

    2009-08-01

    The large-scale vegetation fires instigated by local farmers during the dry period of the major El Niño event in 1997 can be considered one of the worst environmental disasters that have occurred in southeast Asia in recent history. This study investigated the local meteorological characteristics of an equatorial environment within a domain that includes the northwestern part of Borneo from 17 to 27 September 1997, during the height of the haze episode, by utilizing a limited-area three-dimensional meteorological and dispersion model, The Air Pollution Model (TAPM). Daily land and sea breeze conditions near the northwestern coast of Borneo in the state of Sarawak, Malaysia were predicted with moderate success, with an index of agreement of less than one between the observed and simulated wind speeds and a slight overprediction (2.3) of the skill indicator that evaluates the simulated standard deviation against the observed values. The innermost domain of study comprises an area of 24,193 km2, from approximately 109°E to 111°E, and from 1°N to 2.3°N, which includes a part of the South China Sea. Tracer analysis of air particles that were sourced in the state of Sarawak on the island of Borneo verified the existence of the landward and shoreward movements of the air during the simulation of the low-level wind field. Polluted air particles were transported seawards during night-time, and landwards during daytime, highlighting the recirculation features of aged and newer air particles over the eleven days of the model simulation. Near-calm conditions at low levels were simulated by the trajectory analysis from midnight to mid-day on 22 September 1997. Low-level turbulence within the planetary boundary layer in terms of the total kinetic energy was weak, congruent with the weak strength of low-level winds that reduced the ability of the air to transport the pollutants. Statistical evaluation showed that parameters such as the systematic

  12. The Profile of Creativity and Proposing Statistical Problem Quality Level Reviewed From Cognitive Style

    Science.gov (United States)

    Awi; Ahmar, A. S.; Rahman, A.; Minggi, I.; Mulbar, U.; Asdar; Ruslan; Upu, H.; Alimuddin; Hamda; Rosidah; Sutamrin; Tiro, M. A.; Rusli

    2018-01-01

    This research aims to reveal the profile of the level of creativity and the ability to pose statistical problems of students of the 2014 batch of Mathematics Education at the State University of Makassar, in terms of their cognitive style. The research uses an explorative qualitative method, with meta-cognitive scaffolding given during the research. The research hypothesis is that students with a field-independent (FI) cognitive style, when posing statistics problems from the provided information, are already able to pose statistical problems that can be solved and that introduce new data, and such problems qualify as high-quality statistical problems, while students with a field-dependent (FD) cognitive style are generally still limited to posing statistics problems that can be solved but do not introduce new data, and such problems qualify as medium-quality statistical problems.

  13. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    Science.gov (United States)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account when designing a data assimilation methodology. One of the critical components that determines how much observation information enters and propagates into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over a southern state in India, Karnataka. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gage observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one using global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, are simulated more realistically using regional BES than using global BES. It is pointed out that these results have important practical implications for the design of forecast platforms supporting decision-making during extreme weather events.

  14. Statistical aspects of determinantal point processes

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper; Rubak, Ege Holger

    The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical...... inference. We pay special attention to stationary DPPs, where we give a simple condition ensuring their existence, construct parametric models, describe how they can be well approximated so that the likelihood can be evaluated and realizations can be simulated, and discuss how statistical inference...

  15. Energy Level Statistics of SO(5) Limit of Super-symmetry U(6/4) in Interacting Boson-Fermion Model

    International Nuclear Information System (INIS)

    Bai Hongbo; Zhang Jinfu; Zhou Xianrong

    2005-01-01

    We study the energy level statistics of the SO(5) limit of the super-symmetry U(6/4) in odd-A nuclei using the interacting boson-fermion model. The nearest neighbor spacing distribution (NSD) and the spectral rigidity (Δ_3) are investigated, and the factors that affect the properties of the level statistics are discussed. The results show that the boson number N is the dominant factor. If N is small, the interaction strengths of the subgroups SO_B(5) and SO_BF(5) as well as the spin play important roles in the energy level statistics; however, as N increases, the statistics tend toward the Poisson distribution.
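    For readers less familiar with these diagnostics, the sketch below is a generic illustration (not the authors' code, and with an assumed toy spectrum) of how a nearest-neighbor spacing distribution is obtained from a list of energy levels after a crude polynomial unfolding and then compared with the Poisson and Wigner (GOE) reference forms.

```python
import numpy as np

def spacing_distribution(levels):
    """Unfold a spectrum with a low-order polynomial fit of the staircase
    function and return the normalized nearest-neighbor spacings."""
    levels = np.sort(np.asarray(levels, dtype=float))
    staircase = np.arange(1, levels.size + 1)
    coeffs = np.polyfit(levels, staircase, deg=5)
    unfolded = np.polyval(coeffs, levels)
    spacings = np.diff(unfolded)
    return spacings / spacings.mean()

# Reference distributions for comparison
poisson = lambda s: np.exp(-s)                                   # integrable limit
wigner = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)   # GOE surmise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: eigenvalues of a random GOE matrix should follow the Wigner form
    a = rng.normal(size=(500, 500))
    goe = (a + a.T) / 2
    s = spacing_distribution(np.linalg.eigvalsh(goe))
    hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for c, h in zip(centers, hist):
        print(f"s={c:4.2f}  P_data={h:5.3f}  Poisson={poisson(c):5.3f}  Wigner={wigner(c):5.3f}")
```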

  16. Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?

    Science.gov (United States)

    Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle

    2018-03-21

    The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual m

  17. SU-E-J-82: Intra-Fraction Proton Beam-Range Verification with PET Imaging: Feasibility Studies with Monte Carlo Simulations and Statistical Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lou, K [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Rice University, Houston, TX (United States); Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Clark, J [Rice University, Houston, TX (United States)

    2014-06-01

    Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and being imaged by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity ranges. The accuracy and precision of these activity ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separately from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity range are dominated by the number (N) of coincidence events in the reconstructed image, with the corresponding errors decreasing in proportion to 1/sqrt(N), as expected from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
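    The 1/sqrt(N) behaviour can be reproduced with a simple toy model (an illustrative Python sketch under assumed profile shapes and count levels, not the study's MCNPX/GATE workflow): corrupt a noiseless activity-depth profile with Poisson noise at several total counts N, estimate the activity range from the distal 50% falloff, and watch the spread of the estimate shrink roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 200.0, 401)          # depth in mm (hypothetical grid)
# Assumed noiseless activity profile: flat plateau with a smooth distal falloff near 150 mm
profile = 1.0 / (1.0 + np.exp((depth - 150.0) / 2.0))

def estimate_range(counts):
    """Depth at which the smoothed profile first drops to 50% of its plateau value."""
    smooth = np.convolve(counts, np.ones(9) / 9.0, mode="same")
    plateau = smooth[50:250].mean()            # average over the flat part of the profile
    distal = np.where(smooth[250:] < 0.5 * plateau)[0]
    return depth[250 + distal[0]] if distal.size else depth[-1]

for total_counts in (1e4, 1e5, 1e6):
    expected = profile / profile.sum() * total_counts      # expected counts per depth bin
    ranges = [estimate_range(rng.poisson(expected)) for _ in range(200)]
    print(f"N = {total_counts:9.0f}   range precision (std) = {np.std(ranges):6.3f} mm")
```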

  18. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing SIS and SGS is presented. A first stage re-arranges the simulation path, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.

  19. A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson

    International Nuclear Information System (INIS)

    Datta, Nilanjana; Kunz, Herve

    2004-01-01

    We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals
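    For intuition about this crossover (an illustrative Python sketch under simple assumptions, not the authors' analytical calculation), one can diagonalize finite Rosenzweig-Porter-type matrices, H = diag(eps_i) + lambda * V with V drawn from the GOE, and watch the mean consecutive-spacing ratio move from the Poisson value (~0.386) to the GOE value (~0.531) as the coupling lambda grows.

```python
import numpy as np

def mean_spacing_ratio(eigvals):
    """Mean ratio of consecutive level spacings, r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1})."""
    s = np.diff(np.sort(eigvals))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

def rosenzweig_porter(n, lam, rng):
    """Diagonal Poisson-like levels plus a GOE perturbation of strength lam."""
    a = rng.normal(size=(n, n))
    goe = (a + a.T) / np.sqrt(2 * n)
    return np.diag(rng.normal(size=n)) + lam * goe

rng = np.random.default_rng(42)
n, realizations = 200, 20
for lam in (0.0, 0.01, 0.05, 0.2, 1.0):
    ratios = [mean_spacing_ratio(np.linalg.eigvalsh(rosenzweig_porter(n, lam, rng)))
              for _ in range(realizations)]
    print(f"lambda = {lam:5.2f}   <r> = {np.mean(ratios):.3f}   (Poisson ~ 0.386, GOE ~ 0.531)")
```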

  20. Implementing the “Big Data” Concept in Official Statistics

    OpenAIRE

    О. V.

    2017-01-01

    Big data is a huge resource that needs to be used at all levels of economic planning. The article is devoted to the study of the development of the concept of “Big Data” in the world and its impact on the transformation of statistical simulation of economic processes. Statistics at the current stage should take into account the complex system of international economic relations, which functions in the conditions of globalization and brings new forms of economic development in small open ec...

  1. Statistical aspects of determinantal point processes

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper; Rubak, Ege

    The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical infer...

  2. Statistical modelling of monthly mean sea level at coastal tide gauge stations along the Indian subcontinent

    Digital Repository Service at National Institute of Oceanography (India)

    Srinivas, K.; Das, V.K.; DineshKumar, P.K.

    This study investigates the suitability of statistical models for their predictive potential for the monthly mean sea level at different stations along the west and east coasts of the Indian subcontinent. Statistical modelling of the monthly mean...

  3. Statistical criteria for characterizing irradiance time series.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.
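    The kind of statistics one might compute in such a comparison can be illustrated with a short Python sketch (the specific statistics and synthetic series below are assumptions for illustration, not necessarily the three criteria examined in the report): summary moments, a ramp-rate percentile, lag-1 autocorrelation, and a two-sample KS distance between observed and simulated ramp distributions.

```python
import numpy as np
from scipy import stats

def irradiance_summary(series):
    """A few candidate statistics for characterizing an irradiance time series."""
    x = np.asarray(series, dtype=float)
    ramps = np.diff(x)                        # step changes between consecutive time stamps
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # persistence of the series
    return {"mean": x.mean(), "std": x.std(),
            "ramp_p95": np.percentile(np.abs(ramps), 95), "lag1_autocorr": lag1}

def compare(observed, simulated):
    """Compare two series via summary statistics and a KS distance on their ramps."""
    obs, sim = irradiance_summary(observed), irradiance_summary(simulated)
    ks = stats.ks_2samp(np.diff(observed), np.diff(simulated))
    return obs, sim, ks.statistic

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 12 * 60)                                   # one day, 1-minute steps
    clear_sky = 1000 * np.sin(np.pi * t / t.size).clip(min=0)   # synthetic clear-sky curve
    observed = clear_sky * rng.beta(5, 1, size=t.size)          # cloud-modulated "observation"
    simulated = clear_sky * rng.beta(5, 1, size=t.size)         # a synthetic "simulation"
    print(compare(observed, simulated))
```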

  4. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively inexpensive auxiliary simulator, we can effectively fill in the missing spatial data at the required times on the fly by a statistical learning technique, multi-level Gaussian process regression; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in
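    A minimal sketch of the general idea (assuming scikit-learn and toy 1D test functions; this is not the authors' implementation): train a Gaussian process on the discrepancy between a cheap low-fidelity model and a few expensive high-fidelity samples, then use the corrected low-fidelity model to fill in missing high-fidelity values.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical 1D test functions standing in for low- and high-fidelity solvers
def low_fidelity(x):   # cheap, biased auxiliary model
    return np.sin(8 * x)

def high_fidelity(x):  # expensive "truth"
    return np.sin(8 * x) + 0.3 * x**2 + 0.1

x_dense = np.linspace(0, 1, 200)[:, None]   # where the cheap model is available
x_high = np.linspace(0, 1, 8)[:, None]      # only a few expensive evaluations

# GP trained on the low-to-high discrepancy at the sparse high-fidelity points
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6, normalize_y=True)
gp.fit(x_high, high_fidelity(x_high).ravel() - low_fidelity(x_high).ravel())

# Fused prediction = cheap model everywhere + learned correction
correction, sigma = gp.predict(x_dense, return_std=True)
fused = low_fidelity(x_dense).ravel() + correction
print("max abs error of fused surrogate:",
      np.abs(fused - high_fidelity(x_dense).ravel()).max())
```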

  5. Numerical simulations on self-leveling behaviors with cylindrical debris bed

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Liancheng, E-mail: Liancheng.guo@kit.edu [Institute for Nuclear and Energy Technologies (IKET), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Morita, Koji, E-mail: morita@nucl.kyushu-u.ac.jp [Faculty of Engineering, Kyushu University, 2-3-7, 744 Motooka, Nishi-ku, Fukuoka 819-0395 (Japan); Tobita, Yoshiharu, E-mail: tobita.yoshiharu@jaea.go.jp [Fast Reactor Safety Technology Development Department, Japan Atomic Energy Agency, 4002 Narita, O-arai, Ibaraki 311-1393 (Japan)

    2017-04-15

    Highlights: • A 3D coupled method was developed by combining DEM with the multi-fluid model of the SIMMER-IV code. • The method was validated by performing numerical simulations of a series of experiments with cylindrical particle beds. • Reasonable agreement demonstrates the applicability of the method in reproducing the self-leveling behavior. • Sensitivity analysis on some model parameters was performed to assess their impacts. - Abstract: Postulated core disruptive accidents (CDAs) are regarded as particularly difficult issues in the safety analysis of liquid-metal fast reactors (LMFRs). In a CDA, core debris may settle on the core-support structure and form conic bed mounds. The debris bed can then be leveled by heat convection and vaporization of the surrounding sodium coolant, a process named "self-leveling behavior". Self-leveling is a crucial issue in the safety analysis, due to its significant effect on the relocation of the molten core and on the heat-removal capability of the debris bed. Considering its complicated multiphase mechanism, a comprehensive computational tool is needed to reasonably simulate transient particle behavior as well as the thermal-hydraulic phenomena of the surrounding fluid phases. The SIMMER program is a successful computer code initially developed as an advanced tool for CDA analysis of LMFRs. It is a multi-velocity-field, multiphase, multicomponent, Eulerian, fluid dynamics code coupled with a fuel-pin model and a space- and energy-dependent neutron kinetics model. Until now, the code has been successfully applied in numerical simulations reproducing key thermal-hydraulic phenomena involved in CDAs as well as in reactor safety assessments. However, strong interactions between massive solid particles, as well as particle characteristics in multiphase flows, were not taken into consideration in its fluid-dynamics models. To solve this problem, a new method is developed by combining the discrete element method (DEM

  6. Examining publication bias—a simulation-based evaluation of statistical tests on publication bias

    Directory of Open Access Journals (Sweden)

    Andreas Schneck

    2017-11-01

    Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods Four tests on publication bias, Egger's test (FAT), p-uniform, the test of excess significance (TES), as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of observations for the publication bias tests (K = 100, 1,000) were varied. Results All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking conditions. The false positive rates were, with the exception of the 15%- and 20%-caliper tests, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The caliper tests (CTs) were, however, inferior to the other tests under effect homogeneity and had decent statistical power only in conditions with 1,000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected, as well as under p-hacking, the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
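    For orientation, one common implementation of the funnel-plot asymmetry test (FAT) is a weighted regression of effect estimates on their standard errors; the sketch below is an illustrative Python version (assuming statsmodels and a crude file-drawer selection mechanism), not the simulation code used in the study.

```python
import numpy as np
import statsmodels.api as sm

def funnel_asymmetry_test(effects, std_errors):
    """Egger-type FAT: weighted regression of effect estimates on their standard
    errors; a significant slope suggests small-study effects / publication bias."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    X = sm.add_constant(se)                        # intercept + standard error
    fit = sm.WLS(effects, X, weights=1.0 / se**2).fit()
    return fit.params[1], fit.pvalues[1]           # slope on SE and its p-value

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    k = 200                                         # number of primary studies
    se = rng.uniform(0.05, 0.5, size=k)
    est = rng.normal(loc=0.5, scale=se)             # unbiased studies, true effect 0.5
    # Crude file-drawer mechanism: only statistically significant estimates get published
    published = est / se > 1.96
    slope, p = funnel_asymmetry_test(est[published], se[published])
    print(f"FAT slope = {slope:.3f}, p-value = {p:.4f}")
```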

  7. Statistics of Deep Convection in the Congo Basin Derived From High-Resolution Simulations.

    Science.gov (United States)

    White, B.; Stier, P.; Kipling, Z.; Gryspeerdt, E.; Taylor, S.

    2016-12-01

    Convection transports moisture, momentum, heat and aerosols through the troposphere, and so the temporal variability of convection is a major driver of global weather and climate. The Congo basin is home to some of the most intense convective activity on the planet and is under strong seasonal influence of biomass burning aerosol. However, deep convection in the Congo basin remains understudied compared to other regions of tropical storm systems, especially the neighbouring, relatively well-understood West African climate system. We use the WRF model to perform a high-resolution, cloud-system-resolving simulation to investigate convective storm systems in the Congo. Our setup pushes the boundaries of current computational resources, using a 1 km grid length over a domain covering millions of square kilometres and a time period of one month. This allows us to draw statistical conclusions on the nature of the simulated storm systems. Comparing data from satellite observations and the model enables us to quantify the diurnal variability of deep convection in the Congo basin, and allows us to evaluate our simulations despite the lack of in-situ observational data. This provides a more comprehensive analysis of the diurnal cycle than has previously been shown. Further, we show that high-resolution convection-permitting simulations performed over near-seasonal timescales can be used in conjunction with satellite observations as an effective tool to evaluate new convection parameterisations.

  8. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
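    A bare-bones version of this kind of Monte Carlo significance test (an illustrative sketch on synthetic data, not the study's code): estimate the trend in a series of heavy-precipitation values with Kendall's τ, then compare it against a null distribution built by resampling the series.

```python
import numpy as np
from scipy import stats

def trend_significance(y, n_boot=5000, seed=0):
    """Kendall's tau trend test plus a non-parametric Monte Carlo null distribution
    obtained by shuffling the series (which destroys any time ordering)."""
    rng = np.random.default_rng(seed)
    t = np.arange(y.size)
    tau_obs, p_kendall = stats.kendalltau(t, y)
    tau_null = np.array([stats.kendalltau(t, rng.permutation(y))[0]
                         for _ in range(n_boot)])
    p_boot = np.mean(np.abs(tau_null) >= abs(tau_obs))
    return tau_obs, p_kendall, p_boot

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    years = 60
    # Synthetic series of annual heavy-precipitation maxima with a weak imposed trend
    series = rng.gamma(shape=2.0, scale=10.0, size=years) + 0.15 * np.arange(years)
    print(trend_significance(series))
```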

  9. Fluidized-bed calcination of simulated commercial high-level radioactive wastes

    International Nuclear Information System (INIS)

    Freeby, W.A.

    1975-11-01

    Work is in progress at the Idaho Chemical Processing Plant to verify process flowsheets for converting simulated commercial high-level liquid wastes to granular solids using the fluidized-bed calcination process. The primary emphasis in the series of runs reported was to define flowsheets for calcining simulated Allied-General Nuclear Services (AGNS) waste and to evaluate product properties significant to calcination, solids storage, or post-treatment. Pilot-plant studies using simulated high-level acid wastes representative of those to be produced by Nuclear Fuel Services, Inc. (NFS) are also included. Combined AGNS high-level and intermediate-level waste (0.26 M Na in the blend) was successfully calcined when powdered iron was added to the feed (to give a Na/Fe mole ratio of 1.0) to prevent particle agglomeration due to sodium nitrate. Long-term runs (approximately 100 hours) showed that calcination of the combined waste is practical. Concentrated AGNS waste containing sodium at concentrations less than 0.2 M was calcined successfully; concentrated waste containing 1.13 M Na calcined successfully when powdered iron was added to the feed to suppress sodium nitrate formation. Calcination of dilute AGNS waste by conventional fluid-bed techniques was unsuccessful due to the inability to control bed particle size: both particle size and bed level decreased. Fluid-bed solidification of AGNS dilute waste under conditions in which most of the calcined solids left the calciner vessel with the off-gas was successful. In such a concept, the steady-state composition of the bed material would be approximately 22 wt percent calcined solids deposited on inert particles. Calcination of simulated NFS acid waste indicated that solidification by the fluid-bed process is feasible

  10. Statistical Literacy: Simulations with Dolphins

    Science.gov (United States)

    Strayer, Jeremy; Matuszewski, Amber

    2016-01-01

    In this article, Strayer and Matuszewski present a six-phase strategy that teachers can use to help students develop a conceptual understanding of inferential hypothesis testing through simulation. As Strayer and Matuszewski discuss the strategy, they describe each phase in general, explain how they implemented the phase while teaching their…

  11. Statistical analysis of Thematic Mapper Simulator data for the geobotanical discrimination of rock types in southwest Oregon

    Science.gov (United States)

    Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.

    1984-01-01

    An evaluation of Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics is addressed in this research. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.

  12. Rainfall Downscaling Conditional on Upper-air Variables: Assessing Rainfall Statistics in a Changing Climate

    Science.gov (United States)

    Langousis, Andreas; Deidda, Roberto; Marrocu, Marino; Kaleris, Vassilios

    2014-05-01

    Due to its intermittent and highly variable character, and the modeling parameterizations used, precipitation is one of the least well reproduced hydrologic variables by both Global Climate Models (GCMs) and Regional Climate Models (RCMs). This is especially the case at a regional level (where hydrologic risks are assessed) and at small temporal scales (e.g. daily) used to run hydrologic models. In an effort to remedy those shortcomings and assess the effect of climate change on rainfall statistics at hydrologically relevant scales, Langousis and Kaleris (2013) developed a statistical framework for simulation of daily rainfall intensities conditional on upper air variables. The developed downscaling scheme was tested using atmospheric data from the ERA-Interim archive (http://www.ecmwf.int/research/era/do/get/index), and daily rainfall measurements from western Greece, and was proved capable of reproducing several statistical properties of actual rainfall records, at both annual and seasonal levels. This was done solely by conditioning rainfall simulation on a vector of atmospheric predictors, properly selected to reflect the relative influence of upper-air variables on ground-level rainfall statistics. In this study, we apply the developed framework for conditional rainfall simulation using atmospheric data from different GCM/RCM combinations. This is done using atmospheric data from the ENSEMBLES project (http://ensembleseu.metoffice.com), and daily rainfall measurements for an intermediate-sized catchment in Italy; i.e. the Flumendosa catchment. Since GCM/RCM products are suited to reproduce the local climatology in a statistical sense (i.e. in terms of relative frequencies), rather than ensuring a one-to-one temporal correspondence between observed and simulated fields (i.e. as is the case for ERA-interim reanalysis data), we proceed in three steps: a) we use statistical tools to establish a linkage between ERA-Interim upper-air atmospheric forecasts and

  13. Large scale statistics for computational verification of grain growth simulations with experiments

    International Nuclear Information System (INIS)

    Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.

    2002-01-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface-dominated material properties, and finally to verify experimental results with computer simulations. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from Electron Backscattered Diffraction (EBSD) measurements. Using the same technique, we obtained 5170-grain data from an Aluminum film (120 μm thick) with a columnar grain structure. The experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Characterization of the structures and properties of grain boundary networks (GBN) to produce desirable microstructures is one of the fundamental problems in interface science. There is ongoing research into the development of new experimental and analytical techniques in order to obtain and synthesize information related to GBN. The grain boundary energy and mobility data were characterized by the Electron Backscattered Diffraction (EBSD) technique and Atomic Force Microscopy (AFM) observations (i.e., for ceramic MgO and for the metal Al). Grain boundary energies are extracted from triple junction (TJ) geometry considering the local equilibrium condition at TJs. Relative boundary mobilities were also extracted from TJs through a statistical/multiscale analysis. Additionally, there are recent theoretical developments of grain boundary evolution in microstructures. In this paper, a new technique for three-dimensional grain growth simulations was used to simulate interface migration

  14. Simulation on a car interior aerodynamic noise control based on statistical energy analysis

    Science.gov (United States)

    Chen, Xin; Wang, Dengfeng; Ma, Zhengdong

    2012-09-01

    How to simulate interior aerodynamic noise accurately is an important question in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces proves to be the key factor in car interior aerodynamic noise control at high frequencies and high speeds. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model is a solid foundation for further optimization of car interior noise control. After the subsystems whose power contributions to the interior noise are most sensitive have been identified by a comprehensive SEA analysis, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Subsequent vehicle testing results show that the interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with the calculation of unsteady aerodynamic pressure on body surfaces and improved sound/damping properties of materials. A reduction of more than 2 dB is achieved at centre frequencies above 800 Hz in the spectrum. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and acoustic contribution sensitivity analysis.

  15. Statistical methods for determination of background levels for naturally occurring radionuclides in soil at a RCRA facility

    International Nuclear Information System (INIS)

    Guha, S.; Taylor, J.H.

    1996-01-01

    It is critical that summary statistics on background data, or background levels, be computed based on standardized and defensible statistical methods because background levels are frequently used in subsequent analyses and comparisons performed by separate analysts over time. The final background for naturally occurring radionuclide concentrations in soil at a RCRA facility, and the associated statistical methods used to estimate these concentrations, are presented. The primary objective is to describe, via a case study, the statistical methods used to estimate 95% upper tolerance limits (UTL) on radionuclide background soil data sets. A 95% UTL on background samples can be used as a screening level concentration in the absence of definitive soil cleanup criteria for naturally occurring radionuclides. The statistical methods are based exclusively on EPA guidance. This paper includes an introduction, a discussion of the analytical results for the radionuclides and a detailed description of the statistical analyses leading to the determination of 95% UTLs. Soil concentrations reported are based on validated data. Data sets are categorized as surficial soil; samples collected at depths from zero to one-half foot; and deep soil, samples collected from 3 to 5 feet. These data sets were tested for statistical outliers and underlying distributions were determined by using the chi-squared test for goodness-of-fit. UTLs for the data sets were then computed based on the percentage of non-detects and the appropriate best-fit distribution (lognormal, normal, or non-parametric). For data sets containing greater than approximately 50% nondetects, nonparametric UTLs were computed
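    As a concrete illustration of one standard recipe (not necessarily the exact EPA procedure followed in the paper), a 95% coverage / 95% confidence upper tolerance limit for lognormally distributed background concentrations can be computed on the log scale with a tolerance factor based on the noncentral t distribution.

```python
import numpy as np
from scipy import stats

def utl_lognormal(data, coverage=0.95, confidence=0.95):
    """One-sided upper tolerance limit for lognormal data:
    exp(mean_log + k * sd_log), with k from the noncentral t distribution."""
    x = np.log(np.asarray(data, dtype=float))
    n = x.size
    z_p = stats.norm.ppf(coverage)
    # Tolerance factor: k = t'_{confidence, n-1, ncp = z_p * sqrt(n)} / sqrt(n)
    k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    return float(np.exp(x.mean() + k * x.std(ddof=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Hypothetical background activity concentrations (e.g., pCi/g), roughly lognormal
    background = rng.lognormal(mean=0.0, sigma=0.4, size=30)
    print(f"95%/95% UTL = {utl_lognormal(background):.3f}")
```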

  16. A multivariate statistical study on a diversified data gathering system for nuclear power plants

    International Nuclear Information System (INIS)

    Samanta, P.K.; Teichmann, T.; Levine, M.M.; Kato, W.Y.

    1989-02-01

    In this report, multivariate statistical methods are presented and applied to demonstrate their use in analyzing nuclear power plant operational data. For analyses of nuclear power plant events, approaches are presented for detecting malfunctions and degradations within the course of the event. At the system level, approaches are investigated as a means of diagnosis of system level performance. This involves the detection of deviations from normal performance of the system. The input data analyzed are the measurable physical parameters, such as steam generator level, pressurizer water level, auxiliary feedwater flow, etc. The study provides the methodology and illustrative examples based on data gathered from simulation of nuclear power plant transients and computer simulation of a plant system performance (due to lack of easily accessible operational data). Such an approach, once fully developed, can be used to explore statistically the detection of failure trends and patterns and prevention of conditions with serious safety implications. 33 refs., 18 figs., 9 tabs

  17. Misuse of statistics in the interpretation of data on low-level radiation

    International Nuclear Information System (INIS)

    Hamilton, L.D.

    1982-01-01

    Four misuses of statistics in the interpretation of data of low-level radiation are reviewed: (1) post-hoc analysis and aggregation of data leading to faulty conclusions in the reanalysis of genetic effects of the atomic bomb, and premature conclusions on the Portsmouth Naval Shipyard data; (2) inappropriate adjustment for age and ignoring differences between urban and rural areas leading to potentially spurious increase in incidence of cancer at Rocky Flats; (3) hazard of summary statistics based on ill-conditioned individual rates leading to spurious association between childhood leukemia and fallout in Utah; and (4) the danger of prematurely published preliminary work with inadequate consideration of epidemiological problems - censored data - leading to inappropriate conclusions, needless alarm at the Portsmouth Naval Shipyard, and diversion of scarce research funds

  18. Misuse of statistics in the interpretation of data on low-level radiation

    Energy Technology Data Exchange (ETDEWEB)

    Hamilton, L.D.

    1982-01-01

    Four misuses of statistics in the interpretation of data of low-level radiation are reviewed: (1) post-hoc analysis and aggregation of data leading to faulty conclusions in the reanalysis of genetic effects of the atomic bomb, and premature conclusions on the Portsmouth Naval Shipyard data; (2) inappropriate adjustment for age and ignoring differences between urban and rural areas leading to potentially spurious increase in incidence of cancer at Rocky Flats; (3) hazard of summary statistics based on ill-conditioned individual rates leading to spurious association between childhood leukemia and fallout in Utah; and (4) the danger of prematurely published preliminary work with inadequate consideration of epidemiological problems - censored data - leading to inappropriate conclusions, needless alarm at the Portsmouth Naval Shipyard, and diversion of scarce research funds.

  19. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.

  20. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    Science.gov (United States)

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in these multiple hypothesis tests. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters and therefore, by construction, rejected the large clusters as false positives at the nominal
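    To see where such clusters come from, one can simulate a smooth Gaussian random field, threshold it, and look at the cluster-size distribution; the sketch below (illustrative Python with assumed field size, smoothness and threshold, not the authors' pipeline) does this for a 3D field.

```python
import numpy as np
from scipy import ndimage

def grf_cluster_sizes(shape=(64, 64, 64), fwhm=6.0, threshold=2.3, seed=0):
    """Simulate a smooth Gaussian random field by filtering white noise,
    threshold it, and return the sizes of the supra-threshold clusters."""
    rng = np.random.default_rng(seed)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma
    field = ndimage.gaussian_filter(rng.normal(size=shape), sigma)
    field /= field.std()                                 # re-standardize after smoothing
    labels, n_clusters = ndimage.label(field > threshold)
    sizes = ndimage.sum(field > threshold, labels, index=range(1, n_clusters + 1))
    return np.sort(np.asarray(sizes))[::-1]

if __name__ == "__main__":
    sizes = grf_cluster_sizes()
    print(f"{sizes.size} clusters; mean size = {sizes.mean():.1f} voxels; "
          f"largest three = {sizes[:3]} voxels")
```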

  1. Using statistical sensitivities for adaptation of a best-estimate thermo-hydraulic simulation model

    International Nuclear Information System (INIS)

    Liu, X.J.; Kerner, A.; Schaefer, A.

    2010-01-01

    On-line adaptation of best-estimate simulations of NPP behaviour to time-dependent measurement data can be used to ensure that simulations performed in parallel to plant operation develop synchronously with the real plant behaviour even over extended periods of time. This opens a range of applications including operator support in non-standard situations, improving diagnostics and validation of measurements in real plants or experimental facilities. A number of adaptation methods have been proposed and successfully applied to control problems. However, these methods are difficult to apply to best-estimate thermal-hydraulic codes, such as TRACE and ATHLET, with their large nonlinear differential equation systems and sophisticated time integration techniques. This paper presents techniques that use statistical sensitivity measures to overcome those problems by reducing the number of parameters subject to adaptation. It describes how to identify the most significant parameters for adaptation and how this information can be used by combining: -decomposition techniques splitting the system into a small set of component parts with clearly defined interfaces where boundary conditions can be derived from the measurement data, -filtering techniques to ensure that the time frame for adaptation is meaningful, -numerical sensitivities to find minimal error conditions. The suitability of combining these techniques is shown by application to an adaptive simulation of the PKL experiment.

  2. Discrete ellipsoidal statistical BGK model and Burnett equations

    Science.gov (United States)

    Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei

    2018-06-01

    A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced, and the relations between the nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their spatial derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.

  3. CFD simulation and statistical analysis of moisture transfer into an electronic enclosure

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Jabbaribehnam, Mirmasoud; Hattel, Jesper Henri

    2017-01-01

    CFD model for the isothermal case. The model is then combined with a two-level factorial design to identify the significant factors as well as the potential interactions using the numerical simulation results. In the second part of this study, a non-isothermal case is studied, in which the enclosure
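    The two-level factorial design mentioned here can be illustrated with a generic sketch (the factor names and response model below are assumptions for illustration, not the paper's actual CFD variables): enumerate the 2^k runs in coded units and estimate main effects and two-factor interactions from the simulation responses.

```python
import itertools
import numpy as np

def full_factorial(k):
    """All 2^k runs in coded units (-1 / +1)."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

def effects(design, response, names):
    """Main effects and two-factor interactions for a two-level full factorial
    (effect = mean response at +1 minus mean response at -1)."""
    out = {}
    for i, name in enumerate(names):
        out[name] = float(2 * np.mean(response * design[:, i]))
    for i, j in itertools.combinations(range(len(names)), 2):
        out[f"{names[i]}x{names[j]}"] = float(2 * np.mean(response * design[:, i] * design[:, j]))
    return out

if __name__ == "__main__":
    names = ["temperature", "humidity", "opening_size"]   # hypothetical factors
    X = full_factorial(len(names))
    # Hypothetical simulation responses (e.g., moisture ingress) for the 8 runs
    y = 5.0 + 1.2 * X[:, 0] + 0.4 * X[:, 1] + 0.9 * X[:, 0] * X[:, 1] + 0.05 * X[:, 2]
    for term, value in effects(X, y, names).items():
        print(f"{term:>28s}: {value:+.2f}")
```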

  4. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large sample theory with many worked examples, numerical calculations, and simulations to illustrate theory. Appendices provide ready access to a number of standard results, with many proofs. Solutions given to a number of selected exercises from Part I. Part II exercises with ...

  5. Geant4 electromagnetic physics for high statistic simulation of LHC experiments

    CERN Document Server

    Allison, J; Bagulya, A; Champion, C; Elles, S; Garay, F; Grichine, V; Howard, A; Incerti, S; Ivanchenko, V; Jacquemier, J; Maire, M; Mantero, A; Nieminen, P; Pandola, L; Santin, G; Sawkey, D; Schalicke, A; Urban, L

    2012-01-01

    An overview of the current status of the electromagnetic (EM) physics of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large-scale production for the LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve the accuracy without compromising CPU speed for EM particle transport. New biasing options have been introduced, which are applicable to any EM process. These include algorithms to enhance and suppress processes, force interactions or split secondary particles. It is shown that the performance of the EM sub-package is improved. We also report extensions of the testing suite allowing high-statistics validation of EM physics. It includes validation of multiple scattering, bremsstrahlung and other models. Cross checks between standard and low-energy EM models have been performed using evaluated data libraries and reference benchmark results.

  6. Statistics for long irregular wave run-up on a plane beach from direct numerical simulations

    Science.gov (United States)

    Didenkulova, Ira; Senichev, Dmitry; Dutykh, Denys

    2017-04-01

    Very often, for global and transoceanic events, due to the initial wave transformation, refraction, diffraction and multiple reflections from coastal topography and underwater bathymetry, the tsunami approaches the beach as a very long wave train, which can be considered an irregular wave field. In this case the prediction of possible flooding and of the properties of the water flow on the coast should be done statistically, taking into account the formation of extreme (rogue) tsunami waves on a beach. When it comes to tsunami run-up on a beach, the most used mathematical model is the nonlinear shallow water model. For a beach of constant slope, the nonlinear shallow water equations have a rigorous analytical solution, which substantially simplifies the mathematical formulation. In (Didenkulova et al. 2011) we used this solution to study statistical characteristics of the vertical displacement of the moving shoreline and its horizontal velocity. The influence of wave nonlinearity was approached by considering modifications of the probability distributions of the moving shoreline and its horizontal velocity for waves of different amplitudes. It was shown that wave nonlinearity did not affect the probability distribution of the velocity of the moving shoreline, while the vertical displacement of the moving shoreline was affected substantially, demonstrating the longer duration of coastal floods with an increase in wave nonlinearity. However, this analysis did not take into account the actual transformation of an irregular wave field offshore into oscillations of the moving shoreline on a sloping beach. In this study we close this gap by means of extensive numerical simulations. The modeling is performed in the framework of the nonlinear shallow water equations, which are solved using a modern shock-capturing finite volume method. Although the shallow water model does not describe wave breaking and bore formation in a general sense (including the water surface

  7. The Effect of Project Based Learning on the Statistical Literacy Levels of Student 8th Grade

    Science.gov (United States)

    Koparan, Timur; Güven, Bülent

    2014-01-01

    This study examines the effect of project based learning on 8th grade students' statistical literacy levels. A performance test was developed for this aim. Quasi-experimental research model was used in this article. In this context, the statistics were taught with traditional method in the control group and it was taught using project based…

  8. The Heuristics of Statistical Argumentation: Scaffolding at the Postsecondary Level

    Science.gov (United States)

    Pardue, Teneal Messer

    2017-01-01

    Language plays a key role in statistics and, by extension, in statistics education. Enculturating students into the practice of statistics requires preparing them to communicate results of data analysis. Statistical argumentation is one way of providing structure to facilitate discourse in the statistics classroom. In this study, a teaching…

  9. Uncertainties Related to Extreme Event Statistics of Sewer System Surcharge and Overflow

    DEFF Research Database (Denmark)

    Schaarup-Jensen, Kjeld; Johansen, C.; Thorndahl, Søren Liedtke

    2005-01-01

    Today it is common practice - in the major part of Europe - to base the design of sewer systems in urban areas on recommended minimum values of flooding frequencies related to either the pipe top level, the basement level in buildings, or the level of road surfaces. Thus storm water runoff in sewer systems is only...... proceeding in an acceptable manner if flooding of these levels has an average return period larger than a predefined value. This practice is also often used in functional analysis of existing sewer systems. Whether a sewer system can fulfil recommended flooding frequencies or not can only be verified...... by performing long term simulations - using a sewer flow simulation model - and drawing up extreme event statistics from the model simulations. In this context it is important to realize that uncertainties related to the input parameters of rainfall runoff models will give rise to uncertainties related...

  10. Investigation of free level fluctuations in a simulated model of a sodium cooled Fast Breeder Reactor using pulsating conductance monitoring device

    International Nuclear Information System (INIS)

    Aggarwal, P.K.; Pandey, G.K.; Malathi, N.; Arun, A.D.; Ananthanarayanan, R.; Banerjee, I.; Sahoo, P.; Padmakumar, G.; Murali, N.

    2012-01-01

    Highlights: ► An innovative approach for measurement of water level fluctuation is presented. ► Measurement was conducted with a PC based pulsating type level sensor. ► The technique was deployed to monitor level fluctuations in a PFBR simulated facility. ► The technique helped in validation of the hot pool design of PFBR, India. - Abstract: A high-resolution measurement technique for rapid and accurate monitoring of water level using an in-house built pulsating conductance monitoring device is presented. The technique is capable of online monitoring of any sudden shift in the water level of a reservoir which is subjected to rapid fluctuations due to any external factor. We have deployed this novel technique for real-time monitoring of water level fluctuations in a specially designed ¼ scale model of the Prototype Fast Breeder Reactor (PFBR) at Kalpakkam, India. The water level measurements at various locations of the simulated test facility were carried out in different experimental campaigns, with and without thermal baffles, under the specific operating conditions required by the reactor designers. The amplitudes and frequencies of the fluctuations, with the required statistical parameters, in the hot water pool of the simulated model were evaluated from the online time versus water level plot in a convenient way using the system software package. From the experimental results it is computed that the maximum free level fluctuation in the hot pool of PFBR with baffle plates provided on the inner vessel is 30 mm, which is considerably less than the value (∼82 mm) obtained without any baffle plates. The present work provides useful information for the assessment of the appropriate design to be adopted in the PFBR for safe operation of the reactor.

  11. The use of Monte-Carlo simulation and order statistics for uncertainty analysis of a LBLOCA transient (LOFT-L2-5)

    International Nuclear Information System (INIS)

    Chojnacki, E.; Benoit, J.P.

    2007-01-01

    Best-estimate computer codes are increasingly used in the nuclear industry for accident management procedures and are planned to be used for licensing procedures. Contrary to conservative codes, which are supposed to give penalizing results, best-estimate codes attempt to calculate accident transients in a realistic way. It therefore becomes of prime importance, in particular for a technical organization such as IRSN, in charge of safety assessment, to know the uncertainty on the results of such codes. Thus, CSNI sponsored a few years ago the Uncertainty Methods Study (UMS) program (published in 1998) on uncertainty methodologies applied to a SBLOCA transient (LSTF-CL-18) and is now supporting the BEMUSE program for a LBLOCA transient (LOFT-L2-5). The large majority of BEMUSE participants (9 out of 10) use uncertainty methodologies based on probabilistic modelling, and all of them use Monte Carlo simulations to propagate the uncertainties through their computer codes. All of the 'probabilistic participants' also intend to use order statistics to determine the sample size of the Monte Carlo simulation and to derive the uncertainty ranges associated with their computer calculations. The first aim of this paper is to recall the advantages and also the assumptions of probabilistic modelling and, more specifically, of order statistics (such as Wilks' formula) in uncertainty methodologies. Indeed, Monte Carlo methods provide flexible and extremely powerful techniques for solving many of the uncertainty propagation problems encountered in nuclear safety analysis. However, it is important to keep in mind that probabilistic methods are data intensive. That means probabilistic methods cannot produce robust results unless a considerable body of information has been collected. A main interest of order statistics results is that they allow an unlimited number of uncertain parameters to be taken into account and, from a restricted number of code calculations, to provide statistical
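    As a reminder of what Wilks' formula gives in practice (an illustrative sketch, not the BEMUSE participants' tools): for a one-sided tolerance limit with a given coverage and confidence, the first-order sample size is the smallest N with 1 - coverage^N >= confidence, which yields the familiar 59 code runs for a 95%/95% statement.

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of code runs N such that the `order`-th largest output
    is a one-sided upper tolerance limit with the requested coverage and
    confidence (Wilks' formula)."""
    def achieved_confidence(n):
        # P(at least `order` of the n outputs exceed the coverage quantile)
        return sum(math.comb(n, k) * (1 - coverage)**k * coverage**(n - k)
                   for k in range(order, n + 1))
    n = order
    while achieved_confidence(n) < confidence:
        n += 1
    return n

if __name__ == "__main__":
    for order in (1, 2, 3):
        n = wilks_sample_size(0.95, 0.95, order)
        print(f"95%/95%, using the {order}-th largest output: N = {n}")
```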

  12. STATCONT: A statistical continuum level determination method for line-rich sources

    Science.gov (United States)

    Sánchez-Monge, Á.; Schilke, P.; Ginsburg, A.; Cesaroni, R.; Schmiedeke, A.

    2018-01-01

    STATCONT is a python-based tool designed to determine the continuum emission level in spectral data, in particular for sources with a line-rich spectrum. The tool inspects the intensity distribution of a given spectrum and automatically determines the continuum level by using different statistical approaches. The different methods included in STATCONT are tested against synthetic data. We conclude that the sigma-clipping algorithm provides the most accurate continuum level determination, together with information on the uncertainty of its determination. This uncertainty can be used to correct the final continuum emission level, resulting in what is here called the 'corrected sigma-clipping method' or c-SCM. The c-SCM has been tested against more than 750 different synthetic spectra reproducing typical conditions found towards astronomical sources. The continuum level is determined with a discrepancy of less than 1% in 50% of the cases, and less than 5% in 90% of the cases, provided at least 10% of the channels are line free. The main products of STATCONT are the continuum emission level, together with a conservative value of its uncertainty, and datacubes containing only spectral line emission, i.e., continuum-subtracted datacubes. STATCONT also includes the option to estimate the spectral index when different files covering different frequency ranges are provided.
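    The core sigma-clipping idea can be sketched in a few lines of Python (an illustration of the general algorithm on synthetic data, not the STATCONT source): iteratively discard channels far from the median until the selection converges, and report the mean of the surviving channels as the continuum level with their scatter as its uncertainty.

```python
import numpy as np

def sigma_clip_continuum(spectrum, sigma=3.0, max_iter=20):
    """Iterative sigma clipping: reject channels deviating by more than `sigma`
    standard deviations from the median, then report the mean of the surviving
    (line-free) channels as the continuum level."""
    data = np.asarray(spectrum, dtype=float)
    mask = np.ones(data.size, dtype=bool)
    for _ in range(max_iter):
        med, std = np.median(data[mask]), np.std(data[mask])
        new_mask = np.abs(data - med) < sigma * std
        if new_mask.sum() == mask.sum():      # no more channels rejected: converged
            break
        mask = new_mask
    return data[mask].mean(), data[mask].std()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    channels = 1024
    spectrum = 1.0 + 0.05 * rng.normal(size=channels)       # continuum + noise
    lines = rng.choice(channels, size=200, replace=False)    # a line-rich source
    spectrum[lines] += rng.uniform(0.2, 3.0, size=200)       # add emission lines
    level, err = sigma_clip_continuum(spectrum)
    print(f"continuum = {level:.3f} +/- {err:.3f} (true value 1.0)")
```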

  13. Return period estimates of extreme sea level along the east coast of India from numerical simulations

    Digital Repository Service at National Institute of Oceanography (India)

    Sindhu, B.; Unnikrishnan, A.S.

    The simulated total sea level and the surge component were obtained for each event. The simulated peak levels showed good agreement with the observations available at a few stations. The annual maxima of sea levels, extracted from the simulations, were fitted...
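    A typical recipe for this final step (sketched here in Python with scipy on assumed data, not the authors' code) is to fit a Generalized Extreme Value distribution to the annual maxima and read off return levels as high quantiles.

```python
import numpy as np
from scipy.stats import genextreme

def return_levels(annual_maxima, return_periods=(10, 50, 100)):
    """Fit a GEV distribution to annual maxima and return the sea level
    expected to be exceeded on average once every T years."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    return {T: float(genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale))
            for T in return_periods}

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    # Hypothetical 40-year record of simulated annual maximum sea levels (m)
    maxima = 1.2 + 0.25 * rng.gumbel(size=40)
    for T, level in return_levels(maxima).items():
        print(f"{T:4d}-year return level: {level:.2f} m")
```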

  14. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Directory of Open Access Journals (Sweden)

    Vickers Andrew J

    2008-11-01

    Full Text Available Abstract Background A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area-under-the-curve (AUC) corrected for verification bias varying both the rate and mechanism of verification. Results In a single simulated data set, varying false negatives from 0 to 4 led to verification bias corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th – 97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.

  15. PhyloSim - Monte Carlo simulation of sequence evolution in the R statistical computing environment

    Directory of Open Access Journals (Sweden)

    Massingham Tim

    2011-04-01

    Full Text Available Abstract Background The Monte Carlo simulation of sequence evolution is routinely used to assess the performance of phylogenetic inference methods and sequence alignment algorithms. Progress in the field of molecular evolution fuels the need for more realistic and hence more complex simulations, adapted to particular situations, yet current software makes unreasonable assumptions such as homogeneous substitution dynamics or a uniform distribution of indels across the simulated sequences. This calls for an extensible simulation framework written in a high-level functional language, offering new functionality and making it easy to incorporate further complexity. Results PhyloSim is an extensible framework for the Monte Carlo simulation of sequence evolution, written in R, using the Gillespie algorithm to integrate the actions of many concurrent processes such as substitutions, insertions and deletions. Uniquely among sequence simulation tools, PhyloSim can simulate arbitrarily complex patterns of rate variation and multiple indel processes, and allows for the incorporation of selective constraints on indel events. User-defined complex patterns of mutation and selection can be easily integrated into simulations, allowing PhyloSim to be adapted to specific needs. Conclusions Close integration with R and the wide range of features implemented offer unmatched flexibility, making it possible to simulate sequence evolution under a wide range of realistic settings. We believe that PhyloSim will be useful to future studies involving simulated alignments.
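
    PhyloSim itself is an R package with a much richer feature set; purely as an illustration of the underlying Gillespie idea, the toy Python sketch below draws exponential waiting times between events and applies single-site substitutions under a Jukes-Cantor-like model.

```python
import numpy as np

def gillespie_jc(sequence, rate=1.0, t_max=0.5, rng=None):
    """Evolve a nucleotide sequence for time `t_max` under a Jukes-Cantor-like
    model using the Gillespie algorithm: exponential waiting times between
    events, each event substituting one uniformly chosen site."""
    rng = rng or np.random.default_rng()
    seq = list(sequence)
    alphabet = "ACGT"
    t, total_rate = 0.0, rate * len(seq)
    while True:
        t += rng.exponential(1.0 / total_rate)
        if t > t_max:
            return "".join(seq)
        site = rng.integers(len(seq))
        # substitute with one of the three other bases, chosen uniformly
        seq[site] = rng.choice([b for b in alphabet if b != seq[site]])

rng = np.random.default_rng(7)
ancestor = "".join(rng.choice(list("ACGT"), size=60))
descendant = gillespie_jc(ancestor, rate=1.0, t_max=0.3, rng=rng)
print(sum(a != b for a, b in zip(ancestor, descendant)), "sites differ")
```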

  16. Assessment of ionospheric Joule heating by GUMICS-4 MHD simulation, AMIE, and satellite-based statistics: towards a synthesis

    Directory of Open Access Journals (Sweden)

    M. Palmroth

    2005-09-01

    Full Text Available We investigate the Northern Hemisphere Joule heating from several observational and computational sources with the purpose of calibrating a previously identified functional dependence between solar wind parameters and ionospheric total energy consumption computed from a global magnetohydrodynamic (MHD simulation (Grand Unified Magnetosphere Ionosphere Coupling Simulation, GUMICS-4. In this paper, the calibration focuses on determining the amount and temporal characteristics of Northern Hemisphere Joule heating. Joule heating during a substorm is estimated from global observations, including electric fields provided by Super Dual Auroral Network (SuperDARN and Pedersen conductances given by the ultraviolet (UV and X-ray imagers on board the Polar satellite. Furthermore, Joule heating is assessed from several activity index proxies, large statistical surveys, assimilative data methods (AMIE, and the global MHD simulation GUMICS-4. We show that the temporal and spatial variation of the Joule heating computed from the GUMICS-4 simulation is consistent with observational and statistical methods. However, the different observational methods do not give a consistent estimate for the magnitude of the global Joule heating. We suggest that multiplying the GUMICS-4 total Joule heating by a factor of 10 approximates the observed Joule heating reasonably well. The lesser amount of Joule heating in GUMICS-4 is essentially caused by weaker Region 2 currents and polar cap potentials. We also show by theoretical arguments that multiplying independent measurements of averaged electric fields and Pedersen conductances yields an overestimation of Joule heating.
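
    The quantity being compared across the different methods is the hemispheric integral of the local Joule heating rate, commonly approximated as Sigma_P |E|^2. A minimal bookkeeping sketch follows, with toy uniform inputs and an assumed ionospheric shell height of 110 km; it is not the GUMICS-4 or AMIE calculation itself.

```python
import numpy as np

def hemispheric_joule_heating(sigma_p, e_field, colat, lon):
    """Integrate the local Joule heating rate Q = Sigma_P * |E|^2 [W m^-2]
    over a (colatitude, longitude) grid to get the hemispheric total in W."""
    R_ION = 6.371e6 + 110e3                               # assumed shell radius [m]
    q = sigma_p * np.sum(e_field ** 2, axis=-1)           # W m^-2
    dtheta = np.deg2rad(colat[1] - colat[0])
    dphi = np.deg2rad(lon[1] - lon[0])
    area = R_ION ** 2 * np.sin(np.deg2rad(colat))[:, None] * dtheta * dphi
    return np.sum(q * area)

# toy example: uniform 5 S conductance, ~30 mV/m field poleward of 40 deg colatitude
colat = np.arange(0.5, 40.0, 1.0)
lon = np.arange(0.0, 360.0, 5.0)
sigma_p = np.full((colat.size, lon.size), 5.0)                    # siemens
e_field = np.full((colat.size, lon.size, 2), 0.03 / np.sqrt(2))   # V/m components
total = hemispheric_joule_heating(sigma_p, e_field, colat, lon)
print(f"total Joule heating ~ {total / 1e9:.0f} GW")
```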

    Keywords. Ionosphere (Auroral ionosphere; Modeling and forecasting; Electric fields and currents)

  17. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee the overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that the mass is well conserved by the present method.

  18. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
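
    A rough sketch of the idea, under simplifying assumptions: the hypothetical stochastic model below is checked by a fixed-sample Monte-Carlo estimate (standing in for the sequential hypothesis test used in the paper), and simulated annealing adjusts the parameter until the estimated property probability matches a target. The model, the property, and the target probability are all invented for illustration.

```python
import math
import random

def stochastic_model(theta, rng):
    """Hypothetical stochastic system: True if the simulated trace satisfies
    the property of interest (here, a birth process exceeding a threshold)."""
    x = 1
    for _ in range(20):
        if rng.random() < theta:
            x += 1
    return x >= 12

def estimate_probability(theta, n_runs, rng):
    """Crude statistical model checking: fixed-sample Monte-Carlo estimate of
    the probability that the property holds."""
    return sum(stochastic_model(theta, rng) for _ in range(n_runs)) / n_runs

def simulated_annealing(target_prob, n_runs=400, steps=200, seed=0):
    rng = random.Random(seed)
    theta = rng.random()
    cost = abs(estimate_probability(theta, n_runs, rng) - target_prob)
    temp = 1.0
    for _ in range(steps):
        cand = min(1.0, max(0.0, theta + rng.gauss(0.0, 0.1)))
        cand_cost = abs(estimate_probability(cand, n_runs, rng) - target_prob)
        # accept better candidates always, worse ones with Boltzmann probability
        if cand_cost < cost or rng.random() < math.exp(-(cand_cost - cost) / temp):
            theta, cost = cand, cand_cost
        temp *= 0.98
    return theta, cost

theta_hat, residual = simulated_annealing(target_prob=0.7)
print(f"estimated parameter ~ {theta_hat:.3f} (residual {residual:.3f})")
```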

  19. Topology for statistical modeling of petascale data.

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio (University of Utah, Salt Lake City, UT); Mascarenhas, Ajith Arthur; Rusek, Korben (Texas A& M University, College Station, TX); Bennett, Janine Camille; Levine, Joshua (University of Utah, Salt Lake City, UT); Pebay, Philippe Pierre; Gyulassy, Attila (University of Utah, Salt Lake City, UT); Thompson, David C.; Rojas, Joseph Maurice (Texas A& M University, College Station, TX)

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  20. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    Science.gov (United States)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  1. Implementation of Extended Statistical Entropy Analysis to the Effluent Quality Index of the Benchmarking Simulation Model No. 2

    Directory of Open Access Journals (Sweden)

    Alicja P. Sobańtka

    2014-01-01

    Full Text Available Extended statistical entropy analysis (eSEA is used to assess the nitrogen (N removal performance of the wastewater treatment (WWT simulation software, the Benchmarking Simulation Model No. 2 (BSM No. 2 . Six simulations with three different types of wastewater are carried out, which vary in the dissolved oxygen concentration (O2,diss. during the aerobic treatment. N2O emissions generated during denitrification are included in the model. The N-removal performance is expressed as reduction in statistical entropy, ΔH, compared to the hypothetical reference situation of direct discharge of the wastewater into the river. The parameters chemical and biological oxygen demand (COD, BOD and suspended solids (SS are analogously expressed in terms of reduction of COD, BOD, and SS, compared to a direct discharge of the wastewater to the river (ΔEQrest. The cleaning performance is expressed as ΔEQnew, the weighted average of ΔH and ΔEQrest. The results show that ΔEQnew is a more comprehensive indicator of the cleaning performance because, in contrast to the traditional effluent quality index (EQ, it considers the characteristics of the wastewater, includes all N-compounds and their distribution in the effluent, the off-gas, and the sludge. Furthermore, it is demonstrated that realistically expectable N2O emissions have only a moderate impact on ΔEQnew.

  2. Statistical Compression for Climate Model Output

    Science.gov (United States)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
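
    A one-dimensional toy version of the compress/decompress idea (summary statistics plus conditional simulation) is sketched below; the block size, AR(1) noise model, and synthetic series are assumptions of the sketch, and the paper's spatially nonstationary global model is far richer.

```python
import numpy as np

def compress(daily, block=30):
    """Store block means plus the residual standard deviation and lag-1
    autocorrelation -- a tiny set of summary statistics."""
    n_blocks = daily.size // block
    trimmed = daily[: n_blocks * block].reshape(n_blocks, block)
    means = trimmed.mean(axis=1)
    resid = (trimmed - means[:, None]).ravel()
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return {"block": block, "means": means, "sigma": resid.std(), "rho": rho}

def decompress(summary, conditional_simulation=True, rng=None):
    """Conditional expectation = repeated block means (smooth); the conditional
    simulation adds AR(1) noise so the result has realistic small-scale
    variability."""
    rng = rng or np.random.default_rng()
    smooth = np.repeat(summary["means"], summary["block"])
    if not conditional_simulation:
        return smooth
    sigma, rho = summary["sigma"], summary["rho"]
    noise = np.empty(smooth.size)
    noise[0] = rng.normal(0.0, sigma)
    for t in range(1, noise.size):
        noise[t] = rho * noise[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho ** 2))
    return smooth + noise

rng = np.random.default_rng(3)
t = np.arange(360)
daily = 10 + 8 * np.sin(2 * np.pi * t / 360) + rng.normal(0, 2, t.size)
summary = compress(daily)
recon = decompress(summary, rng=rng)
print("compression ratio ~", daily.size / (summary["means"].size + 2))
```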

  3. Discrete event simulation of the ATLAS second level trigger

    International Nuclear Information System (INIS)

    Vermeulen, J.C.; Dankers, R.J.; Hunt, S.; Harris, F.; Hortnagl, C.; Erasov, A.; Bogaerts, A.

    1998-01-01

    Discrete event simulation is applied for determining the computing and networking resources needed for the ATLAS second level trigger. This paper discusses the techniques used and some of the results obtained so far for well-defined laboratory configurations and for the full system.

  4. Effect of higher order nonlinearity, directionality and finite water depth on wave statistics: Comparison of field data and numerical simulations

    Science.gov (United States)

    Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro

    2014-05-01

    This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over an infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh=1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh<1.36. The aim of this research is therefore to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equations of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e. the probability density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data allows the numerical developments to be confronted with real observations in field conditions.
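
    The statistical diagnostics mentioned here (skewness and excess kurtosis of the surface elevation) can be illustrated on a linear Gaussian surrogate sea surface, for which both should be close to zero; departures from these values are what the HOSM simulations and field data are compared on. A minimal sketch, with an assumed Gaussian-shaped spectrum rather than any realistic sea-state spectrum:

```python
import numpy as np
from scipy import stats

def surface_elevation(n=2**15, hs=2.0, tp=10.0, dt=0.5, rng=None):
    """Synthesize a linear (Gaussian) random sea surface from a Gaussian-shaped
    spectrum centred on the peak period `tp`; second-order bound waves, which
    produce the weak non-Gaussianity discussed in the abstract, are omitted."""
    rng = rng or np.random.default_rng()
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.exp(-0.5 * ((freqs - 1.0 / tp) / 0.02) ** 2)
    spec[0] = 0.0
    amp = np.sqrt(spec) * (rng.standard_normal(freqs.size)
                           + 1j * rng.standard_normal(freqs.size))
    eta = np.fft.irfft(amp, n)
    return eta * (hs / 4.0) / eta.std()   # scale to significant wave height hs

eta = surface_elevation(rng=np.random.default_rng(11))
print(f"skewness = {stats.skew(eta):+.3f}   "
      f"excess kurtosis = {stats.kurtosis(eta):+.3f}")   # both ~0 for linear waves
```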

  5. Implementing the “Big Data” Concept in Official Statistics

    Directory of Open Access Journals (Sweden)

    О. V.

    2017-02-01

    Full Text Available Big data is a huge resource that needs to be used at all levels of economic planning. The article is devoted to the study of the development of the concept of “Big Data” in the world and its impact on the transformation of statistical simulation of economic processes. Statistics at the current stage should take into account the complex system of international economic relations, which functions in the conditions of globalization and brings new forms of economic development in small open economies. Statistical science should take into account such phenomena as the gig economy, the sharing economy, institutional factors, etc. The concepts of “Big Data” and open data are analyzed, and problems of implementing “Big Data” in official statistics are shown. Ways of implementing “Big Data” in the official statistics of Ukraine, through active use of the technological opportunities of mobile operators, navigation systems, surveillance cameras, social networks, etc., are presented. The possibilities of using “Big Data” in different sectors of the economy, as well as at the level of individual companies, are shown. The problems of storing large volumes of data are highlighted. The study shows that “Big Data” is a huge resource that should be used across the Ukrainian economy.

  6. Statistical nature of non-Gaussianity from cubic order primordial perturbations: CMB map simulations and genus statistic

    International Nuclear Information System (INIS)

    Chingangbam, Pravabati; Park, Changbom

    2009-01-01

    We simulate CMB maps including non-Gaussianity arising from cubic order perturbations of the primordial gravitational potential, characterized by the non-linearity parameter g_NL. The maps are used to study the characteristic nature of the resulting non-Gaussian temperature fluctuations. We measure the genus and investigate how it deviates from the Gaussian shape as a function of g_NL and smoothing scale. We find that the deviation of the non-Gaussian genus curve from the Gaussian one has an antisymmetric, sine-function-like shape, implying more hot and more cold spots for g_NL > 0 and less of both for g_NL < 0. The size of the deviation increases with g_NL and also exhibits a mild increase as the smoothing scale increases. We further study other statistics derived from the genus, namely, the number of hot spots, the number of cold spots, the combined number of hot and cold spots, and the slope of the genus curve at the mean temperature fluctuation. We find that these observables carry signatures of g_NL that are clearly distinct from those of the quadratic order perturbations, encoded in the parameter f_NL. Hence they can be very useful tools for distinguishing not only between non-Gaussian temperature fluctuations and Gaussian ones but also between g_NL- and f_NL-type non-Gaussianities.
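
    One simple discrete estimator of the genus counts connected hot spots minus connected cold spots as a function of the threshold; for a Gaussian map the resulting curve is antisymmetric in the threshold, which is the reference shape against which the abstract's deviations are measured. The sketch below applies this estimator to a smoothed Gaussian random map only (no g_NL simulation is attempted, and the authors' exact genus estimator may differ).

```python
import numpy as np
from scipy import ndimage

def genus_curve(field, thresholds):
    """Genus of a 2-D map at each threshold nu (in units of the map's standard
    deviation), estimated as (number of connected hot spots above nu) minus
    (number of connected cold spots below nu)."""
    norm = (field - field.mean()) / field.std()
    genus = []
    for nu in thresholds:
        _, n_hot = ndimage.label(norm > nu)
        _, n_cold = ndimage.label(norm < nu)
        genus.append(n_hot - n_cold)
    return np.array(genus)

# Gaussian random map: white noise smoothed with a Gaussian kernel
rng = np.random.default_rng(5)
gauss_map = ndimage.gaussian_filter(rng.standard_normal((512, 512)), sigma=8)
nus = np.linspace(-3, 3, 13)
for nu, g in zip(nus, genus_curve(gauss_map, nus)):
    print(f"nu = {nu:+.1f}   genus = {g:+d}")
```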

  7. Simulation, identification and statistical variation in cardiovascular analysis (SISCA) - A software framework for multi-compartment lumped modeling.

    Science.gov (United States)

    Huttary, Rudolf; Goubergrits, Leonid; Schütte, Christof; Bernhard, Stefan

    2017-08-01

    It has not yet been possible to obtain modeling approaches suitable for covering a wide range of real world scenarios in cardiovascular physiology because many of the system parameters are uncertain or even unknown. Natural variability and statistical variation of cardiovascular system parameters in healthy and diseased conditions are characteristic features for understanding cardiovascular diseases in more detail. This paper presents SISCA, a novel software framework for cardiovascular system modeling and its MATLAB implementation. The framework defines a multi-model statistical ensemble approach for dimension reduced, multi-compartment models and focuses on statistical variation, system identification and patient-specific simulation based on clinical data. We also discuss a data-driven modeling scenario as a use case example. The regarded dataset originated from routine clinical examinations and comprised typical pre and post surgery clinical data from a patient diagnosed with coarctation of aorta. We conducted patient and disease specific pre/post surgery modeling by adapting a validated nominal multi-compartment model with respect to structure and parametrization using metadata and MRI geometry. In both models, the simulation reproduced measured pressures and flows fairly well with respect to stenosis and stent treatment and by pre-treatment cross stenosis phase shift of the pulse wave. However, with post-treatment data showing unrealistic phase shifts and other more obvious inconsistencies within the dataset, the methods and results we present suggest that conditioning and uncertainty management of routine clinical data sets needs significantly more attention to obtain reasonable results in patient-specific cardiovascular modeling. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A Statistical Study of Serum Cholesterol Level by Gender and Race.

    Science.gov (United States)

    Tharu, Bhikhari Prasad; Tsokos, Chris P

    2017-07-25

    Cholesterol level (CL) is a growing health concern since it is considered one of the causes of heart disease. A study of cholesterol level can provide insight into its nature and characteristics. A cross-sectional study. The National Health and Nutrition Examination Survey (NHANES) II was conducted on a probability sample of approximately 28,000 persons in the USA, and cholesterol level was obtained from laboratory results. Samples were selected so that certain population groups thought to be at high risk of malnutrition were oversampled. The study included 11,864 persons for CL cases, with 9,602 males and 2,262 females, across three racial groups: whites, blacks, and others. Non-parametric statistical tests and goodness-of-fit tests were used to identify probability distributions. The study concludes that cholesterol level exhibits significant racial and gender differences in terms of probability distributions, and that white people are at relatively higher risk than black people of having borderline-risk and high-risk cholesterol. The study clearly indicates that black males normally have higher cholesterol. Females have lower variation in cholesterol than males. Gender and racial discrepancies in cholesterol exist, and the distributions have been identified as lognormal and gamma probability distributions. White individuals seem to be at a higher risk of having high-risk cholesterol levels than blacks. Females tend to have higher variation in cholesterol level than males.
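
    Identifying lognormal versus gamma behaviour of this kind is usually done by fitting both distributions and applying a goodness-of-fit test. The sketch below uses synthetic data (not the NHANES records) and a Kolmogorov-Smirnov test as one possible goodness-of-fit choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical serum cholesterol values (mg/dL) for one gender/race group;
# the real study uses NHANES II laboratory records.
cholesterol = rng.gamma(shape=30.0, scale=7.0, size=2000)

for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
    params = dist.fit(cholesterol, floc=0)          # fix the location at zero
    ks_stat, p_value = stats.kstest(cholesterol, dist.cdf, args=params)
    print(f"{name:9s}  KS statistic = {ks_stat:.4f}   p = {p_value:.3f}")
```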

  9. Statistical properties of the linear σ model used in dynamical simulations of DCC formation

    International Nuclear Information System (INIS)

    Randrup, J.

    1997-01-01

    The present work develops a simple approximate framework for initializing and interpreting dynamical simulations with the linear σ model exploring the formation of disoriented chiral condensates in high-energy collisions. By enclosing the system in a rectangular box with periodic boundary conditions, it is possible to decompose uniquely the chiral field into its spatial average (the order parameter) and its fluctuations (the quasiparticles) which can be treated in the Hartree approximation. The quasiparticle modes are then described approximately by Klein-Gordon dispersion relations containing an effective mass depending on both the temperature and the magnitude of the order parameter; their fluctuations are instrumental in shaping the effective potential governing the order parameter, and the emerging statistical description is thermodynamically consistent. The temperature dependence of the statistical distribution of the order parameter is discussed, as is the behavior of the associated effective masses; as the system is cooled, the field fluctuations subside, causing a smooth change from the high-temperature phase in which chiral symmetry is approximately restored towards the normal phase. Of practical interest is the fact that the equilibrium field configurations can be sampled in a simple manner, thus providing a convenient means for specifying the initial conditions in dynamical simulations of the nonequilibrium relaxation of the chiral field; in particular, the correlation function is much more realistic than those emerging in previous initialization methods. It is illustrated how such samples remain approximately invariant under propagation by the unapproximated equation of motion over times that are long on the scale of interest, thereby suggesting that the treatment is sufficiently accurate to be of practical utility. copyright 1997 The American Physical Society

  10. Differentiating levels of surgical experience on a virtual reality temporal bone simulator.

    Science.gov (United States)

    Zhao, Yi C; Kennedy, Gregor; Hall, Richard; O'Leary, Stephen

    2010-11-01

    Virtual reality simulation is increasingly being incorporated into surgical training and may have a role in temporal bone surgical education. Here we test whether metrics generated by a virtual reality surgical simulation can differentiate between three levels of experience, namely novices, otolaryngology residents, and experienced qualified surgeons. Cohort study. Royal Victorian Eye and Ear Hospital. Twenty-seven participants were recruited. There were 12 experts, six residents, and nine novices. After orientation, participants were asked to perform a modified radical mastoidectomy on the simulator. Comparisons of time taken, injury to structures, and forces exerted were made between the groups to determine which specific metrics would discriminate experience levels. Experts completed the simulated task in significantly shorter time than the other two groups (experts 22 minutes, residents 36 minutes, and novices 46 minutes; P = 0.001). Novices exerted significantly higher average forces when dissecting close to vital structures compared with experts (0.24 Newton [N] vs 0.13 N, P = 0.002). Novices were also more likely to injure structures such as dura compared to experts (23 injuries vs 3 injuries, P = 0.001). Compared with residents, the experts modulated their force between initial cortex dissection and dissection close to vital structures. Using the combination of these metrics, we were able to correctly classify the participants' level of experience 90 percent of the time. This preliminary study shows that measurements of performance obtained from within a virtual reality simulator can differentiate between levels of users' experience. These results suggest that simulator training may have a role in temporal bone training beyond foundational training. Copyright © 2010 American Academy of Otolaryngology–Head and Neck Surgery Foundation. Published by Mosby, Inc. All rights reserved.

  11. Using the Δ3 statistic to test for missed levels in mixed sequence neutron resonance data

    International Nuclear Information System (INIS)

    Mulhall, Declan

    2009-01-01

    The Δ3(L) statistic is studied as a tool to detect missing levels in the neutron resonance data where two sequences are present. These systems are problematic because there is no level repulsion, and the resonances can be too close to resolve. Δ3(L) is a measure of the fluctuations in the number of levels in an interval of length L on the energy axis. The method used is tested on ensembles of mixed Gaussian orthogonal ensemble spectra, with a known fraction of levels (x%) randomly depleted, and can accurately return x. The accuracy of the method as a function of spectrum size is established. The method is used on neutron resonance data for 11 isotopes with either s-wave neutrons on odd-A isotopes, or p-wave neutrons on even-A isotopes. The method compares favorably with a maximum likelihood method applied to the level spacing distribution. Nuclear data ensembles were made from 20 isotopes in total, and their Δ3(L) statistics are discussed in the context of random matrix theory.
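
    The Δ3(L) statistic can be estimated numerically by least-squares fitting a straight line to the level staircase over windows of length L and averaging the residual variance. The sketch below is a generic estimator checked against the Poisson expectation Δ3(L) = L/15; the GOE and mixed-sequence ensembles studied in the paper would be analysed with the same routine after unfolding.

```python
import numpy as np

def delta3(levels, L, n_windows=200, grid=500, rng=None):
    """Dyson-Mehta Delta_3(L): average over windows of length L of the minimum
    mean-squared deviation of the level staircase N(E) from a straight line.
    `levels` must already be unfolded to unit mean spacing."""
    rng = rng or np.random.default_rng()
    levels = np.sort(levels)
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    vals = []
    for x0 in starts:
        e = np.linspace(x0, x0 + L, grid)
        staircase = np.searchsorted(levels, e)      # cumulative level count N(E)
        coeff = np.polyfit(e, staircase, 1)         # best-fit straight line
        resid = staircase - np.polyval(coeff, e)
        vals.append(np.mean(resid ** 2))
    return np.mean(vals)

# Poisson (uncorrelated) spectrum: Delta_3 should grow like L/15
rng = np.random.default_rng(2)
poisson_levels = np.cumsum(rng.exponential(1.0, 5000))
for L in (5, 15, 30):
    print(f"L={L:2d}  Delta_3 ~ {delta3(poisson_levels, L, rng=rng):.2f}"
          f"   (L/15 = {L / 15:.2f})")
```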

  12. Powder technological vitrification of simulated high-level waste

    International Nuclear Information System (INIS)

    Gahlert, S.

    1988-03-01

    High-level waste simulate from the reprocessing of light water reactor and fast breeder fuel was vitrified by powder technology. After denitration with formaldehyde, the simulated HLW is mixed with glass frit and simultaneously dried in an oil-heated mixer. After 'in-can calcination' for at least 24 hours at 850 or 950 K (depending on the type of waste and glass), the mixture is hot-pressed in-can for several hours at 920 or 1020 K respectively, at pressures between 0.4 and 1.0 MPa. The technology has been demonstrated inactively up to diameters of 30 cm. Leach resistance is significantly enhanced when compared to common borosilicate glasses by the utilization of glasses with higher silicon and aluminium content and lower sodium content. (orig.) [de

  13. MQSA National Statistics

    Science.gov (United States)

    Archived MQSA scorecard statistics are available for 2016, 2017, and 2018.

  14. Spreadsheets as tools for statistical computing and statistics education

    OpenAIRE

    Neuwirth, Erich

    2000-01-01

    Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education at various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.

  15. Statistics of excitations in the electron glass model

    Science.gov (United States)

    Palassini, Matteo

    2011-03-01

    We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.

  16. Level-statistics in Disordered Systems: A single parametric scaling and Connection to Brownian Ensembles

    OpenAIRE

    Shukla, Pragya

    2004-01-01

    We find that the statistics of levels undergoing a metal-insulator transition in systems with multi-parametric Gaussian disorder and non-interacting electrons behaves in a way similar to that of the single-parametric Brownian ensembles. The latter appear during a Poisson to Wigner-Dyson transition, driven by a random perturbation. The analogy provides analytical evidence for the single-parameter scaling of the level correlations in disordered systems as well as a tool to obtai...

  17. Plant-Level Modeling and Simulation of Used Nuclear Fuel Dissolution

    Energy Technology Data Exchange (ETDEWEB)

    de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2012-09-07

    Plant-level modeling and simulation of a used nuclear fuel prototype dissolver is presented. Emphasis is given to developing a modeling and simulation approach to be explored by other processes involved in the recycle of used fuel. The commonality concepts presented in a previous communication were used to create a model and realize its software module. An initial model was established based on a theory of chemical thermomechanical network transport outlined previously. A software module prototype was developed with the required external behavior and internal mathematical structure. Results obtained demonstrate the generality of the design approach and establish an extensible mathematical model with its corresponding software module for a wide range of dissolvers. Scale-up numerical tests were made varying the type of used fuel (breeder and light-water reactors) and the capacity of dissolution (0.5 t/d to 1.7 t/d). These tests were motivated by user requirements in the area of nuclear materials safeguards. A computer module written in high-level programming languages (MATLAB and Octave) was developed, tested, and provided as open-source code (MATLAB) for integration into the Separations and Safeguards Performance Model application in development at Sandia National Laboratories. The modeling approach presented here is intended to serve as a template for a rational modeling of all plant-level modules. This will facilitate the practical application of the commonality features underlying the unifying network transport theory proposed recently. In addition, by example, this model describes, explicitly, the needed data from sub-scale models, and logical extensions for future model development. For example, from thermodynamics, an off-line simulation of molecular dynamics could quantify partial molar volumes for the species in the liquid phase; this simulation is currently within reach for high-performance computing. From fluid mechanics, a hold-up capacity function is needed

  18. Groundwater-level change and evaluation of simulated water levels for irrigated areas in Lahontan Valley, Churchill County, west-central Nevada, 1992 to 2012

    Science.gov (United States)

    Smith, David W.; Buto, Susan G.; Welborn, Toby L.

    2016-09-14

    The acquisition and transfer of water rights to wetland areas of Lahontan Valley, Nevada, has caused concern over the potential effects on shallow aquifer water levels. In 1992, water levels in Lahontan Valley were measured to construct a water-table map of the shallow aquifer prior to the effects of water-right transfers mandated by the Fallon Paiute-Shoshone Tribal Settlement Act of 1990 (Public Law 101-618, 104 Stat. 3289). From 1992 to 2012, approximately 11,810 water-righted acres, or 34,356 acre-feet of water, were acquired and transferred to wetland areas of Lahontan Valley. This report documents changes in water levels measured during the period of water-right transfers and presents an evaluation of five groundwater-flow model scenarios that simulated water-level changes in Lahontan Valley in response to water-right transfers and a reduction in irrigation season length by 50 percent. Water levels measured in 98 wells from 2012 to 2013 were used to construct a water-table map. Water levels in 73 of the 98 wells were compared with water levels measured in 1992 and used to construct a water-level change map. Water-level changes in the 73 wells ranged from -16.2 to 4.1 feet over the 20-year period. Rises in water levels in Lahontan Valley may correspond to annual changes in available irrigation water, increased canal flows after the exceptionally dry and shortened irrigation season of 1992, and the increased conveyance of water rights transferred to Stillwater National Wildlife Refuge. Water-level declines generally occurred near the boundary of irrigated areas and may be associated with groundwater pumping, water-right transfers, and inactive surface-water storage reservoirs. The largest water-level declines were in the area near Carson Lake. Groundwater-level response to water-right transfers was evaluated by comparing simulated and observed water-level changes for periods representing water-right transfers and a shortened irrigation season in areas near Fallon

  19. New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics

    International Nuclear Information System (INIS)

    Akhmatskaya, Elena; Reich, Sebastian

    2011-01-01

    We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system’s size and make use of multi-scale nature of complex systems. Variants of GSHMCs were developed for atomistic simulation, particle simulation and statistics: GSHMC (thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (Metropolis corrected dissipative particle dynamics (DPD) method), and a generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced sampling techniques and suitable for massively parallel computing allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high performance computers and comparison with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of the MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)

  20. NASA System-Level Design, Analysis and Simulation Tools Research on NextGen

    Science.gov (United States)

    Bardina, Jorge

    2011-01-01

    A review of the research accomplished in 2009 in the System-Level Design, Analysis and Simulation Tools (SLDAST) of NASA's Airspace Systems Program is presented. This research thrust focuses on the integrated system-level assessment of component-level innovations, concepts and technologies of the Next Generation Air Traffic System (NextGen) under research in the ASP program to enable the development of revolutionary improvements and modernization of the National Airspace System. The review includes the accomplishments on baseline research and the advancements on design studies and system-level assessment, including the cluster analysis as an annualization standard of the air traffic in the U.S. National Airspace, and the ACES-Air MIDAS integration for human-in-the-loop analyses within the NAS air traffic simulation.

  1. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  2. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifyin...

  3. Noise level and MPEG-2 encoder statistics

    Science.gov (United States)

    Lee, Jungwoo

    1997-01-01

    Most source material in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, CCD cameras, and tape recording. The performance of the MPEG-2 encoder may be significantly degraded by the noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from the camera and the tape player are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and the encoder statistics of a number of MPEG-2 parameters, such as motion vector magnitude, prediction error, and quant scale, is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms such as preprocessing and rate control.

  4. Understanding Statistics and Statistics Education: A Chinese Perspective

    Science.gov (United States)

    Shi, Ning-Zhong; He, Xuming; Tao, Jian

    2009-01-01

    In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…

  5. Two-level factorial statistical method applied to simulator operation experiments

    Directory of Open Access Journals (Sweden)

    P. Moreno Quintana

    2002-05-01

    Full Text Available The introduction of simulators into the country's instruction process faces a difficulty: the real impact of this type of equipment on the acquisition of skills within the training process it supports is not known. The use of the two-level factorial statistical method allows a linear response model of efficiency, or qualification, to be obtained as a function of the quantitative use of the various training aids and their combinations. This model is validated with a calculated confidence level and can be optimized by the corresponding mathematical methods. To this end, a group of recommendations on the organization of the experiments, obtained during the application of this method on several occasions, is presented. Keywords: simulators, mathematical modeling, design of experiments.
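
    A two-level factorial analysis of this kind reduces to fitting a linear model with main effects and interactions to the coded ±1 design matrix. The sketch below uses hypothetical trainee scores for a 2^3 design of three training aids; the factors, scores, and effect names are all invented for illustration.

```python
import itertools
import numpy as np

# Hypothetical 2^3 factorial experiment: three training aids used at a "low"
# (-1) or "high" (+1) level; the response is the trainee's score.
levels = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
scores = np.array([62, 68, 65, 74, 70, 79, 75, 88], dtype=float)  # assumed data

# Design matrix with intercept, main effects, and two-factor interactions.
x1, x2, x3 = levels.T
X = np.column_stack([np.ones(8), x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

names = ["mean", "x1", "x2", "x3", "x1*x2", "x1*x3", "x2*x3"]
for n, c in zip(names, coef):
    print(f"{n:6s} {c:+6.2f}")
```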

  6. Low-level contrast statistics of natural images can modulate the frequency of event-related potentials (ERP) in humans

    Directory of Open Access Journals (Sweden)

    Masoud Ghodrati

    2016-12-01

    Full Text Available Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive the images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERPs) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of ERPs the best, compared to other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERP power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset, and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by low-level contrast statistics of natural images, which highlights their potential role in scene perception.
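
    The low-level statistics in question come from fitting a Weibull distribution to an image's local contrast values; the fitted shape and scale parameters are then related to the ERP measures. A minimal sketch of that image-statistics step on a toy image, using gradient magnitude as the contrast measure (the authors' exact contrast definition may differ):

```python
import numpy as np
from scipy import ndimage, stats

def weibull_contrast_params(image):
    """Fit a two-parameter Weibull distribution to the distribution of local
    contrast values (gradient magnitudes) of an image; the shape and scale
    parameters are the image statistics referred to in the abstract."""
    gx = ndimage.sobel(image, axis=0, mode="reflect")
    gy = ndimage.sobel(image, axis=1, mode="reflect")
    contrast = np.hypot(gx, gy).ravel()
    contrast = contrast[contrast > 0]
    shape, _, scale = stats.weibull_min.fit(contrast, floc=0)
    return shape, scale

# toy "natural image": a smoothed random field standing in for a photograph
rng = np.random.default_rng(9)
img = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=3)
beta, scale = weibull_contrast_params(img)
print(f"Weibull shape ~ {beta:.2f}, scale ~ {scale:.4f}")
```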

  7. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    KAUST Repository

    Mohamed, Mamdouh S.

    2015-05-18

    The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  8. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    KAUST Repository

    Mohamed, Mamdouh S.; Larson, Ben C.; Tischler, Jon Z.; El-Azab, Anter

    2015-01-01

    The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  9. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  10. Leaching behavior of simulated high-level waste glass

    International Nuclear Information System (INIS)

    Kamizono, Hiroshi

    1987-03-01

    The author's work on the leaching behavior of simulated high-level waste (HLW) glass is summarized. The subjects described are (1) leach rates at high temperatures, (2) effects of cracks on leach rates, (3) effects of flow rate on leach rates, and (4) an in-situ burial test in natural groundwater. In the following section, the leach rates obtained by various experiments are summarized and discussed. (author)

  11. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  12. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work is aimed at delivering a high simulation throughput and, at the same time, guaranteeing a high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed that is not slower than that of the hardware execution by a factor of 35 on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve a high accuracy after hardware-based calibration. Experimental results on a set of mobile applications proved that the difference between the simulated and measured results of timing performance is within 10%, which in the past can only be attained by cycle-accurate models.

  13. Statistical characterization of global Sea Surface Salinity for SMOS level 3 and 4 products

    Science.gov (United States)

    Gourrion, J.; Aretxabaleta, A. L.; Ballabrera, J.; Mourre, B.

    2009-04-01

    The Soil Moisture and Ocean Salinity (SMOS) mission of the European Space Agency will soon provide sea surface salinity (SSS) estimates to the scientific community. Because of the numerous geophysical contamination sources and the instrument complexity, the salinity products will have a low signal to noise ratio at level 2 (individual estimates) that is expected to increase up to mission requirements (0.1 psu) at level 3 (global maps with regular distribution) after spatio-temporal accumulation of the observations. Geostatistical methods such as Optimal Interpolation are being implemented at the level 3/4 production centers to operate this noise reduction step. The methodologies require auxiliary information about SSS statistics that, under Gaussian assumption, consist in the mean field and the covariance of the departures from it. The present study is a contribution to the definition of the best estimates for mean field and covariances to be used in the near-future SMOS level 3 and 4 products. We use complementary information from sparse in-situ observations and imperfect outputs from state-of-art model simulations. Various estimates of the mean field are compared. An alternative is the use of a SSS climatology such as the one provided by the World Ocean Atlas 2005. An historical SSS dataset from the World Ocean Database 2005 is reanalyzed and combined with the recent global observations obtained by the Array for Real-Time Geostrophic Oceanography (ARGO). Regional tendencies in the long-term temporal evolution of the near-surface ocean salinity are evident, suggesting that the use of a SSS climatology to describe the current mean field may introduce biases of magnitude similar to the precision goal. Consequently, a recent SSS dataset may be preferred to define the mean field needed for SMOS level 3 and 4 production. The in-situ observation network allows a global mapping of the low frequency component of the variability, i.e. decadal, interannual and seasonal
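
    The level 3/4 noise-reduction step mentioned here rests on an optimal-interpolation update that combines a mean field with noisy observations through an assumed background-error covariance. A one-dimensional sketch with a Gaussian covariance and made-up ARGO-like salinity observations (the covariance model, error levels, and decorrelation length are illustrative assumptions, not the production-center settings):

```python
import numpy as np

def optimal_interpolation(grid, obs_loc, obs_val, mean_field,
                          sigma_b, sigma_o, corr_len):
    """One optimal-interpolation (objective analysis) update: combine a mean
    (background) SSS field with noisy point observations using a Gaussian
    background-error covariance with decorrelation length `corr_len`."""
    def gauss_cov(a, b):
        d = np.abs(a[:, None] - b[None, :])
        return sigma_b ** 2 * np.exp(-0.5 * (d / corr_len) ** 2)

    B_oo = gauss_cov(obs_loc, obs_loc) + sigma_o ** 2 * np.eye(obs_loc.size)
    B_go = gauss_cov(grid, obs_loc)
    innovation = obs_val - np.interp(obs_loc, grid, mean_field)
    weights = np.linalg.solve(B_oo, innovation)
    return mean_field + B_go @ weights

# 1-D toy: climatological mean of 35 psu, three ARGO-like observations
grid = np.linspace(0.0, 1000.0, 201)        # distance along a section [km]
mean_field = np.full(grid.size, 35.0)
obs_loc = np.array([150.0, 430.0, 700.0])
obs_val = np.array([35.4, 34.6, 35.1])
analysis = optimal_interpolation(grid, obs_loc, obs_val, mean_field,
                                 sigma_b=0.3, sigma_o=0.1, corr_len=120.0)
print(f"analysis at 430 km ~ {analysis[np.argmin(np.abs(grid - 430))]:.2f} psu")
```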

  14. Cluster Statistics of BTW Automata

    International Nuclear Information System (INIS)

    Ajanta Bhowal Acharyya

    2011-01-01

    The cluster statistics of BTW automata in the SOC states are obtained by extensive computer simulation. Various moments of the clusters are calculated and few results are compared with earlier available numerical estimates and exact results. Reasonably good agreement is observed. An extended statistical analysis has been made. (author)
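
    Gathering BTW cluster statistics amounts to driving the sandpile grain by grain and recording the size of each avalanche. A small self-contained sketch follows (modest lattice and drive length, so the measured moments will be far noisier than in the extensive simulations reported here; the moment analysis itself would then be applied to the returned sizes).

```python
import numpy as np

def btw_avalanche_sizes(n=24, grains=4000, rng=None):
    """Drive a Bak-Tang-Wiesenfeld sandpile on an n x n open-boundary lattice
    by adding single grains at random sites; return the avalanche size
    (number of topplings) triggered by each grain."""
    rng = rng or np.random.default_rng()
    z = rng.integers(0, 4, size=(n, n))      # start from a random stable pile
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(n), rng.integers(n)
        z[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if unstable.size == 0:
                break
            for a, b in unstable:
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < n and 0 <= nb < n:
                        z[na, nb] += 1       # grains fall off at the open boundary
        sizes.append(size)
    return np.array(sizes)

sizes = btw_avalanche_sizes(rng=np.random.default_rng(0))
nonzero = sizes[sizes > 0]
print("mean avalanche size:", nonzero.mean(), " largest:", nonzero.max())
```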

  15. Solar radiation data - statistical analysis and simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Mustacchi, C; Cena, V; Rocchi, M; Haghigat, F

    1984-01-01

    The activities consisted of collecting meteorological data on magnetic tape for ten European locations (with latitudes ranging from 42° to 56° N), analysing the multi-year sequences, developing mathematical models to generate synthetic sequences having the same statistical properties as the original data sets, and producing one or more Short Reference Years (SRY's) for each location. The meteorological parameters examined were (for all the locations) global + diffuse radiation on horizontal surface, dry bulb temperature, sunshine duration. For some of the locations additional parameters were available, namely, global, beam and diffuse radiation on surfaces other than horizontal, wet bulb temperature, wind velocity, cloud type, cloud cover. The statistical properties investigated were mean, variance, autocorrelation, crosscorrelation with selected parameters, probability density function. For all the meteorological parameters, various mathematical models were built: linear regression, stochastic models of the AR and the DAR type. In each case, the model with the best statistical behaviour was selected for the production of a SRY for the relevant parameter/location.
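
    The simplest of the models mentioned, an autoregressive process reproducing the mean, variance and lag-1 autocorrelation of the observed record, can be sketched as below; the DAR models and the Short Reference Year construction are beyond this illustration, and the "observed" series here is a synthetic stand-in.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of the lag-1 autoregression coefficient and the
    innovation standard deviation of a (de-meaned) daily series."""
    x = np.asarray(x, float) - np.mean(x)
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    resid = x[1:] - phi * x[:-1]
    return phi, resid.std()

def synthetic_series(mean, phi, sigma_e, n, rng=None):
    """Generate a synthetic daily sequence with the same mean, lag-1
    autocorrelation and innovation variance as the observed record."""
    rng = rng or np.random.default_rng()
    y = np.empty(n)
    y[0] = 0.0
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma_e)
    return mean + y

# "observed" daily global radiation (kWh/m^2), here just a synthetic stand-in
rng = np.random.default_rng(4)
obs = 3.5 + np.convolve(rng.normal(0, 1.2, 400), np.ones(3) / 3, mode="same")
phi, sigma_e = fit_ar1(obs)
synth = synthetic_series(obs.mean(), phi, sigma_e, obs.size, rng)
print(f"phi = {phi:.2f};  obs std = {obs.std():.2f}, synthetic std = {synth.std():.2f}")
```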

  16. Heterogeneous Rock Simulation Using DIP-Micromechanics-Statistical Methods

    Directory of Open Access Journals (Sweden)

    H. Molladavoodi

    2018-01-01

    Full Text Available Rock as a natural material is heterogeneous. Rock material consists of minerals, crystals, cement, grains, and microcracks. Each component of rock has a different mechanical behavior under applied loading conditions. Therefore, the distribution of rock components has an important effect on rock mechanical behavior, especially in the post-peak region. In this paper, the rock sample was studied by digital image processing (DIP), micromechanics, and statistical methods. Using image processing, the volume fractions of the minerals composing the rock sample were evaluated precisely. The mechanical properties of the rock matrix were determined based on upscaling micromechanics. In order to consider the effect of rock heterogeneities on mechanical behavior, a heterogeneity index was calculated in the framework of a statistical method. A Weibull distribution function was fitted to the Young's modulus distribution of the minerals. Finally, statistical and Mohr–Coulomb strain-softening models were used simultaneously as the constitutive model in a DEM code. The acoustic emission, strain energy release, and the effect of rock heterogeneities on the post-peak behavior were investigated. The numerical results are in good agreement with experimental data.
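
    As a small illustration of the Weibull step, the sketch below fits a two-parameter Weibull distribution to a made-up sample of mineral Young's moduli with SciPy; the shape parameter plays the role of a simple heterogeneity indicator and may differ from the paper's exact heterogeneity index (Python):

    # Fit a two-parameter Weibull distribution to a synthetic Young's modulus sample
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    young_moduli = rng.weibull(5.0, 500) * 60.0        # GPa, synthetic sample

    # floc=0 fixes the location parameter so a two-parameter Weibull is fitted
    shape, loc, scale = stats.weibull_min.fit(young_moduli, floc=0.0)
    print(f"Weibull shape (heterogeneity indicator) = {shape:.2f}, scale = {scale:.1f} GPa")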

  17. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium – Part 2: A pseudo-proxy study addressing the amplitude of solar forcing

    Directory of Open Access Journals (Sweden)

    A. Hind

    2012-08-01

    Full Text Available The statistical framework of Part 1 (Sundberg et al., 2012), for comparing ensemble simulation surface temperature output with temperature proxy and instrumental records, is implemented in a pseudo-proxy experiment. A set of previously published millennial forced simulations (Max Planck Institute – COSMOS), including both "low" and "high" solar radiative forcing histories together with other important forcings, was used to define "true" target temperatures as well as pseudo-proxy and pseudo-instrumental series. In a global land-only experiment, using annual mean temperatures at a 30-yr time resolution with realistic proxy noise levels, it was found that the low and high solar full-forcing simulations could be distinguished. In an additional experiment, where pseudo-proxies were created to reflect a current set of proxy locations and noise levels, the low and high solar forcing simulations could only be distinguished when the latter served as targets. To improve detectability of the low solar simulations, increasing the signal-to-noise ratio in local temperature proxies was more efficient than increasing the spatial coverage of the proxy network. The experience gained here will provide guidance when these methods are applied to real proxy and instrumental data, for example when the aim is to distinguish which of the alternative solar forcing histories is most compatible with the observed/reconstructed climate.

  18. Statistical Indicators for Religious Studies: Indicators of Level and Structure

    Science.gov (United States)

    Herteliu, Claudiu; Isaic-Maniu, Alexandru

    2009-01-01

    Using statistical indicators as vectors of information relative to the operational status of a phenomenon, including a religious one, is unanimously accepted. By introducing a system of statistical indicators we can also analyze the interfacing areas of a phenomenon. In this context, we have elaborated a system of statistical indicators specific to the…

  19. Simulating European wind power generation applying statistical downscaling to reanalysis data

    International Nuclear Information System (INIS)

    González-Aparicio, I.; Monforti, F.; Volker, P.; Zucker, A.; Careri, F.; Huld, T.; Badger, J.

    2017-01-01

    Highlights: •Wind speed spatial resolution highly influences calculated wind power peaks and ramps. •Reduction of wind power generation uncertainties using statistical downscaling. •Publicly available dataset of wind power generation hourly time series at NUTS2. -- Abstract: The growing share of electricity production from solar and mainly wind resources constantly increases the stochastic nature of the power system. Modelling the high share of renewable energy sources – and in particular wind power – crucially depends on the adequate representation of the intermittency and characteristics of the wind resource, which is related to the accuracy of the approach used to convert wind speed data into power values. One of the main factors contributing to the uncertainty in these conversion methods is the selection of the spatial resolution. Although numerical weather prediction models can simulate wind speeds at higher spatial resolution (up to 1 × 1 km) than a reanalysis (generally ranging from about 25 km to 70 km), they require high computational resources and massive storage systems; therefore, the most common alternative is to use reanalysis data. However, local wind features may not be captured by a reanalysis, and this can translate into misinterpretations of wind power peaks, ramping capacities, the behaviour of power prices, as well as bidding strategies for the electricity market. This study contributes to the understanding of what is captured by wind speed datasets of different spatial resolution, the importance of using high-resolution data for the conversion into power, and the implications for power system analyses. A methodology is proposed to increase the spatial resolution of a reanalysis. This study presents an open-access renewable generation time series dataset for the EU-28 and neighbouring countries at hourly intervals and at different geographical aggregation levels (country, bidding zone and administrative
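
    The wind-speed-to-power conversion that the abstract refers to can be sketched with a generic normalised power curve; the cut-in, rated and cut-out speeds below are illustrative and not those used in the study (Python):

    # Convert an hourly wind-speed series into a normalised capacity factor
    import numpy as np

    def wind_to_power(speed, cut_in=3.0, rated=12.0, cut_out=25.0):
        """Return capacity factor (0..1) for wind speed in m/s."""
        speed = np.asarray(speed, dtype=float)
        power = np.zeros_like(speed)
        ramp = (speed >= cut_in) & (speed < rated)
        # Cubic ramp between cut-in and rated speed, flat at rated, zero above cut-out
        power[ramp] = (speed[ramp]**3 - cut_in**3) / (rated**3 - cut_in**3)
        power[(speed >= rated) & (speed <= cut_out)] = 1.0
        return power

    hourly_speed = np.random.default_rng(3).weibull(2.0, 24 * 7) * 8.0
    print(wind_to_power(hourly_speed).mean())    # weekly mean capacity factor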

  20. Statistical perspectives on inverse problems

    DEFF Research Database (Denmark)

    Andersen, Kim Emil

    Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation … problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation…
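
    As an illustration of Bayesian posterior inference for inverse problems by Markov chain Monte Carlo, the sketch below runs a plain Metropolis sampler on a toy linear problem y = Ax + noise; it is not the thesis's electrical boundary-value problem, and the coupled-chain and simulated-tempering refinements are omitted (Python):

    # Metropolis sampling of the posterior for a toy linear inverse problem
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(20, 3))                  # forward operator
    x_true = np.array([1.0, -2.0, 0.5])
    y = A @ x_true + rng.normal(0.0, 0.1, 20)     # indirect, noisy measurements

    def log_posterior(x, sigma=0.1, tau=5.0):
        # Gaussian likelihood plus a Gaussian prior acting as regularisation
        return (-0.5 * np.sum((y - A @ x)**2) / sigma**2
                - 0.5 * np.sum(x**2) / tau**2)

    x = np.zeros(3)
    samples = []
    for _ in range(20000):
        proposal = x + rng.normal(0.0, 0.05, 3)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(x):
            x = proposal
        samples.append(x)
    samples = np.array(samples[5000:])            # discard burn-in
    print("posterior mean:", samples.mean(axis=0), "true:", x_true)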

  1. Simulation of deep one- and two-dimensional redshift surveys

    Science.gov (United States)

    Park, Changbom; Gott, J. Richard, III

    1991-03-01

    It is shown that slice or pencil-beam redshift surveys of galaxies can be simulated in a box with nonequal sides. This method saves a lot of computer time and memory while providing essentially the same results as from whole-cube simulations. A 2457.6/h Mpc-long rod (out to a redshift z = 0.58 in two opposite directions) is simulated using the standard biased cold dark matter model as an example to mimic the recent deep pencil-beam surveys by Broadhurst et al. (1990). The structures (spikes) seen in these simulated samples occur when the narrow pencil-beam pierces walls, filaments, and clusters appearing randomly along the line-of-sight. A statistical test for goodness of fit to a periodic lattice has been applied to the observations and the simulations. It is found that the statistical significance level (P = 15.4 percent) is not strong enough to reject the null hypothesis that the observations and the simulations were drawn at random from the same set.

  2. Non-statistically populated autoionizing levels of Li-like carbon: Hidden-crossings

    International Nuclear Information System (INIS)

    Deveney, E.F.; Krause, H.F.; Jones, N.L.

    1995-01-01

    The intensities of the Auger-electron lines from autoionizing (AI) states of Li-like (1s2s2l) configurations excited in ion-atom collisions vary as functions of the collision parameters such as, for example, the collision velocity. A statistical population of the three-electron levels is at best incomplete and underscores the intricate dynamical development of the electronic states. The authors compare several experimental studies to calculations using ''hidden-crossing'' techniques to explore some of the details of these Auger-electron intensity variation phenomena. The investigations show promising results suggesting that Auger-electron intensity variations can be used to probe collision dynamics

  3. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level, the second is interpretation of those joint probabilities to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte-Carlo simulation and design contours methods, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme values approach when extreme sea levels occur when either surge or wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when computation of sea levels is too complex to use the Monte-Carlo simulation method.
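
    A minimal sketch of the Monte-Carlo route: draw many years of correlated (still sea level, wave height) pairs, add a simple empirical wave set-up term, and read off the level exceeded on average once per return period. The distributions, dependence structure and set-up coefficient are invented for illustration; the study fits multivariate extreme value models instead (Python):

    # Monte-Carlo estimate of a return level for total water level = still level + wave set-up
    import numpy as np

    rng = np.random.default_rng(5)
    n_years = 100_000
    # Annual maxima of still sea level (m) and significant wave height (m), correlated
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], n_years)
    still_level = 4.0 + 0.3 * z[:, 0] + 0.1 * z[:, 0]**2
    wave_height = 3.0 + 1.0 * np.abs(z[:, 1])

    setup = 0.2 * wave_height                  # simple empirical wave set-up formula
    total = still_level + setup

    return_period = 100                        # years
    level_100y = np.quantile(total, 1.0 - 1.0 / return_period)
    print(f"estimated {return_period}-year total sea level: {level_100y:.2f} m")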

  4. The Effect of Project-Based Learning on Students' Statistical Literacy Levels for Data Representation

    Science.gov (United States)

    Koparan, Timur; Güven, Bülent

    2015-01-01

    The point of this study is to define the effect of project-based learning approach on 8th Grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test which consists of 12 open-ended questions in accordance with the views of experts was developed. Seventy 8th grade secondary-school students, 35…

  5. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist

  6. Multi-level Simulation of a Real Time Vibration Monitoring System Component

    Science.gov (United States)

    Robertson, Bryan A.; Wilkerson, Delisa

    2005-01-01

    This paper describes the development of a custom built Digital Signal Processing (DSP) printed circuit board designed to implement the Advanced Real Time Vibration Monitoring Subsystem proposed by Marshall Space Flight Center (MSFC) Transportation Directorate in 2000 for the Space Shuttle Main Engine Advanced Health Management System (AHMS). This Real Time Vibration Monitoring System (RTVMS) is being developed for ground use as part of the AHMS Health Management Computer-Integrated Rack Assembly (HMC-IRA). The HMC-IRA RTVMS design contains five DSPs which are highly interconnected through individual communication ports, shared memory, and a unique communication router that allows all the DSPs to receive digitized data from two multi-channel analog boards simultaneously. This paper will briefly cover the overall board design but will focus primarily on the state-of-the-art simulation environment within which this board was developed. This 16-layer board with over 1800 components and an additional mezzanine card has been an extremely challenging design. Utilization of a Mentor Graphics simulation environment provided the unique board and system level simulation capability to ascertain any timing or functional concerns before production. By combining VHDL, Synopsys Software and Hardware Models, and the Mentor Design Capture Environment, multiple simulations were developed to verify the RTVMS design. This multi-level simulation allowed the designers to achieve complete operability without error the first time the RTVMS printed circuit board was powered. The HMC-IRA design has completed all engineering and deliverable unit testing.

  7. An investigation of the trade-off between the count level and image quality in myocardial perfusion SPECT using simulated images: the effects of statistical noise and object variability on defect detectability

    International Nuclear Information System (INIS)

    He Xin; Links, Jonathan M; Frey, Eric C

    2010-01-01

    Quantum noise as well as anatomic and uptake variability in patient populations limits observer performance on a defect detection task in myocardial perfusion SPECT (MPS). The goal of this study was to investigate the relative importance of these two effects by varying acquisition time, which determines the count level, and assessing the change in performance on a myocardial perfusion (MP) defect detection task using both mathematical and human observers. We generated ten sets of projections of a simulated patient population with count levels ranging from 1/128 to around 15 times a typical clinical count level to simulate different levels of quantum noise. For the simulated population we modeled variations in patient, heart and defect size, heart orientation and shape, defect location, organ uptake ratio, etc. The projection data were reconstructed using the OS-EM algorithm with no compensation or with attenuation, detector response and scatter compensation (ADS). The images were then post-filtered and reoriented to generate short-axis slices. A channelized Hotelling observer (CHO) was applied to the short-axis images, and the area under the receiver operating characteristics (ROC) curve (AUC) was computed. For each noise level and reconstruction method, we optimized the number of iterations and cutoff frequencies of the Butterworth filter to maximize the AUC. Using the images obtained with the optimal iteration and cutoff frequency and ADS compensation, we performed human observer studies for four count levels to validate the CHO results. Both CHO and human observer studies demonstrated that observer performance was dependent on the relative magnitude of the quantum noise and the patient variation. When the count level was high, the patient variation dominated, and the AUC increased very slowly with changes in the count level for the same level of anatomic variability. When the count level was low, however, quantum noise dominated, and changes in the count level

  8. Optimum Safety Levels for Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Sørensen, John Dalsgaard

    2005-01-01

    Optimum design safety levels for rock and cube armoured rubble mound breakwaters without superstructure are investigated by numerical simulations on the basis of minimization of the total costs over the service life of the structure, taking into account typical uncertainties related to wave...... statistics and structure response. The study comprises the influence of interest rate, service lifetime, downtime costs and damage accumulation. Design limit states and safety classes for breakwaters are discussed. The results indicate that optimum safety levels are somewhat higher than the safety levels...

  9. A nonparametric spatial scan statistic for continuous data.

    Science.gov (United States)

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
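
    A much-simplified sketch of the idea: scan circular windows, compare values inside versus outside each window with the Wilcoxon rank-sum statistic, and calibrate the maximum by permutation. The window radius, data and planted cluster are illustrative only and do not reproduce the paper's exact formulation (Python):

    # Rank-sum-based spatial scan with a permutation p-value (simplified)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    coords = rng.uniform(0, 10, size=(200, 2))
    values = rng.lognormal(0.0, 1.0, 200)                # skewed continuous data
    values[np.linalg.norm(coords - [3, 3], axis=1) < 1.5] += 2.0   # planted cluster

    def max_scan_stat(vals):
        best = 0.0
        for center in coords:                            # candidate window centers
            inside = np.linalg.norm(coords - center, axis=1) < 1.5
            if 5 < inside.sum() < len(vals) - 5:
                z = abs(stats.ranksums(vals[inside], vals[~inside])[0])
                best = max(best, z)
        return best

    observed = max_scan_stat(values)
    null = [max_scan_stat(rng.permutation(values)) for _ in range(99)]
    p_value = (1 + sum(n >= observed for n in null)) / 100
    print(f"max rank-sum scan statistic = {observed:.2f}, permutation p = {p_value:.2f}")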

  10. The statistical interpretations of counting data from measurements of low-level radioactivity

    International Nuclear Information System (INIS)

    Donn, J.J.; Wolke, R.L.

    1977-01-01

    The statistical model appropriate to measurements of low-level or background-dominant radioactivity is examined and the derived relationships are applied to two practical problems involving hypothesis testing: 'Does the sample exhibit a net activity above background' and 'Is the activity of the sample below some preselected limit'. In each of these cases, the appropriate decision rule is formulated, procedures are developed for estimating the preset count which is necessary to achieve a desired probability of detection, and a specific sequence of operations is provided for the worker in the field. (author)
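
    The first decision rule ("does the sample exhibit a net activity above background?") can be sketched with the standard Currie-type critical level for paired gross and background counts of equal duration; the exact formulation and preset-count procedure in the paper may differ (Python):

    # Decision rule: declare net activity if net counts exceed the critical level L_c = k*sqrt(2B)
    import math

    def net_activity_detected(gross_counts, background_counts, k=1.645):
        """Return (net counts, critical level, decision) at roughly 95% confidence."""
        net = gross_counts - background_counts
        critical_level = k * math.sqrt(2.0 * background_counts)
        return net, critical_level, net > critical_level

    net, lc, detected = net_activity_detected(gross_counts=260, background_counts=230)
    print(f"net = {net}, critical level = {lc:.1f}, detected = {detected}")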

  11. Hierarchical Statistical 3D ' Atomistic' Simulation of Decanano MOSFETs: Drift-Diffusion, Hydrodynamic and Quantum Mechanical Approaches

    Science.gov (United States)

    Asenov, Asen; Brown, A. R.; Slavcheva, G.; Davies, J. H.

    2000-01-01

    When MOSFETs are scaled to deep submicron dimensions the discreteness and randomness of the dopant charges in the channel region introduces significant fluctuations in the device characteristics. This effect, predicted 20 years ago, has been confirmed experimentally and in simulation studies. The impact of the fluctuations on the functionality, yield, and reliability of the corresponding systems shifts the paradigm of numerical device simulation. It becomes insufficient to simulate only one device representing one macroscopic design in a continuous charge approximation. An ensemble of macroscopically identical but microscopically different devices has to be characterized by simulation of statistically significant samples. The aims of the numerical simulations shift from predicting the characteristics of a single device with continuous doping towards estimating the mean values and the standard deviations of basic design parameters such as threshold voltage, subthreshold slope, transconductance, drive current, etc. for the whole ensemble of 'atomistically' different devices in the system. It has to be pointed out that even the mean values obtained from 'atomistic' simulations are not identical to the values obtained from continuous doping simulations. In this paper we present a hierarchical approach to the 'atomistic' simulation of aggressively scaled decanano MOSFETs. A full-scale 3D drift-diffusion 'atomistic' simulation approach is first described and used for verification of the more economical, but also more restricted, options. To reduce the processor time and memory requirements at high drain voltage we have developed a self-consistent option based on a thin-slab solution of the current continuity equation only in the channel region. This is coupled to the solution of the Poisson equation in the whole simulation domain in the Gummel iteration cycles. The accuracy of this approach is investigated in comparison with the full self-consistent solution. At low drain

  12. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation for the two-sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
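
    Simulated power for a two-sample mean difference test, the building block of the meta-analytic question above, can be sketched as follows; the effect size, group sizes and normality assumption are illustrative rather than taken from the study (Python):

    # Estimate the power of a two-sample t-test by repeated simulation
    import numpy as np
    from scipy import stats

    def simulated_power(effect_size=0.5, n_per_group=30, alpha=0.05, n_sim=5000, seed=7):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(effect_size, 1.0, n_per_group)
            if stats.ttest_ind(a, b).pvalue < alpha:
                rejections += 1
        return rejections / n_sim

    print(f"simulated power: {simulated_power():.3f}")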

  13. Cluster size statistic and cluster mass statistic: two novel methods for identifying changes in functional connectivity between groups or conditions.

    Science.gov (United States)

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73%N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
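
    A heavily simplified sketch of cluster-based permutation inference: threshold a map of group-difference t-values, take the largest supra-threshold cluster size (CSS-like) or summed statistic (CMS-like), and build the family-wise null distribution by permuting group labels. Here "clusters" are contiguous runs in a 1-D array rather than connected sets of connectome edges, so this only illustrates the general principle (Python):

    # Cluster size / cluster mass statistics with a permutation null (1-D toy version)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    n_edges, n_per_group = 500, 12
    group_a = rng.normal(0.0, 1.0, (n_per_group, n_edges))
    group_b = rng.normal(0.0, 1.0, (n_per_group, n_edges))
    group_b[:, 100:130] += 1.0                       # planted connectivity difference

    def cluster_stats(a, b, threshold=2.0):
        t = stats.ttest_ind(a, b, axis=0).statistic
        above = np.abs(t) > threshold
        best_size, best_mass, size, mass = 0, 0.0, 0, 0.0
        for flag, value in zip(above, np.abs(t)):
            size, mass = (size + 1, mass + value) if flag else (0, 0.0)
            best_size, best_mass = max(best_size, size), max(best_mass, mass)
        return best_size, best_mass

    obs_size, obs_mass = cluster_stats(group_a, group_b)
    both = np.vstack([group_a, group_b])
    null_sizes = []
    for _ in range(199):                             # permute group labels
        idx = rng.permutation(2 * n_per_group)
        null_sizes.append(cluster_stats(both[idx[:n_per_group]], both[idx[n_per_group:]])[0])
    p = (1 + sum(s >= obs_size for s in null_sizes)) / 200
    print(f"CSS = {obs_size}, CMS = {obs_mass:.1f}, family-wise p = {p:.3f}")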

  14. Solution of the statistical bootstrap with Bose statistics

    International Nuclear Information System (INIS)

    Engels, J.; Fabricius, K.; Schilling, K.

    1977-01-01

    A brief and transparent way to introduce Bose statistics into the statistical bootstrap of Hagedorn and Frautschi is presented. The resulting bootstrap equation is solved by a cluster expansion for the grand canonical partition function. The shift of the ultimate temperature due to Bose statistics is determined through an iteration process. We discuss two-particle spectra of the decaying fireball (with given mass) as obtained from its grand microcanonical level density

  15. Review of methods for level density estimation from resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-01-01

    A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)

  16. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  17. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  18. A statistical method to get surface level air-temperature from satellite observations of precipitable water

    Digital Repository Service at National Institute of Oceanography (India)

    Pankajakshan, T.; Shikauchi, A; Sugimori, Y.; Kubota, M.

    …Ta and precipitable water. The rms errors of the SSMI-derived Ta in this case are found to be reduced to 1.0°C. Satellite-derived surface-level meteorological parameters are considered to be a better alternative to sparse ship… (Vol. 49, pp. 551–558, 1993.)

  19. Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation

    Science.gov (United States)

    Platnick, Steven E.

    2011-01-01

    The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.

  20. Python for probability, statistics, and machine learning

    CERN Document Server

    Unpingco, José

    2016-01-01

    This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...

  1. GENUS STATISTICS USING THE DELAUNAY TESSELLATION FIELD ESTIMATION METHOD. I. TESTS WITH THE MILLENNIUM SIMULATION AND THE SDSS DR7

    International Nuclear Information System (INIS)

    Zhang Youcai; Yang Xiaohu; Springel, Volker

    2010-01-01

    We study the topology of cosmic large-scale structure through the genus statistics, using galaxy catalogs generated from the Millennium Simulation and observational data from the latest Sloan Digital Sky Survey Data Release (SDSS DR7). We introduce a new method for constructing galaxy density fields and for measuring the genus statistics of its isodensity surfaces. It is based on a Delaunay tessellation field estimation (DTFE) technique that allows the definition of a piece-wise continuous density field and the exact computation of the topology of its polygonal isodensity contours, without introducing any free numerical parameter. Besides this new approach, we also employ the traditional approaches of smoothing the galaxy distribution with a Gaussian of fixed width, or by adaptively smoothing with a kernel that encloses a constant number of neighboring galaxies. Our results show that the Delaunay-based method extracts the largest amount of topological information. Unlike the traditional approach for genus statistics, it is able to discriminate between the different theoretical galaxy catalogs analyzed here, both in real space and in redshift space, even though they are based on the same underlying simulation model. In particular, the DTFE approach detects with high confidence a discrepancy of one of the semi-analytic models studied here compared with the SDSS data, while the other models are found to be consistent.

  2. Statistical analysis of dimer formation in supersaturated metal vapor based on molecular dynamics simulation

    Science.gov (United States)

    Korenchenko, Anna E.; Vorontsov, Alexander G.; Gelchinski, Boris R.; Sannikov, Grigorii P.

    2018-04-01

    We discuss the problem of dimer formation during the homogeneous nucleation of atomic metal vapor in an inert gas environment. We simulated nucleation with molecular dynamics and carried out the statistical analysis of double- and triple-atomic collisions as the two ways of long-lived diatomic complex formation. Close pair of atoms with lifetime greater than the mean time interval between atom-atom collisions is called a long-lived diatomic complex. We found that double- and triple-atomic collisions gave approximately the same probabilities of long-lived diatomic complex formation, but internal energy of the resulted state was essentially lower in the second case. Some diatomic complexes formed in three-particle collisions are stable enough to be a critical nucleus.

  3. Digitization and simulation realization of full range control system for steam generator water level

    International Nuclear Information System (INIS)

    Qian Hong; Ye Jianhua; Qian Fei; Li Chao

    2010-01-01

    In this paper, a full-range digital control system for the steam generator water level is designed using a scheme of single-element control and three-element cascade feed-forward control, and a software module configuration method is proposed to realize the water level control strategy. This control strategy is then applied in the operation of the nuclear power plant simulator. The simulation result curves indicate that the steam generator water level remains constant under stable operating conditions, and that when the load changes, the water level varies but is finally maintained at the constant value. (authors)

  4. Intuitive introductory statistics

    CERN Document Server

    Wolfe, Douglas A

    2017-01-01

    This textbook is designed to give an engaging introduction to statistics and the art of data analysis. The unique scope includes, but also goes beyond, classical methodology associated with the normal distribution. What if the normal model is not valid for a particular data set? This cutting-edge approach provides the alternatives. It is an introduction to the world and possibilities of statistics that uses exercises, computer analyses, and simulations throughout the core lessons. These elementary statistical methods are intuitive. Counting and ranking features prominently in the text. Nonparametric methods, for instance, are often based on counts and ranks and are very easy to integrate into an introductory course. The ease of computation with advanced calculators and statistical software, both of which factor into this text, allows important techniques to be introduced earlier in the study of statistics. This book's novel scope also includes measuring symmetry with Walsh averages, finding a nonp...

  5. Multispectral simulation environment for modeling low-light-level sensor systems

    Science.gov (United States)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low

  6. Statistical benchmark for BosonSampling

    International Nuclear Information System (INIS)

    Walschaers, Mattia; Mayer, Klaus; Buchleitner, Andreas; Kuipers, Jack; Urbina, Juan-Diego; Richter, Klaus; Tichy, Malte Christopher

    2016-01-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church–Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects. (fast track communication)

  7. Introduction to Statistics - eNotes

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Møller, Jan Kloppenborg; Andersen, Elisabeth Wreford

    2015-01-01

    Online textbook used in the introductory statistics courses at DTU. It provides a basic introduction to applied statistics for engineers. The necessary elements from probability theory are introduced (stochastic variable, density and distribution function, mean and variance, etc.) and thereafter...... the most basic statistical analysis methods are presented: Confidence band, hypothesis testing, simulation, simple and muliple regression, ANOVA and analysis of contingency tables. Examples with the software R are included for all presented theory and methods....

  8. A unified statistical framework for material decomposition using multienergy photon counting x-ray detectors

    International Nuclear Information System (INIS)

    Choi, Jiyoung; Kang, Dong-Goo; Kang, Sunghoon; Sung, Younghun; Ye, Jong Chul

    2013-01-01

    Purpose: Material decomposition using multienergy photon counting x-ray detectors (PCXD) has been an active research area over the past few years. Even with some success, the problem of optimal energy selection and three-material decomposition including malignant tissue is still an ongoing research topic, and more systematic studies are required. This paper aims to address this in a unified statistical framework in a mammographic environment. Methods: A unified statistical framework for energy level optimization and decomposition of three materials is proposed. In particular, an energy level optimization algorithm is derived using the theory of the minimum variance unbiased estimator, and an iterative algorithm is proposed for material composition as well as system parameter estimation under the unified statistical estimation framework. To verify the performance of the proposed algorithm, the authors performed simulation studies as well as real experiments using a physical breast phantom and an ex vivo breast specimen. Quantitative comparisons using various performance measures were conducted, and qualitative performance evaluations for the ex vivo breast specimen were also performed by comparing the ground-truth malignant tissue areas identified by radiologists. Results: Both simulation and real experiments confirmed that the energy bins optimized by the proposed method allow better material decomposition quality. Moreover, for specimen thickness estimation errors up to 2 mm, the proposed method provides good reconstruction results in both simulation and real ex vivo breast phantom experiments compared to existing methods. Conclusions: The proposed statistical framework of PCXD has been successfully applied for the energy optimization and decomposition of three materials in a mammographic environment. Experimental results using the physical breast phantom and ex vivo specimen support the practicality of the proposed algorithm

  9. Bayesian Statistical Analysis of Historical and Late Holocene Rates of Sea-Level Change

    Science.gov (United States)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    A fundamental concern associated with climate change is the rate at which sea levels are rising. Studies of past sea level (particularly beyond the instrumental data range) allow modern sea-level rise to be placed in a more complete context. Considering this, we perform a Bayesian statistical analysis on historical and late Holocene rates of sea-level change. The data that form the input to the statistical model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. The aims are to estimate rates of sea-level rise, to determine when modern rates of sea-level rise began and to observe how these rates have been changing over time. Many of the current methods for doing this use simple linear regression to estimate rates. This is often inappropriate as it is too rigid and it can ignore uncertainties that arise as part of the data collection exercise. This can lead to over confidence in the sea-level trends being characterized. The proposed Bayesian model places a Gaussian process prior on the rate process (i.e. the process that determines how rates of sea-level are changing over time). The likelihood of the observed data is the integral of this process. When dealing with proxy reconstructions, this is set in an errors-in-variables framework so as to take account of age uncertainty. It is also necessary, in this case, for the model to account for glacio-isostatic adjustment, which introduces a covariance between individual age and sea-level observations. This method provides a flexible fit and it allows for the direct estimation of the rate process with full consideration of all sources of uncertainty. Analysis of tide-gauge datasets and proxy reconstructions in this way means that changing rates of sea level can be estimated more comprehensively and accurately than previously possible. The model captures the continuous and dynamic evolution of sea-level change and results show that not only are modern sea levels rising but that the rates

  10. Full counting statistics of level renormalization in electron transport through double quantum dots

    International Nuclear Information System (INIS)

    Luo Junyan; Shen Yu; Cen Gang; He Xiaoling; Wang Changrong; Jiao Hujun

    2011-01-01

    We examine the full counting statistics of electron transport through double quantum dots coupled in series, with particular attention being paid to the unique features originating from level renormalization. It is clearly illustrated that the energy renormalization gives rise to a dynamic charge blockade mechanism, which eventually results in super-Poissonian noise. Coupling of the double dots to an external heat bath leads to dephasing and relaxation mechanisms, which are demonstrated to suppress the noise in a unique way.

  11. Understanding statistical concepts using S-PLUS

    CERN Document Server

    Schumacker, Randall E

    2001-01-01

    Written as a supplemental text for an introductory or intermediate statistics course, this book is organized along the lines of many popular statistics texts. The chapters provide a good conceptual understanding of basic statistics and include exercises that use S-PLUS simulation programs. Each chapter lists a set of objectives and a summary.The book offers a rich insight into how probability has shaped statistical procedures in the behavioral sciences, as well as a brief history behind the creation of various statistics. Computational skills are kept to a minimum by including S-PLUS programs

  12. Enhanced Discrete-Time Scheduler Engine for MBMS E-UMTS System Level Simulator

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António

    2007-01-01

    In this paper the design of an E-UMTS system level simulator developed for the study of optimization methods for the MBMS is presented. The simulator uses a discrete event based philosophy, which captures the dynamic behavior of the Radio Network System. This dynamic behavior includes the user mobility, radio interfaces and the Radio Access Network. Emphasis is given to the enhancements developed for the simulator core, the Event Scheduler Engine. Two implementations for the Event Scheduler Engine are proposed, one optimized for single-core processors and the other for multi-core ones.
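
    The core of such an Event Scheduler Engine can be sketched as a priority queue of time-stamped events; the sketch below is a generic single-core discrete-event loop and does not reproduce the paper's multi-core variant or its radio-network models (Python):

    # Minimal discrete-event scheduler: events ordered by firing time in a heap
    import heapq
    import itertools

    class EventScheduler:
        def __init__(self):
            self._queue = []                   # (time, sequence, handler, args)
            self._counter = itertools.count()  # tie-breaker keeps insertion order
            self.now = 0.0

        def schedule(self, delay, handler, *args):
            heapq.heappush(self._queue, (self.now + delay, next(self._counter), handler, args))

        def run(self, until):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, handler, args = heapq.heappop(self._queue)
                handler(self, *args)

    def user_moves(sim, user_id):
        print(f"t={sim.now:.1f}s: user {user_id} updates position and radio link")
        sim.schedule(1.0, user_moves, user_id)     # periodic mobility update

    sim = EventScheduler()
    sim.schedule(0.5, user_moves, 1)
    sim.run(until=3.0)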

  13. Optimization of simulated moving bed (SMB) chromatography: a multi-level optimization procedure

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Lim, Young-il

    2004-01-01

    objective functions (productivity and desorbent consumption), employing the standing wave analysis, the true moving bed (TMB) model and the simulated moving bed (SMB) model. The procedure is constructed on a non-worse solution property advancing level by level and its solution does not mean a global optimum...

  14. Efficient Uplink Modeling for Dynamic System-Level Simulations of Cellular and Mobile Networks

    Directory of Open Access Journals (Sweden)

    Lobinger Andreas

    2010-01-01

    Full Text Available A novel theoretical framework for uplink simulations is proposed. It allows investigations which have to cover a very long (real-)time and which at the same time require a certain level of accuracy in terms of radio resource management, quality of service, and mobility. This is of particular importance for simulations of self-organizing networks. For this purpose, conventional system level simulators are not suitable due to slow simulation speeds far beyond real-time. Simpler, snapshot-based tools are lacking the aforementioned accuracy. The runtime improvements are achieved by deriving abstract theoretical models for the MAC layer behavior. The focus in this work is Long Term Evolution, and the most important uplink effects such as fluctuating interference, power control, power limitation, adaptive transmission bandwidth, and control channel limitations are considered. Limitations of the abstract models will be discussed as well. Exemplary results are given at the end to demonstrate the capability of the derived framework.

  15. Statistics without Tears: Complex Statistics with Simple Arithmetic

    Science.gov (United States)

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…

  16. Level-one modules library for DSNP: Dynamic Simulator for Nuclear Power-plants

    International Nuclear Information System (INIS)

    Saphier, D.

    1978-09-01

    The Dynamic Simulator for Nuclear Power-plants (DSNP) is a system of programs and data sets by which a nuclear power plant or part thereof can be simulated at different levels of sophistication. The acronym DSNP is used interchangeably for the DSNP language, for the DSNP precompiler, for the DSNP libraries, and for the DSNP document generator. The DSNP language is a set of simple block oriented statements, which together with the appropriate data, comprise a simulation of a nuclear power plant. The majority of the DSNP statements will result in the inclusion of a simulated physical module into the program. FORTRAN statements can be inserted with no restrictions among DSNP statements

  17. Application of a modified conceptual rainfall-runoff model to simulation of groundwater level in an undefined watershed.

    Science.gov (United States)

    Hong, Nian; Hama, Takehide; Suenaga, Yuichi; Aqili, Sayed Waliullah; Huang, Xiaowu; Wei, Qiaoyan; Kawagoshi, Yasunori

    2016-01-15

    Groundwater level simulation models can help ensure the proper management and use of urban and rural water supply. In this paper, we propose a groundwater level tank model (GLTM) based on a conceptual rainfall-runoff model (tank model) to simulate fluctuations in groundwater level. The variables used in the simulations consist of daily rainfall and daily groundwater level, which were recorded between April 2011 and March 2015 at two representative observation wells in Kumamoto City, Japan. We determined the best-fit model parameters by root-mean-square error through use of the Shuffled Complex Evolution-University of Arizona algorithm on a simulated data set. Calibration and validation results were evaluated by their coefficients of determination, Nash-Sutcliffe efficiency coefficients, and root-mean-square error values. The GLTM provided accurate results in both the calibration and validation of fluctuations in groundwater level. The split-sample test results indicate good reliability. These results indicate that this model can provide a simple approach to the accurate simulation of groundwater levels. Copyright © 2015 Elsevier B.V. All rights reserved.
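
    A minimal single-tank sketch in the spirit of the GLTM: rainfall fills a storage that drains through a side outlet and leaks downward, and the storage is mapped to a groundwater level. All parameter values below are placeholders that would need calibration (the paper uses SCE-UA with RMSE for that step) and the structure may differ from the paper's exact tank configuration (Python):

    # Single-tank rainfall-to-groundwater-level model (illustrative parameters)
    import numpy as np

    def tank_model(rainfall, a=0.05, b=0.02, h0=2.0, storage0=50.0, scale=0.1):
        """Return simulated groundwater levels (m) for a daily rainfall series (mm/day)."""
        storage, levels = storage0, []
        for r in rainfall:
            outflow = a * max(storage - h0 / scale, 0.0)   # side-outlet discharge
            leakage = b * storage                          # downward percolation
            storage = max(storage + r - outflow - leakage, 0.0)
            levels.append(scale * storage)                 # storage -> level conversion
        return np.array(levels)

    rng = np.random.default_rng(9)
    daily_rain = rng.gamma(0.3, 10.0, 365)                 # synthetic rainfall record
    simulated_level = tank_model(daily_rain)
    print(f"mean simulated groundwater level: {simulated_level.mean():.2f} m")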

  18. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, bounding-sampling method, forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculating efficiency

  19. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  20. A constitutive model and numerical simulation of sintering processes at macroscopic level

    Science.gov (United States)

    Wawrzyk, Krzysztof; Kowalczyk, Piotr; Nosewicz, Szymon; Rojek, Jerzy

    2018-01-01

    This paper presents modelling of both single- and double-phase powder sintering processes at the macroscopic level. In particular, its constitutive formulation, numerical implementation and numerical tests are described. The macroscopic constitutive model is based on the assumption that the sintered material is a continuous medium. The parameters of the constitutive model for material under sintering are determined by simulation of sintering at the microscopic level using a micro-scale model. Numerical tests were carried out for a cylindrical specimen under hydrostatic and uniaxial pressure. Results of the macroscopic analysis are compared against the microscopic model results. Moreover, the numerical simulations are validated by comparison with experimental results. The simulations and preparation of the model are carried out in Abaqus FEA, software for finite element analysis and computer-aided engineering. The mechanical model is defined by the user procedure "Vumat", which was developed by the first author in the Fortran programming language. The modelling presented in the paper can be used to optimize and to better understand the process.

  1. Simple statistical model for branched aggregates

    DEFF Research Database (Denmark)

    Lemarchand, Claire; Hansen, Jesper Schmidt

    2015-01-01

    We propose a statistical model that can reproduce the size distribution of any branched aggregate, including amylopectin, dendrimers, molecular clusters of monoalcohols, and asphaltene nanoaggregates. It is based on the conditional probability for one molecule to form a new bond with a molecule, given that it already has bonds with others. The model is applied here to asphaltene nanoaggregates observed in molecular dynamics simulations of Cooee bitumen. The variation with temperature of the probabilities deduced from this model is discussed in terms of statistical mechanics arguments. … The relevance of the statistical model in the case of asphaltene nanoaggregates is checked by comparing the predicted value of the probability for one molecule to have exactly i bonds with the same probability directly measured in the molecular dynamics simulations. The agreement is satisfactory…

  2. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from the nominal level and the power.

  3. A Simplified Algorithm for Statistical Investigation of Damage Spreading

    International Nuclear Information System (INIS)

    Gecow, Andrzej

    2009-01-01

    On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which makes it difficult to define a fitness. The main statistical effects of the adaptive condition are the result of the small-change tendency, and to appear they only need a statistically correct size of the damage initiated by an evolutionary change of the system. This observation allows us to cut feedback loops and, in effect, to obtain a particular statistically correct state instead of a long circular attractor, which in the quenched model is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e. the statistical connection of the structural parameters of the initial change with the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are depicted correctly, in comparison with the Derrida annealed approximation, which expects equilibrium levels for large networks; the algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
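
    As a rough illustration of the kind of procedure described above, the sketch below propagates a single initial change through a random Boolean network while evaluating each affected node only once, which is one simple way of cutting feedback loops. It is a hedged sketch, not Gecow's reversed-annealed algorithm or the Pascal program mentioned in the record; the network size, connectivity, and random functions are arbitrary choices.

    ```python
    # Hedged sketch: one-pass damage spreading on a random Boolean network.
    # Not the author's exact algorithm; it only illustrates propagating a
    # single initial change while evaluating each affected node once.
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 2000, 2                                     # nodes, inputs per node
    inputs = rng.integers(0, N, size=(N, K))           # wiring (preserved)
    tables = rng.integers(0, 2, size=(N, 2**K))        # random Boolean functions
    init = rng.integers(0, 2, size=N)                  # random signal state

    def node_output(i, signal):
        """Output of node i given the global signal vector."""
        idx = 0
        for b in signal[inputs[i]]:
            idx = (idx << 1) | int(b)
        return tables[i, idx]

    outputs_of = [[] for _ in range(N)]                # reverse wiring
    for i in range(N):
        for j in inputs[i]:
            outputs_of[j].append(i)

    start = 0
    ref, dam = init.copy(), init.copy()
    dam[start] ^= 1                                    # the initial small change
    damaged, frontier = {start}, [start]
    while frontier:
        nxt = []
        for j in frontier:
            for i in outputs_of[j]:
                if i in damaged:
                    continue                           # each node evaluated once
                out_ref, out_dam = node_output(i, ref), node_output(i, dam)
                if out_ref != out_dam:
                    ref[i], dam[i] = out_ref, out_dam
                    damaged.add(i)
                    nxt.append(i)
        frontier = nxt

    print(f"damage size: {len(damaged)} of {N} nodes")
    ```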

  4. The Impact of Subsampling on MODIS Level-3 Statistics of Cloud Optical Thickness and Effective Radius

    Science.gov (United States)

    Oreopoulos, Lazaros

    2004-01-01

    The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of the 1-km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5-km subsampling on the mean, standard deviation, and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second-order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of a random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect, and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
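
    The basic error estimate described above amounts to differencing statistics computed from full-resolution and subsampled pixels. The sketch below illustrates this on a synthetic field; the gamma-distributed "optical thickness" values and the cell size are assumptions, not MODIS data.

    ```python
    # Hedged sketch: compare grid-cell statistics from full-resolution data
    # with those from a regular subsample (every 5th pixel, mimicking 5-km
    # sampling of 1-km data). The synthetic field is purely illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    full = rng.gamma(shape=2.0, scale=5.0, size=(100, 100))   # one grid cell
    sub = full[::5, ::5]                                      # 5-km subsample

    for name, stat in [("mean", np.mean), ("std", np.std)]:
        err = stat(sub) - stat(full)
        print(f"{name}: full={stat(full):.3f} sub={stat(sub):.3f} error={err:+.3f}")

    # Averaging the error over many cells shows its largely random character:
    errors = []
    for _ in range(500):
        cell = rng.gamma(2.0, 5.0, size=(100, 100))
        errors.append(cell[::5, ::5].mean() - cell.mean())
    print(f"mean error over 500 cells: {np.mean(errors):+.4f}")
    ```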

  5. A new equation of state Based on Nuclear Statistical Equilibrium for Core-Collapse Simulations

    Science.gov (United States)

    Furusawa, Shun; Yamada, Shoichi; Sumiyoshi, Kohsuke; Suzuki, Hideyuki

    2012-09-01

    We calculate a new equation of state for baryons at sub-nuclear densities for use in core-collapse simulations of massive stars. The formulation is based on the nuclear statistical equilibrium description and the liquid-drop approximation for nuclei. The model free energy to be minimized is calculated by relativistic mean-field theory for nucleons and by the mass formula for nuclei with atomic numbers up to ~ 1000. We have also taken into account the pasta phase. We find that the free energy and other thermodynamic quantities are not very different from those given by the standard EOSs that adopt the single-nucleus approximation. On the other hand, the average mass is systematically different, which may have an important effect on the rates of electron captures and coherent neutrino scattering on nuclei in supernova cores.

  6. Radiation transport in statistically inhomogeneous rocks

    International Nuclear Information System (INIS)

    Lukhminskij, B.E.

    1975-01-01

    A study has been made of radiation transfer in statistically inhomogeneous rocks. The statistical character of the rock composition has been taken into account through randomization of the density. Formulas are summarized for the sigma-distribution, homogeneous density, and the Simpson and Cauchy distributions. Consideration is given to the statistics of mean-square ranges in a medium simulated by a jump Markov random function. A quantitative criterion of rock heterogeneity is proposed.

  7. Survey of statistical and sampling needs for environmental monitoring of commercial low-level radioactive waste disposal facilities

    International Nuclear Information System (INIS)

    Eberhardt, L.L.; Thomas, J.M.

    1986-07-01

    This project was designed to develop guidance for implementing 10 CFR Part 61 and to determine the overall needs for sampling and statistical work in characterizing, surveying, monitoring, and closing commercial low-level waste sites. When cost-effectiveness and statistical reliability are of prime importance, then double sampling, compositing, and stratification (with optimal allocation) are identified as key issues. If the principal concern is avoiding questionable statistical practice, then the applicability of kriging (for assessing spatial pattern), methods for routine monitoring, and use of standard textbook formulae in reporting monitoring results should be reevaluated. Other important issues identified include sampling for estimating model parameters and the use of data from left-censored (less than detectable limits) distributions

  8. Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance

    Science.gov (United States)

    Marquez, Jessica J.; Ramirez, Margarita

    2014-01-01

    A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in level of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation, and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher level of failures, regardless of the level of automation.
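
    D Prime and Decision Criterion are the standard signal-detection quantities derived from hit and false-alarm rates. The sketch below shows the usual computation; the counts are invented for illustration and are not the experiment's data.

    ```python
    # Hedged sketch: deriving D Prime (sensitivity) and Decision Criterion
    # from failure-diagnosis accuracy, as in classical signal detection
    # theory. The hit/false-alarm counts below are made up for illustration.
    from scipy.stats import norm

    hits, misses = 18, 6                 # failures correctly diagnosed / missed
    false_alarms, correct_rej = 3, 21    # "failures" reported on nominal trials

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rej)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
    ```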

  9. Statistics of dislocation pinning at localized obstacles

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, A. [S. N. Bose National Centre for Basic Sciences, Salt Lake, Kolkata 700098 (India); Bhattacharya, M., E-mail: mishreyee@vecc.gov.in; Barat, P. [Variable Energy Cyclotron Centre, 1/AF Bidhannagar, Kolkata 700064 (India)

    2014-10-14

    Pinning of dislocations at nanosized obstacles like precipitates, voids, and bubbles is a crucial mechanism in the context of phenomena like hardening and creep. The interaction between such an obstacle and a dislocation is often studied at a fundamental level by means of analytical tools, atomistic simulations, and finite element methods. Nevertheless, the information extracted from such studies cannot be utilized to its maximum extent on account of insufficient information about the underlying statistics of this process, which comprises a large number of dislocations and obstacles in a system. Here, we propose a new statistical approach, in which the statistics of pinning of dislocations by idealized spherical obstacles is explored by taking into account the generalized size distribution of the obstacles along with the dislocation density within a three-dimensional framework. Starting with a minimal set of material parameters, the framework employs the method of geometrical statistics with a few simple assumptions compatible with the real physical scenario. The application of this approach, in combination with the knowledge of fundamental dislocation-obstacle interactions, has successfully been demonstrated for dislocation pinning at nanovoids in neutron-irradiated type 316 stainless steel with regard to the non-conservative motion of dislocations. An interesting phenomenon of transition from rare pinning to multiple pinning regimes with increasing irradiation temperature is revealed.

  10. Constructivist and Behaviorist Approaches: Development and Initial Evaluation of a Teaching Practice Scale for Introductory Statistics at the College Level

    Directory of Open Access Journals (Sweden)

    Rossi A. Hassad

    2011-07-01

    This study examined the teaching practices of 227 college instructors of introductory statistics from the health and behavioral sciences. Using primarily multidimensional scaling (MDS) techniques, a two-dimensional, 10-item teaching-practice scale, the Teaching of Introductory Statistics Scale (TISS), was developed. The two dimensions (subscales) are characterized as constructivist and behaviorist; they are orthogonal. Criterion validity of the TISS was established in relation to instructors' attitude toward teaching, and acceptable levels of reliability were obtained. A significantly higher level of behaviorist practice (less reform-oriented) was reported by instructors from the U.S., as well as by instructors with academic degrees in mathematics and engineering, whereas those with membership in professional organizations tended to be more reform-oriented (or constructivist). The TISS, thought to be the first of its kind, will allow the statistics education community to empirically assess and describe the pedagogical approach (teaching practice) of instructors of introductory statistics in the health and behavioral sciences at the college level, and to determine what learning outcomes result from the different teaching-practice orientations. Further research is required in order to be conclusive about the structural and psychometric properties of this scale, including its stability over time.

  11. Muon Simulation at the Daya Bay SIte

    International Nuclear Information System (INIS)

    Mengyun, Guan; Jun, Cao; Changgen, Yang; Yaxuan, Sun; Luk, Kam-Biu

    2006-01-01

    Using a fairly high-resolution mountain profile, we simulated the underground muon background at the Daya Bay site. To obtain the sea-level muon flux parameterization, a modification of the standard Gaisser formula was introduced according to the world muon data. The MUSIC code was used to transport muons through the mountain rock. To carry out the simulation, we first generate a statistical sample of sea-level muon events according to the sea-level muon flux distribution formula; we then calculate the slant depth of muons passing through the mountain using an interpolation method based on the digitized data of the mountain; finally, we transport the muons through rock to obtain an underground muon sample, from which we derive the muon flux, mean energy, energy distribution, and angular distribution.
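
    The generation step can be illustrated with rejection sampling from a sea-level flux parameterization. Since the record does not give the modified formula, the sketch below uses the standard Gaisser parameterization as a stand-in, with a log-uniform energy proposal and a uniform zenith proposal; the energy range and sample size are arbitrary choices.

    ```python
    # Hedged sketch: drawing a sea-level muon sample by rejection sampling
    # from the *standard* Gaisser parameterization (only a stand-in for the
    # modified formula used in the paper; the proper zenith-angle weighting
    # of the generated sample is not addressed here).
    import numpy as np

    rng = np.random.default_rng(1)

    def gaisser(E, cos_theta):
        """Standard Gaisser sea-level flux, GeV^-1 cm^-2 s^-1 sr^-1 (approx.)."""
        return 0.14 * E**-2.7 * (1.0 / (1.0 + 1.1 * E * cos_theta / 115.0)
                                 + 0.054 / (1.0 + 1.1 * E * cos_theta / 850.0))

    def sample_muons(n, e_min=10.0, e_max=1e4):
        """Rejection-sample (E, cos_theta) pairs; energies drawn log-uniformly."""
        out = []
        f_max = gaisser(e_min, 0.0) * e_min      # envelope for the weight f(E)*E
        while len(out) < n:
            E = np.exp(rng.uniform(np.log(e_min), np.log(e_max)))
            c = rng.uniform(0.0, 1.0)            # cos(zenith angle)
            if rng.uniform(0.0, f_max) < gaisser(E, c) * E:
                out.append((E, c))
        return np.array(out)

    sample = sample_muons(5000)
    print("mean surface energy [GeV]:", sample[:, 0].mean())
    print("mean cos(zenith):", sample[:, 1].mean())
    ```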

  12. Statistical gravitational waveform models: What to simulate next?

    Science.gov (United States)

    Doctor, Zoheyr; Farr, Ben; Holz, Daniel E.; Pürrer, Michael

    2017-12-01

    Models of gravitational waveforms play a critical role in detecting and characterizing the gravitational waves (GWs) from compact binary coalescences. Waveforms from numerical relativity (NR), while highly accurate, are too computationally expensive to produce to be used directly with Bayesian parameter estimation tools like Markov-chain Monte Carlo and nested sampling. We propose a Gaussian process regression (GPR) method to generate reduced-order-model waveforms based only on existing accurate (e.g. NR) simulations. Using a training set of simulated waveforms, our GPR approach produces interpolated waveforms along with uncertainties across the parameter space. As a proof of concept, we use a training set of IMRPhenomD waveforms to build a GPR model in the 2-d parameter space of mass ratio q and equal-and-aligned spin χ1=χ2. Using a regular, equally spaced grid of 120 IMRPhenomD training waveforms in q ∈ [1, 3] and χ1 ∈ [-0.5, 0.5], the GPR mean approximates IMRPhenomD in this space with mismatches below 4.3 × 10^-5. Our approach could in principle use training waveforms directly from numerical relativity. Beyond interpolation of waveforms, we also present a greedy algorithm that utilizes the errors provided by our GPR model to optimize the placement of future simulations. In a fiducial test case we find that using the greedy algorithm to iteratively add simulations achieves GPR errors that are ~1 order of magnitude lower than the errors from using Latin-hypercube or square training grids.
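
    The interpolation-plus-greedy-placement idea can be sketched with an off-the-shelf Gaussian process regressor: fit on a training grid in (q, chi), then propose the next simulation where the predictive standard deviation is largest. The toy "waveform feature" below stands in for a reduced-order waveform representation and is not IMRPhenomD.

    ```python
    # Hedged sketch: Gaussian process regression over a 2-d parameter space,
    # with a greedy rule that places the next training simulation where the
    # predictive uncertainty is largest. The toy feature is illustrative only.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)

    def toy_feature(q, chi):
        # stand-in for one reduced-order coefficient of a waveform model
        return np.sin(2.0 * q) + 0.5 * chi * q

    # training grid in (mass ratio q, aligned spin chi)
    qg, cg = np.meshgrid(np.linspace(1, 3, 8), np.linspace(-0.5, 0.5, 8))
    X = np.column_stack([qg.ravel(), cg.ravel()])
    y = toy_feature(X[:, 0], X[:, 1])

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 0.5]),
                                   normalize_y=True).fit(X, y)

    # dense candidate set; greedy rule: next simulation at max predictive std
    cand = np.column_stack([rng.uniform(1, 3, 2000),
                            rng.uniform(-0.5, 0.5, 2000)])
    mean, std = gpr.predict(cand, return_std=True)
    best = cand[np.argmax(std)]
    print("next simulation suggested at (q, chi) =", best)
    print("max interpolation error on candidates:",
          np.abs(mean - toy_feature(cand[:, 0], cand[:, 1])).max())
    ```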

  13. Representative volume size: A comparison of statistical continuum mechanics and statistical physics

    Energy Technology Data Exchange (ETDEWEB)

    AIDUN,JOHN B.; TRUCANO,TIMOTHY G.; LO,CHI S.; FYE,RICHARD M.

    1999-05-01

    In this combination background and position paper, the authors argue that careful work is needed to develop accurate methods for relating the results of fine-scale numerical simulations of material processes to meaningful values of macroscopic properties for use in constitutive models suitable for finite element solid mechanics simulations. To provide a definite context for this discussion, the problem is couched in terms of the lack of general objective criteria for identifying the size of the representative volume (RV) of a material. The objective of this report is to lay out at least the beginnings of an approach for applying results and methods from statistical physics to develop concepts and tools necessary for determining the RV size, as well as alternatives to RV volume-averaging for situations in which the RV is unmanageably large. The background necessary to understand the pertinent issues and statistical physics concepts is presented.

  14. Statistics for X-chromosome associations.

    Science.gov (United States)

    Özbek, Umut; Lin, Hui-Min; Lin, Yan; Weeks, Daniel E; Chen, Wei; Shaffer, John R; Purcell, Shaun M; Feingold, Eleanor

    2018-06-13

    In a genome-wide association study (GWAS), association between genotype and phenotype at autosomal loci is generally tested by regression models. However, X-chromosome data are often excluded from published analyses of autosomes because of the difference between males and females in number of X chromosomes. Failure to analyze X-chromosome data at all is obviously less than ideal, and can lead to missed discoveries. Even when X-chromosome data are included, they are often analyzed with suboptimal statistics. Several mathematically sensible statistics for X-chromosome association have been proposed. The optimality of these statistics, however, is based on very specific simple genetic models. In addition, while previous simulation studies of these statistics have been informative, they have focused on single-marker tests and have not considered the types of error that occur even under the null hypothesis when the entire X chromosome is scanned. In this study, we comprehensively tested several X-chromosome association statistics using simulation studies that include the entire chromosome. We also considered a wide range of trait models for sex differences and phenotypic effects of X inactivation. We found that models that do not incorporate a sex effect can have large type I error in some cases. We also found that many of the best statistics perform well even when there are modest deviations, such as trait variance differences between the sexes or small sex differences in allele frequencies, from assumptions. © 2018 WILEY PERIODICALS, INC.

  15. Modeling and simulation of the agricultural sprayer boom leveling system

    KAUST Repository

    Sun, Jian

    2011-01-01

    According to agricultural precision requirements, the distance from the sprayer nozzles to the crops should be kept between 50 cm and 70 cm. The sprayer boom also needs to be kept parallel to the field during the application process. In this way the quality of the chemical droplet distribution on the crops can be guaranteed. In this paper we design a sprayer boom leveling system for agricultural sprayer vehicles that combines a four-rod linkage self-leveling suspension with an electro-hydraulic auto-leveling system. The dynamic analysis shows that the suspension can achieve excellent self-leveling within a comparatively small inclination range. In addition, we build a compensation controller for the electro-hydraulic system based on the mathematical model. With simulations we can optimize the performance of this controller to ensure a fast leveling response to an inclined sprayer boom. © 2011 IEEE.
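
    A minimal sketch of the compensation idea is a feedback loop that drives the boom inclination back to zero through an actuator with lag. The first-order actuator model and the controller gains below are assumptions for illustration; the paper's electro-hydraulic model is not reproduced here.

    ```python
    # Hedged sketch: a proportional-integral compensation loop levelling an
    # inclined boom through an assumed first-order actuator. All parameter
    # values are invented for illustration.
    import numpy as np

    dt, T = 0.01, 5.0                 # time step [s], horizon [s]
    tau = 0.4                         # assumed actuator time constant [s]
    kp, ki = 6.0, 2.0                 # assumed controller gains

    angle = np.deg2rad(5.0)           # initial boom inclination [rad]
    rate, integral, history = 0.0, 0.0, []
    for t in np.arange(0.0, T, dt):
        error = 0.0 - angle                    # target: boom parallel to field
        integral += error * dt
        u = kp * error + ki * integral         # commanded correction rate
        rate += dt * (u - rate) / tau          # assumed first-order actuator lag
        angle += dt * rate
        history.append(np.rad2deg(angle))      # inclination trace [deg]

    print(f"inclination after {T:.0f} s: {history[-1]:.3f} deg")
    ```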

  16. Simulation and Validation of the ATLAS Level-1 Topological Trigger

    CERN Document Server

    Bakker, Pepijn Johannes; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment has recently commissioned a new component of its first-level trigger: the L1 topological trigger. This system, using state-of-the-art FPGA processors, makes it possible to reject events by applying topological requirements, such as kinematic criteria involving clusters, jets, muons, and total transverse energy. The data recorded using the L1 topological trigger demonstrate that this innovative trigger strategy allows for an improved rejection rate without efficiency loss. This improvement has been shown for several relevant physics processes leading to low-$p_T$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, an accurate simulation of the L1 topological trigger is used to validate and optimize the performance of this trigger. To reach such accuracy, the simulation must take into account the fact that the firmware algorithms are executed on an FPGA architecture, while the simulation is executed on a floating-point architecture.

  17. Simulating Radionuclide Migrations of Low-level Wastes in Nearshore Environment

    Science.gov (United States)

    Lu, C. C.; Li, M. H.; Chen, J. S.; Yeh, G. T.

    2016-12-01

    Tunnel disposal into nearshore mountains was tentatively selected as one of the final disposal options for low-level wastes in Taiwan. Safety assessment of radionuclide migration in the far field may involve geosphere processes under coastal environments extending into the nearshore ocean. In this study the 3-D HYDROFEOCHE5.6 numerical model was used to perform simulations of groundwater flow and radionuclide transport with decay chains. The domain of interest at the surface includes nearby watersheds delineated by digital elevation models and the nearshore seabed. Depths of as much as 800 m below the surface and 400 m below the seabed were considered for the simulations. The disposal site was located 200 m below the surface. Release rates of radionuclides from the near field were estimated by analytical solutions of radionuclide diffusion with decay out of the engineered barriers. Far-field safety assessments were performed from the release of radionuclides out of the engineered barriers to a time scale of 10,000 years. Sensitivity analyses of geosphere and transport parameters were performed to improve our understanding of the safety of final disposal of low-level waste in nearshore environments.

  18. Computer simulation of nonequilibrium processes

    International Nuclear Information System (INIS)

    Wallace, D.C.

    1985-07-01

    The underlying concepts of nonequilibrium statistical mechanics, and of irreversible thermodynamics, will be described. The question at hand is then how these concepts are to be realized in computer simulations of many-particle systems. The answer will be given for dissipative deformation processes in solids, on three hierarchical levels: heterogeneous plastic flow, dislocation dynamics, and molecular dynamics. Application to the shock process will be discussed.

  19. Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames

    KAUST Repository

    Im, Hong G.

    2016-07-15

    Direct numerical simulations (DNS) of turbulent combustion have evolved tremendously in the past decades, thanks to the rapid advances in high performance computing technology. Today’s DNS is capable of incorporating detailed reaction mechanisms and transport properties of hydrocarbon fuels, with physical parameter ranges approaching laboratory scale flames, thereby allowing direct comparison and cross-validation against laser diagnostic measurements. While these developments have led to significantly improved understanding of fundamental turbulent flame characteristics, there are increasing demands to explore combustion regimes at higher levels of turbulent Reynolds (Re) and Karlovitz (Ka) numbers, with a practical interest in new combustion engines driving towards higher efficiencies and lower emissions. The article attempts to provide a brief overview of the state-of-the-art DNS of turbulent premixed flames at high Re/Ka conditions, with an emphasis on homogeneous and isotropic turbulent flow configurations. Some important qualitative findings from numerical studies are summarized, new analytical approaches to investigate intensely turbulent premixed flame dynamics are discussed, and topics for future research are suggested. © 2016 Taylor & Francis.

  20. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  1. Statistical crack mechanics

    International Nuclear Information System (INIS)

    Dienes, J.K.

    1993-01-01

    Although it is possible to simulate the ground blast from a single explosive shot with a simple computer algorithm and appropriate constants, the most commonly used modelling methods do not account for major changes in geology or shot energy because mechanical features such as tectonic stresses, fault structure, microcracking, brittle-ductile transition, and water content are not represented in significant detail. An alternative approach for modelling called Statistical Crack Mechanics is presented in this paper. This method, developed in the seventies as a part of the oil shale program, accounts for crack opening, shear, growth, and coalescence. Numerous photographs and micrographs show that shocked materials tend to involve arrays of planar cracks. The approach described here provides a way to account for microstructure and give a representation of the physical behavior of a material at the microscopic level that can account for phenomena such as permeability, fragmentation, shear banding, and hot-spot formation in explosives

  3. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrated from GPS / INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY), and it is shown how to set initial weights for the adjustment solution of single-image vanishing points. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Monte Carlo methods utilized for this random statistical estimation are also presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
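
    The Monte Carlo step can be sketched as follows: sample VX and VY from their error ellipses (2-D Gaussians), compute VZ from the orthocenter relation (for a zero-skew, square-pixel camera the principal point is the orthocenter of the vanishing-point triangle), and accumulate the VZ samples to estimate its error distribution. All numeric values below are invented for illustration.

    ```python
    # Hedged sketch: Monte Carlo propagation of the error ellipses of two
    # vanishing points to the third one, via the standard orthocenter
    # relation with a known principal point. Values are made up.
    import numpy as np

    rng = np.random.default_rng(7)

    P = np.array([640.0, 480.0])                   # principal point [px]
    VX0 = np.array([2500.0, 470.0])                # mean of VX
    VY0 = np.array([-900.0, 510.0])                # mean of VY
    CX = np.array([[400.0, 50.0], [50.0, 90.0]])   # error ellipse of VX (cov)
    CY = np.array([[250.0, -30.0], [-30.0, 60.0]])

    def third_vp(vx, vy, p):
        """Solve (p - vx).(vz - vy) = 0 and (p - vy).(vz - vx) = 0 for vz."""
        A = np.vstack([p - vx, p - vy])
        b = np.array([np.dot(p - vx, vy), np.dot(p - vy, vx)])
        return np.linalg.solve(A, b)

    samples = np.array([third_vp(rng.multivariate_normal(VX0, CX),
                                 rng.multivariate_normal(VY0, CY), P)
                        for _ in range(20000)])

    print("VZ mean:", samples.mean(axis=0))
    print("VZ covariance (error-ellipse matrix):\n", np.cov(samples.T))
    ```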

  4. Simulation and Analysis of a Grid Connected Multi-level Converter Topologies and their Comparison

    Directory of Open Access Journals (Sweden)

    Mohammad Shadab Mirza

    2014-09-01

    This paper presents the simulation and analysis of grid-connected multi-level converter topologies. The converter circuit works as an inverter by controlling the switching angle (α). The paper presents a MATLAB/SIMULINK model of two multi-level converter topologies (topology1 and topology2). Topology1 is without a transformer, while topology2 includes a transformer. Both topologies are simulated and analyzed for three-level converters in order to reduce the total harmonic distortion (THD). A comparative study of topology1 and topology2 is also presented for different switching angles (α) and battery voltages. The results have been tabulated and discussed.

  5. VA PTSD Statistics

    Data.gov (United States)

    Department of Veterans Affairs — National-level, VISN-level, and/or VAMC-level statistics on the numbers and percentages of users of VHA care form the Northeast Program Evaluation Center (NEPEC)....

  6. Computational Enhancements for Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames

    KAUST Repository

    Mukhadiyev, Nurzhan

    2017-05-01

    Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and uncertain field for researchers. Direct numerical simulation of a turbulent flame is a superior tool to unravel detailed information that is not accessible to the most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and the chemical complexity of the problems continue to increase. As a result, there is a strong demand for computational cost reduction methods as well as for the acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was the development of the KAUST Adaptive Reacting Flow Solver (KARFS). KARFS is a high-order compressible reacting flow solver using detailed chemical kinetic mechanisms; it is capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPUs and GPUs. The second part of this work concerned numerical tools for direct numerical simulations of planar premixed flames, such as linear turbulence forcing and dynamic inlet control. DNS of premixed turbulent flames conducted previously injected velocity fluctuations at an inlet. Turbulence injected at the inlet decayed significantly before reaching the flame, which created a necessity to inject higher than needed fluctuations. A solution for this issue was to maintain turbulence strength on the way to the flame using turbulence forcing. Therefore, a linear turbulence forcing was implemented into KARFS to enhance turbulence intensity. Linear turbulence forcing developed previously by other groups was corrected with a net added momentum removal mechanism to prevent mean

  7. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  8. On-the-fly confluence detection for statistical model checking (extended version)

    NARCIS (Netherlands)

    Hartmanns, Arnd; Timmer, Mark

    Statistical model checking is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can only provide sound results if the

  9. Development of capacitive sensor for automatically measuring tumbler water level with FEA simulation.

    Science.gov (United States)

    Wei, Qun; Kim, Mi-Jung; Lee, Jong-Ha

    2018-01-01

    Drinking water has several established advantages, such as improving blood circulation, reducing acid in the stomach, etc. However, because people do not notice the amount of water they consume each time they drink, most people drink less water than the recommended daily allowance. In this paper, a capacitive sensor for an automatic tumbler that measures water level is proposed. Unlike in previous studies, the proposed capacitive sensor is separated into two parts: the main sensor, which measures the water level in the tumbler, and the reference sensor, which measures the incremental level unit. In order to confirm the feasibility of the proposed idea and to optimize the shape of the sensor, a 3D model of the capacitive sensor with the tumbler was designed and subjected to Finite Element Analysis (FEA) simulation. Based on the simulation results, the electrodes were made of copper and assembled in a tumbler manufactured with a 3D printer. The tumbler was filled with water and subjected to experiments in order to assess the sensor's performance. The comparison of experimental results with simulation results shows that the measured capacitance of the sensor changed linearly as the water level varied, proving that the proposed sensor can accurately measure the water level in the tumbler. Additionally, a compensation algorithm matching the actual level with the measured level was obtained by the curve-fitting method. The experimental results proved that the proposed capacitive sensor is able to measure the actual water level in the tumbler accurately. A digital control part with a micro-processor will be designed and fixed to the bottom of the tumbler in order to develop a smart tumbler.
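
    The curve-fitting compensation amounts to fitting a mapping from measured capacitance to actual water level. The sketch below uses a low-order polynomial on invented calibration pairs; the paper's calibration data and chosen functional form are not reproduced here.

    ```python
    # Hedged sketch of the curve-fitting compensation step: fit a polynomial
    # mapping measured capacitance to actual water level. The pairs below
    # are invented for illustration.
    import numpy as np

    level_mm = np.array([10, 30, 50, 70, 90, 110, 130])                 # actual
    cap_pF = np.array([12.1, 14.0, 15.8, 17.9, 19.7, 21.8, 23.6])       # measured

    coeffs = np.polyfit(cap_pF, level_mm, deg=2)     # compensation polynomial
    compensate = np.poly1d(coeffs)

    reading = 18.5                                   # a new capacitance reading
    print(f"estimated level for {reading} pF: {compensate(reading):.1f} mm")
    print("max residual on calibration points:",
          np.abs(compensate(cap_pF) - level_mm).max().round(2), "mm")
    ```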

  10. Statistics of Local Extremes

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose

    2003-01-01

    A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is basically based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...

  11. A goodness of fit statistic for the geometric distribution

    NARCIS (Netherlands)

    J.A. Ferreira

    2003-01-01

    We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results

  12. Flipped Learning With Simulation in Undergraduate Nursing Education.

    Science.gov (United States)

    Kim, HeaRan; Jang, YounKyoung

    2017-06-01

    Flipped learning has proliferated in various educational environments. This study aimed to verify the effects of flipped learning on the academic achievement, teamwork skills, and satisfaction levels of undergraduate nursing students. For the flipped learning group, simulation-based education via the flipped learning method was provided, whereas traditional, simulation-based education was provided for the control group. After completion of the program, academic achievement, teamwork skills, and satisfaction levels were assessed and analyzed. The flipped learning group received higher scores on academic achievement, teamwork skills, and satisfaction levels than the control group, including the areas of content knowledge and clinical nursing practice competency. In addition, this difference gradually increased between the two groups throughout the trial. The results of this study demonstrated the positive, statistically significant effects of the flipped learning method on simulation-based nursing education. [J Nurs Educ. 2017;56(6):329-336.]. Copyright 2017, SLACK Incorporated.

  13. Multi-Level Simulated Fault Injection for Data Dependent Reliability Analysis of RTL Circuit Descriptions

    Directory of Open Access Journals (Sweden)

    NIMARA, S.

    2016-02-01

    This paper proposes a data-dependent reliability evaluation methodology for digital systems described at Register Transfer Level (RTL). It uses a hybrid hierarchical approach, combining the accuracy provided by Gate Level (GL) Simulated Fault Injection (SFI) with the low simulation overhead required by RTL fault injection. The methodology comprises the following steps: correct simulation of the RTL system according to a set of input vectors, hierarchical decomposition of the system into basic RTL blocks, logic synthesis of the basic RTL blocks, data-dependent SFI for the GL netlists, and RTL SFI. The proposed methodology has been validated in terms of accuracy on a medium-sized circuit, the parallel comparator used in the Check Node Unit (CNU) of Low-Density Parity-Check (LDPC) decoders. The methodology has been applied to the reliability analysis of a 128-bit Advanced Encryption Standard (AES) crypto-core, for which GL simulation was prohibitive in terms of required computational resources.

  14. MULTI-LEVEL SAMPLING APPROACH FOR CONTINOUS LOSS DETECTION USING ITERATIVE WINDOW AND STATISTICAL MODEL

    OpenAIRE

    Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani

    2010-01-01

    This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
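
    Windowed self-similarity monitoring of the kind described above can be illustrated with the classic aggregated-variance Hurst estimator, which is not the paper's Optimization Method but conveys the idea: estimate H inside each window and flag LoSS when it drops below a threshold. The trace, window size, and threshold below are assumptions.

    ```python
    # Hedged sketch: windowed Hurst estimation with the aggregated-variance
    # method (Var(X^(m)) ~ m^(2H-2)), flagging LoSS below an assumed
    # threshold. The stand-in trace is plain white noise (H ~ 0.5).
    import numpy as np

    rng = np.random.default_rng(11)

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
        """Aggregated-variance Hurst estimate from block means."""
        vars_, ms = [], []
        for m in scales:
            blocks = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
            vars_.append(blocks.var())
            ms.append(m)
        slope = np.polyfit(np.log(ms), np.log(vars_), 1)[0]
        return 1.0 + slope / 2.0

    trace = rng.standard_normal(50000)        # stand-in traffic trace
    window, threshold = 5000, 0.65            # assumed monitoring parameters
    for start in range(0, len(trace) - window + 1, window):
        h = hurst_aggvar(trace[start:start + window])
        status = "LoSS" if h < threshold else "self-similar"
        print(f"window at {start:6d}: H = {h:.2f} -> {status}")
    ```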

  15. Laboratory simulation of high-level liquid waste evaporation and storage

    International Nuclear Information System (INIS)

    Anderson, P.A.

    1978-01-01

    The reprocessing of nuclear fuel generates high-level liquid wastes (HLLW) which require interim storage pending solidification. Interim storage facilities are most efficient if the HLLW is evaporated prior to or during the storage period. Laboratory evaporation and storage studies with simulated waste slurries have yielded data which are applicable to the efficient design and economical operation of actual process equipment

  16. Process informed accurate compact modelling of 14-nm FinFET variability and application to statistical 6T-SRAM simulations

    OpenAIRE

    Wang, Xingsheng; Reid, Dave; Wang, Liping; Millar, Campbell; Burenkov, Alex; Evanschitzky, Peter; Baer, Eberhard; Lorenz, Juergen; Asenov, Asen

    2016-01-01

    This paper presents a TCAD based design technology co-optimization (DTCO) process for 14nm SOI FinFET based SRAM, which employs an enhanced variability aware compact modeling approach that fully takes process and lithography simulations and their impact on 6T-SRAM layout into account. Realistic double patterned gates and fins and their impacts are taken into account in the development of the variability-aware compact model. Finally, global process induced variability and local statistical var...

  17. Efficient Simulation Modeling of an Integrated High-Level-Waste Processing Complex

    International Nuclear Information System (INIS)

    Gregory, Michael V.; Paul, Pran K.

    2000-01-01

    An integrated computational tool named the Production Planning Model (ProdMod) has been developed to simulate the operation of the entire high-level-waste complex (HLW) at the Savannah River Site (SRS) over its full life cycle. ProdMod is used to guide SRS management in operating the waste complex in an economically efficient and environmentally sound manner. SRS HLW operations are modeled using coupled algebraic equations. The dynamic nature of plant processes is modeled in the form of a linear construct in which the time dependence is implicit. Batch processes are modeled in discrete event-space, while continuous processes are modeled in time-space. The ProdMod methodology maps between event-space and time-space such that the inherent mathematical discontinuities in batch process simulation are avoided without sacrificing any of the necessary detail in the batch recipe steps. Modeling the processes separately in event- and time-space using linear constructs, and then coupling the two spaces, has accelerated the speed of simulation compared to a typical dynamic simulation. The ProdMod simulator models have been validated against operating data and other computer codes. Case studies have demonstrated the usefulness of the ProdMod simulator in developing strategies that demonstrate significant cost savings in operating the SRS HLW complex and in verifying the feasibility of newly proposed processes

  18. Simulation of Groundwater-Level and Salinity Changes in the Eastern Shore, Virginia

    Science.gov (United States)

    Sanford, Ward E.; Pope, Jason P.; Nelms, David L.

    2009-01-01

    Groundwater-level and salinity changes have been simulated with a groundwater model developed and calibrated for the Eastern Shore of Virginia. The Eastern Shore is the southern part of the Delmarva Peninsula that is occupied by Accomack and Northampton Counties in Virginia. Groundwater is the sole source of freshwater to the Eastern Shore, and demands for water have been increasing from domestic, industrial, agricultural, and public-supply sectors of the economy. Thus, it is important that the groundwater supply be protected from overextraction and seawater intrusion. The best way for water managers to use all of the information available is usually to compile this information into a numerical model that can simulate the response of the system to current and future stresses. A detailed description of the geology, hydrogeology, and historical groundwater extractions was compiled and entered into the numerical model. The hydrogeologic framework is composed of a surficial aquifer under unconfined conditions, a set of three aquifers and associated overlying confining units under confined conditions (the upper, middle, and lower Yorktown-Eastover Formation), and an underlying confining unit (the St. Marys Formation). An estimate of the location and depths of two major paleochannels was also included in the framework of the model. Total withdrawals from industrial, commercial, public-supply, and some agricultural wells were compiled from the period 1900 through 2003. Reported pumpage from these sources increased dramatically during the 1960s and 70s, up to currently about 4 million gallons per day. Domestic withdrawals were estimated on the basis of population census districts and were assigned spatially to the model on the assumption that domestic users are located close to roads. A numerical model was created using the U.S. Geological Survey (USGS) code SEAWAT to simulate both water levels and concentrations of chloride (representing salinity). The model was

  19. Business Statistics Education: Content and Software in Undergraduate Business Statistics Courses.

    Science.gov (United States)

    Tabatabai, Manouchehr; Gamble, Ralph

    1997-01-01

    Survey responses from 204 of 500 business schools identified most often topics in business statistics I and II courses. The most popular software at both levels was Minitab. Most schools required both statistics I and II. (SK)

  20. Statistical adjustment of simulated climate: the example of the seasonal rainfall of tropical America.

    Science.gov (United States)

    Moron, Vincent; Navarra, Antonio

    2000-05-01

    This study presents the skill of the seasonal rainfall of tropical America from an ensemble of three 34-year general circulation model (ECHAM 4) simulations forced with observed sea surface temperature between 1961 and 1994. The skill gives a first idea of the amount of potential predictability if the sea surface temperatures are perfectly known some time in advance. We use statistical post-processing based on the leading modes (extracted from Singular Value Decomposition of the covariance matrix between observed and simulated rainfall fields) to improve the raw skill obtained by simple comparison between observations and simulations. It is shown that 36-55 % of the observed seasonal variability is explained by the simulations on a regional basis. Skill is greatest for Brazilian Nordeste (March-May), but also for northern South America or the Caribbean basin in June-September or northern Amazonia in September-November for example.

  1. Design Techniques and Reservoir Simulation

    Directory of Open Access Journals (Sweden)

    Ahad Fereidooni

    2012-11-01

    Enhanced oil recovery using nitrogen injection is a commonly applied method for pressure maintenance in conventional reservoirs. Numerical simulations can be used to predict reservoir performance in the course of the injection process; however, a detailed simulation might take up enormous computer processing time. In such cases, a simple statistical model may be a good approach for a preliminary prediction of the process without any application of numerical simulation. In the current work, seven rock/fluid reservoir properties are considered as screening parameters, and those parameters having the most considerable effect on the process are determined using a combination of experimental design techniques and reservoir simulations. The statistical significance of the main effects and interactions of the screening parameters is then analyzed using statistical inference approaches. Finally, the influential parameters are employed to create a simple statistical model that allows the preliminary prediction of nitrogen injection, in terms of a recovery factor, without resorting to numerical simulations.
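
    The screening step rests on standard two-level factorial design ideas: run the simulator at coded low/high levels of each factor and estimate main effects from differences of means. The sketch below does this for an invented three-factor subset and a stand-in response function; the paper screens seven rock/fluid properties against a simulated recovery factor.

    ```python
    # Hedged sketch: a two-level full factorial design with main effects
    # estimated from a stand-in response. Factors and response are invented.
    import itertools
    import numpy as np

    factors = ["porosity", "permeability", "initial_pressure"]   # assumed subset
    levels = [-1, +1]                                            # coded levels

    def recovery_factor(x):
        """Stand-in for one reservoir-simulation run (coded factors in, RF out)."""
        por, perm, pres = x
        return 0.30 + 0.04 * perm + 0.02 * pres + 0.01 * por * perm

    design = np.array(list(itertools.product(levels, repeat=len(factors))))
    response = np.array([recovery_factor(run) for run in design])

    # main effect of a factor = mean response at +1 minus mean response at -1
    for j, name in enumerate(factors):
        effect = (response[design[:, j] == 1].mean()
                  - response[design[:, j] == -1].mean())
        print(f"main effect of {name:16s}: {effect:+.4f}")
    ```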

  2. Using Direct Sub-Level Entity Access to Improve Nuclear Stockpile Simulation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Robert Y. [Brigham Young Univ., Provo, UT (United States)

    1999-08-01

    Direct sub-level entity access is a seldom-used technique in discrete-event simulation modeling that addresses the accessibility of sub-level entity information. The technique has significant advantages over more common, alternative modeling methods--especially where hierarchical entity structures are modeled. As such, direct sub-level entity access is often preferable in modeling nuclear stockpile, life-extension issues, an area to which it has not been previously applied. Current nuclear stockpile, life-extension models were demonstrated to benefit greatly from the advantages of direct sub-level entity access. In specific cases, the application of the technique resulted in models that were up to 10 times faster than functionally equivalent models where alternative techniques were applied. Furthermore, specific implementations of direct sub-level entity access were observed to be more flexible, efficient, functional, and scalable than corresponding implementations using common modeling techniques. Common modeling techniques ("unbatch/batch" and "attribute-copying") proved inefficient and cumbersome in handling many nuclear stockpile modeling complexities, including multiple weapon sites, true defect analysis, and large numbers of weapon and subsystem types. While significant effort was required to enable direct sub-level entity access in the nuclear stockpile simulation models, the enhancements were worth the effort--resulting in more efficient, more capable, and more informative models that effectively addressed the complexities of the nuclear stockpile.

  3. Morphological representation of order-statistics filters.

    Science.gov (United States)

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived under which morphological openings and closings serve as lower and upper bounds, respectively, on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as tighter lower and upper bounds, respectively, on iterations of order-statistics filters. Simulations of the application of these results to image restoration are finally provided.
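
    The ordering in question can be checked numerically with standard grayscale morphology. For a flat window, erosion and dilation always bound the median filter; whether the opening and closing also bound it is exactly what the derived conditions govern, so the sketch below simply reports what happens on a random test image.

    ```python
    # Hedged numerical check: compare a median filter with grayscale
    # erosion/dilation (always bounds for a flat window) and with
    # opening/closing (bounds only under the paper's conditions).
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(5)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    size = (3, 3)                                  # flat structuring element

    med = ndimage.median_filter(img, size=size)
    ero = ndimage.grey_erosion(img, size=size)
    dil = ndimage.grey_dilation(img, size=size)
    opn = ndimage.grey_opening(img, size=size)
    clo = ndimage.grey_closing(img, size=size)

    print("erosion <= median <= dilation everywhere:",
          bool(np.all(ero <= med) and np.all(med <= dil)))
    print("opening <= median on this image:", bool(np.all(opn <= med)))
    print("median <= closing on this image:", bool(np.all(med <= clo)))
    ```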

  4. Likert scales, levels of measurement and the "laws" of statistics.

    Science.gov (United States)

    Norman, Geoff

    2010-12-01

    Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, and correlation is faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments and show that many studies, dating back to the 1930s, consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
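
    The robustness argument is easy to reproduce by simulation: draw two groups of 5-point Likert responses from the same (non-normal) distribution, apply a t-test, and check that the empirical type I error stays near the nominal level. The response distribution and sample size below are assumptions for illustration.

    ```python
    # Hedged simulation in the spirit of the paper's argument: a t-test on
    # ordinal Likert data drawn from identical populations should reject at
    # roughly the nominal 5% rate despite the violated assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2024)
    categories = np.array([1, 2, 3, 4, 5])
    probs = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # skewed, non-normal
    n_per_group, n_sims, alpha = 30, 5000, 0.05

    false_positives = 0
    for _ in range(n_sims):
        a = rng.choice(categories, size=n_per_group, p=probs)
        b = rng.choice(categories, size=n_per_group, p=probs)   # same population
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1

    print(f"empirical type I error: {false_positives / n_sims:.3f} "
          f"(nominal {alpha})")
    ```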

  5. Conducting Simulation Studies in the R Programming Environment.

    Science.gov (United States)

    Hallgren, Kevin A

    2013-10-12

    Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
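
    The paper's examples are in R; purely as an analogous illustration of the same idea (estimating statistical power by repeated simulated experiments), the sketch below is given in Python, with effect size, sample size, and alpha chosen arbitrarily.

    ```python
    # Hedged sketch: estimate the power of a two-sample t-test by simulation.
    # Parameters are assumptions for illustration, not the paper's examples.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    effect_size, n_per_group, alpha, n_sims = 0.5, 40, 0.05, 5000

    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < alpha:
            rejections += 1

    print(f"estimated power: {rejections / n_sims:.3f}")
    ```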

  6. A formal statistical approach to representing uncertainty in rainfall-runoff modelling with focus on residual analysis and probabilistic output evaluation - Distinguishing simulation and prediction

    DEFF Research Database (Denmark)

    Breinholt, Anders; Møller, Jan Kloppenborg; Madsen, Henrik

    2012-01-01

    While there seems to be consensus that hydrological model outputs should be accompanied by an uncertainty estimate, the appropriate method for uncertainty estimation is not agreed upon, and a debate is ongoing between advocates of formal statistical methods, who consider errors as stochastic … and GLUE advocates, who consider errors as epistemic, arguing that the basis of formal statistical approaches, which requires the residuals to be stationary and to conform to a statistical distribution, is unrealistic. In this paper we take a formal frequentist approach to parameter estimation and uncertainty … necessary, but the statistical assumptions were nevertheless not 100% justified. The residual analysis showed that significant autocorrelation was present for all simulation models. We believe users of formal approaches to uncertainty evaluation within hydrology, and within environmental modelling in general …
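
    The residual-analysis step can be sketched by computing the autocorrelation of residuals and comparing it with the approximate white-noise band of +/- 1.96/sqrt(N). The autocorrelated residual series below is synthetic and only stands in for simulation-model residuals.

    ```python
    # Hedged sketch: check model residuals for significant autocorrelation
    # against the approximate 95% white-noise band. The AR(1)-like residual
    # series is synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(8)
    n, phi = 1000, 0.6
    res = np.zeros(n)
    for t in range(1, n):                    # synthetic autocorrelated residuals
        res[t] = phi * res[t - 1] + rng.standard_normal()

    def acf(x, max_lag):
        x = x - x.mean()
        denom = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / denom
                         for k in range(1, max_lag + 1)])

    band = 1.96 / np.sqrt(n)
    for lag, r in enumerate(acf(res, 5), start=1):
        flag = "significant" if abs(r) > band else "ok"
        print(f"lag {lag}: acf = {r:+.3f} ({flag}, band +/-{band:.3f})")
    ```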

  7. Statistical aspects of nuclear structure

    International Nuclear Information System (INIS)

    Parikh, J.C.

    1977-01-01

    The statistical properties of energy levels and a statistical approach to transition strengths are discussed in relation to nuclear structure studies at high excitation energies. It is shown that the calculations can also be extended to the ground-state domain. The discussion is based on the study of the random-matrix theory of level density and level spacings, using the Gaussian Orthogonal Ensemble (GOE) concept. Short-range and long-range correlations are also studied statistically. The polynomial expansion method is used to obtain excitation strengths. (A.K.)
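
    The GOE ingredient can be illustrated by sampling random real symmetric matrices and comparing the nearest-neighbour level-spacing histogram with the Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4). The matrix size, sample count, and crude unfolding below are arbitrary choices, not the paper's calculation.

    ```python
    # Hedged sketch: GOE level-spacing statistics versus the Wigner surmise.
    import numpy as np

    rng = np.random.default_rng(4)
    N, n_matrices = 200, 100

    spacings = []
    for _ in range(n_matrices):
        A = rng.standard_normal((N, N))
        H = (A + A.T) / 2.0                      # real symmetric (GOE) matrix
        ev = np.sort(np.linalg.eigvalsh(H))
        s = np.diff(ev[N // 4: 3 * N // 4])      # central part of the spectrum
        spacings.extend(s / s.mean())            # crude unfolding to mean spacing 1

    spacings = np.array(spacings)
    hist, edges = np.histogram(spacings, bins=20, range=(0, 3), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    wigner = (np.pi / 2.0) * centers * np.exp(-np.pi * centers**2 / 4.0)
    print("max |histogram - Wigner surmise|:",
          np.abs(hist - wigner).max().round(3))
    ```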

  8. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, an informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure, the Codon Deviation Coefficient (CDC), that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  9. CutL: an alternative to Kulldorff's scan statistics for cluster detection with a specified cut-off level.

    Science.gov (United States)

    Więckowska, Barbara; Marcinkowska, Justyna

    2017-11-06

    When searching for epidemiological clusters, an important tool can be to carry out one's own research with the incidence rate from the literature as the reference level. Values exceeding this level may indicate the presence of a cluster in that location. This paper presents a method of searching for clusters that have significantly higher incidence rates than a level specified by the investigator. The proposed method uses the classic binomial exact test for one proportion and an algorithm that joins areas with potential clusters while reducing the number of multiple comparisons needed. The new method preserves sensitivity and specificity while avoiding the Monte Carlo approach, and it delivers results comparable to the commonly used Kulldorff's scan statistics and other similar methods of localising clusters. A strong contributing factor is the accompanying statistical software, which allows the results to be analysed and presented cartographically.
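
    The core test is the exact one-proportion binomial test of an area's case count against a reference incidence rate from the literature. The sketch below shows that single test; the counts and reference rate are invented, and the paper's area-joining algorithm is not reproduced.

    ```python
    # Hedged sketch: exact one-proportion binomial test of an area's observed
    # case count against a reference incidence rate. Values are invented.
    from scipy.stats import binomtest

    reference_rate = 0.002          # assumed incidence rate from the literature
    cases, population = 19, 6000    # observed cases in one candidate area

    result = binomtest(cases, n=population, p=reference_rate,
                       alternative="greater")
    print(f"observed rate: {cases / population:.4f}, reference: {reference_rate}")
    print(f"one-sided exact p-value: {result.pvalue:.4f}")
    ```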

  10. High performance cellular level agent-based simulation with FLAME for the GPU.

    Science.gov (United States)

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular-level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but also avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has demonstrated massive improvements in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  11. Lagged Associations of Metropolitan Statistical Area- and State-Level Income Inequality with Cognitive Function: The Health and Retirement Study.

    Science.gov (United States)

    Kim, Daniel; Griffin, Beth Ann; Kabeto, Mohammed; Escarce, José; Langa, Kenneth M; Shih, Regina A

    2016-01-01

    Much variation in individual-level cognitive function in late life remains unexplained, with little exploration of area-level/contextual factors to date. Income inequality is a contextual factor that may plausibly influence cognitive function. In a nationally-representative cohort of older Americans from the Health and Retirement Study, we examined state- and metropolitan statistical area (MSA)-level income inequality as predictors of individual-level cognitive function measured by the 27-point Telephone Interview for Cognitive Status (TICS-m) scale. We modeled latency periods of 8-20 years, and controlled for state-/metropolitan statistical area (MSA)-level and individual-level factors. Higher MSA-level income inequality predicted lower cognitive function 16-18 years later. Using a 16-year lag, living in a MSA in the highest income inequality quartile predicted a 0.9-point lower TICS-m score (β = -0.86; 95% CI = -1.41, -0.31), roughly equivalent to the magnitude associated with five years of aging. We observed no associations for state-level income inequality. The findings were robust to sensitivity analyses using propensity score methods. Among older Americans, MSA-level income inequality appears to influence cognitive function nearly two decades later. Policies reducing income inequality levels within cities may help address the growing burden of declining cognitive function among older populations within the United States.

  12. Statistical assessment of the 137Cs levels of the Chernihiv oblast's milk

    International Nuclear Information System (INIS)

    Lev, T.D.; Zakhutska, O.M.

    2004-01-01

    The article deals with research directed at overcoming the consequences of the Chornobyl accident on the territory of Ukraine. The use of the log-normal distribution law to evaluate results of 137Cs milk contamination measurements is considered. The object of the study was the critical farms of Chernihiv oblast, where agreement criteria for assessing the primary data on milk contamination were applied. An algorithm was applied to calculate observed and forecast frequencies of the contamination gradations at each stage of the statistical processing of milk samples contaminated with 137Cs. The results of the milk contamination analysis at a later stage (1991-2001) are described by the log-normal distribution law, which can be used for forecasting in subsequent years. The maximum repeatability of the contaminated-milk gradations (from 10 to 40 Bq/l) is determined for the observed and forecast frequencies of the contamination levels. It is proposed that the results of the study be used when taking measures directed at diminishing the levels of 137Cs contamination of agricultural products.
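
    A minimal sketch of the kind of analysis described above: a log-normal distribution is fitted to milk 137Cs activity measurements and the observed gradation frequencies are compared with those predicted by the fit. The activity values and gradation edges below are invented for illustration and do not come from the Chernihiv dataset.

        import numpy as np
        from scipy import stats

        # Hypothetical 137Cs activities in milk samples (Bq/l)
        rng = np.random.default_rng(0)
        activities = rng.lognormal(mean=np.log(25.0), sigma=0.6, size=200)

        # Fit a two-parameter log-normal (location fixed at zero)
        shape, loc, scale = stats.lognorm.fit(activities, floc=0)

        # Compare observed and fitted frequencies over contamination gradations (Bq/l)
        edges = np.array([0, 10, 20, 40, 80, np.inf])
        observed, _ = np.histogram(activities, bins=edges)
        cdf = stats.lognorm.cdf(edges, shape, loc=loc, scale=scale)
        expected = len(activities) * np.diff(cdf)

        for lo, hi, o, e in zip(edges[:-1], edges[1:], observed, expected):
            print(f"{lo:>4.0f}-{hi:<4.0f} Bq/l: observed {o:3d}, fitted {e:6.1f}")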

  13. Conducting Simulation Studies in the R Programming Environment

    Directory of Open Access Journals (Sweden)

    Kevin A. Hallgren

    2013-10-01

    Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
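
    The paper's own examples are written in R (not reproduced here); as an illustration of the same idea of Monte Carlo power estimation (example b above), the following Python sketch estimates the power of a two-sample t-test. The effect size, sample size, and iteration count are arbitrary choices.

        import numpy as np
        from scipy import stats

        def simulated_power(n_per_group=30, effect_size=0.5, alpha=0.05, n_sims=5000, seed=1):
            """Estimate the power of a two-sample t-test by repeated simulation."""
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_sims):
                group_a = rng.normal(0.0, 1.0, n_per_group)
                group_b = rng.normal(effect_size, 1.0, n_per_group)
                _, p = stats.ttest_ind(group_a, group_b)
                rejections += p < alpha
            return rejections / n_sims

        print(f"Estimated power: {simulated_power():.3f}")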

  14. Low dose CT simulation using experimental noise model

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Satori; Zamyatin, Alexander A. [Toshiba Medical Systems Corporation, Tochigi, Otawarashi (Japan); Silver, Michael D. [Toshiba Medical Research Institute, Vernon Hills, IL (United States)

    2011-07-01

    We suggest a method to obtain a system noise model experimentally, without relying on assumptions about the statistical distribution of the noise; knowledge of the DAS gain and the electronic noise level is also not required. Evaluation with ultra-low-dose CT data (5 mAs) shows a good match between the noise in simulated and real data. (orig.)

  15. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Hasib, Ahmed; The ATLAS collaboration

    2017-01-01

    Producing the very large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Run-1, a fast calorimeter simulation (FastCaloSim) was successfully used in ATLAS. FastCaloSim provides a simulation of the particle energy response at the calorimeter read-out cell level, taking into account the detailed particle shower shapes and the correlations between the energy depositions in the various calorimeter layers. It is interfaced to the standard ATLAS digitization and reconstruction software, and it can be tuned to data more easily than GEANT4. Now an improved version of FastCaloSim is in development, incorporating the experience with the version used during Run-1. The new FastCaloSim makes use of statistical techniques such as principal component analysis, and a neural n...

  16. A simulation study comparing aberration detection algorithms for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Painter Ian

    2007-03-01

    Background: The usefulness of syndromic surveillance for early outbreak detection depends in part on effective statistical aberration detection. However, few published studies have compared different detection algorithms on identical data. In the largest simulation study conducted to date, we compared the performance of six aberration detection algorithms on simulated outbreaks superimposed on authentic syndromic surveillance data. Methods: We compared three control-chart-based statistics, two exponential weighted moving averages, and a generalized linear model. We simulated 310 unique outbreak signals, and added these to actual daily counts of four syndromes monitored by Public Health – Seattle and King County's syndromic surveillance system. We compared the sensitivity of the six algorithms at detecting these simulated outbreaks at a fixed alert rate of 0.01. Results: Stratified by baseline or by outbreak distribution, duration, or size, the generalized linear model was more sensitive than the other algorithms and detected 54% (95% CI = 52%–56%) of the simulated epidemics when run at an alert rate of 0.01. However, all of the algorithms had poor sensitivity, particularly for outbreaks that did not begin with a surge of cases. Conclusion: When tested on county-level data aggregated across age groups, these algorithms often did not perform well in detecting signals other than large, rapid increases in case counts relative to baseline levels.
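
    One of the algorithm families compared above, the exponentially weighted moving average (EWMA), can be sketched in a few lines; the smoothing constant, control limit, baseline window, and injected outbreak below are illustrative choices, not the study's exact configuration.

        import numpy as np

        def ewma_alerts(counts, lam=0.4, threshold=3.0, baseline=56):
            """Flag days where the EWMA of daily counts exceeds a control limit
            derived from a trailing baseline window (mean + threshold * EWMA standard error)."""
            counts = np.asarray(counts, dtype=float)
            alerts = []
            z = counts[:baseline].mean()          # initialise the EWMA at the baseline mean
            for t in range(baseline, len(counts)):
                history = counts[t - baseline:t]
                mu, sigma = history.mean(), history.std(ddof=1)
                z = lam * counts[t] + (1 - lam) * z
                # Asymptotic EWMA standard error, assuming independent daily counts
                se = sigma * np.sqrt(lam / (2 - lam))
                if z > mu + threshold * se:
                    alerts.append(t)
            return alerts

        rng = np.random.default_rng(42)
        daily = rng.poisson(20, 200)
        daily[150:157] += np.array([5, 10, 15, 20, 15, 10, 5])   # injected outbreak signal
        print("Alert days:", ewma_alerts(daily))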

  17. Statistical methods and their applications in constructional engineering

    International Nuclear Information System (INIS)

    1977-01-01

    An introduction to the basic terms of statistics is followed by a discussion of elements of probability theory, the customary discrete and continuous distributions, simulation methods, the statistical dynamics of supporting structures, and a cost-benefit analysis of the methods introduced. (RW) [de]

  18. Statistical Engineering in Air Traffic Management Research

    Science.gov (United States)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  19. A goodness of fit statistic for the geometric distribution

    OpenAIRE

    Ferreira, J.A.

    2003-01-01

    We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results suggest that the test based on the new statistic is generally superior to the chi-square test.
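
    For context, the chi-square benchmark mentioned above can be sketched as follows for a sample of counts; the proposed Lau-Rao-based statistic itself is not reproduced here, and the sample, cell grouping, and parameter estimate are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        sample = rng.geometric(p=0.3, size=500)        # hypothetical data, support {1, 2, ...}

        # Estimate p by maximum likelihood: p_hat = 1 / mean
        p_hat = 1.0 / sample.mean()

        # Group the support into cells, pooling the right tail so expected counts stay reasonable
        cells = [1, 2, 3, 4, 5]                        # last cell is ">= 5"
        observed = [np.sum(sample == k) for k in cells[:-1]] + [np.sum(sample >= cells[-1])]
        probs = [stats.geom.pmf(k, p_hat) for k in cells[:-1]] + [stats.geom.sf(cells[-1] - 1, p_hat)]
        expected = np.array(probs) * len(sample)

        chi2 = np.sum((np.array(observed) - expected) ** 2 / expected)
        dof = len(cells) - 1 - 1                       # one parameter was estimated from the data
        print(f"chi2 = {chi2:.2f}, p-value = {stats.chi2.sf(chi2, dof):.3f}")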

  20. Sound statistical model checking for MDP using partial order and confluence reduction

    NARCIS (Netherlands)

    Hartmanns, Arnd; Timmer, Mark

    Statistical model checking (SMC) is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can in general only provide sound

  1. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Directory of Open Access Journals (Sweden)

    Anita Lindmark

    When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations, while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical

  2. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Science.gov (United States)

    Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie

    2016-01-01

    When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when
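
    A minimal sketch of the benchmarking rule described in these two records: a hospital is flagged as outlying if a one-sided test indicates, at the chosen confidence level, that its risk exceeds a benchmark expressed relative to the population risk. Case-mix adjustment is omitted here, and the risks, confidence level, and hospital counts are invented, so this is an illustration of the decision rule rather than the authors' procedure.

        from scipy.stats import binomtest

        population_risk = 0.25
        benchmark = 1.3 * population_risk        # clinically relevant deviation: 30% above population risk
        confidence = 0.80                        # a lower-than-95% confidence level, as discussed above

        hospitals = {                            # hypothetical hospital -> (events, patients)
            "Hospital 1": (70, 220),
            "Hospital 2": (95, 260),
            "Hospital 3": (30, 90),
        }

        for name, (events, n) in hospitals.items():
            # One-sided exact binomial test of the hospital risk against the benchmark value
            test = binomtest(events, n, p=benchmark, alternative="greater")
            outlying = test.pvalue < (1 - confidence)
            print(f"{name}: risk={events / n:.3f}, p={test.pvalue:.3f}, outlying={outlying}")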

  3. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    Science.gov (United States)

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for 14-3-3β dynamic interaction network and prostate cancer glycoproteome. The software was written in C++ language and the source code is available for free through SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
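
    The first workflow step above, normalization of fragment-level intensities by total intensity sums, can be sketched as follows. This is a simplified stand-in rather than mapDIA's C++ implementation, and the small intensity matrix is invented.

        import numpy as np

        # Hypothetical fragment-by-sample intensity matrix (rows: fragments, columns: samples)
        intensities = np.array([
            [1200.0,  950.0, 1430.0],
            [ 310.0,  280.0,  400.0],
            [5600.0, 4900.0, 6100.0],
        ])

        # Normalize by total intensity sums: scale every sample to the average column total,
        # then move to the log2 scale, as is common before differential expression analysis
        column_totals = intensities.sum(axis=0)
        scaled = intensities * (column_totals.mean() / column_totals)
        log2_normalized = np.log2(scaled)

        print(np.round(log2_normalized, 3))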

  4. Topology for Statistical Modeling of Petascale Data

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Bremer, P. -T. [Univ. of Utah, Salt Lake City, UT (United States)

    2013-10-31

    Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team involving all three institutions is based on the complementary techniques of combinatorial topology and statistical modelling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modelling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modelling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our 3 groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project; the Texas A&M group develops new algebraic geometry algorithms, in particular with fewnomial theory; and the Utah group develops new algorithms in computational topology via Discrete Morse Theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every 2 weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions that had greater direct involvement from the team at the University of Utah in Salt Lake City.

  5. All of statistics a concise course in statistical inference

    CERN Document Server

    Wasserman, Larry

    2004-01-01

    This book is for people who want to learn probability and statistics quickly. It brings together many of the main ideas in modern statistics in one place. The book is suitable for students and researchers in statistics, computer science, data mining and machine learning. This book covers a much wider range of topics than a typical introductory text on mathematical statistics. It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses. The reader is assumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. The text can be used at the advanced undergraduate and graduate level. Larry Wasserman is Professor of Statistics at Carnegie Mellon University. He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science. His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bi...

  6. Visuanimation in statistics

    KAUST Repository

    Genton, Marc G.

    2015-04-14

    This paper explores the use of visualization through animations, coined visuanimation, in the field of statistics. In particular, it illustrates the embedding of animations in the paper itself and the storage of larger movies in the online supplemental material. We present results from statistics research projects using a variety of visuanimations, ranging from exploratory data analysis of image data sets to spatio-temporal extreme event modelling; these include a multiscale analysis of classification methods, the study of the effects of a simulated explosive volcanic eruption and an emulation of climate model output. This paper serves as an illustration of visuanimation for future publications in Stat. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Workload and cortisol levels in helicopter combat pilots during simulated flights

    Directory of Open Access Journals (Sweden)

    A. García-Mas

    2016-03-01

    Conclusions: Salivary cortisol levels and workload are in the range usual for stress situations, and they change inversely: workload increases at the end of the task, whereas cortisol levels decrease after the simulated flight. Somatic anxiety decreases as the task progresses. In contrast, when the pilots face new and demanding tasks, even if they already fly this type of helicopter under different conditions, workload increases toward the end of the task. From an applied point of view, these findings should inform the tactical, physical and mental training of such pilots.

  8. Trauma Simulation Training Increases Confidence Levels in Prehospital Personnel Performing Life-Saving Interventions in Trauma Patients

    Directory of Open Access Journals (Sweden)

    Christine M. Van Dillen

    2016-01-01

    Introduction. Limited evidence is available on simulation training of prehospital care providers, specifically the use of tourniquets and needle decompression. This study focused on whether the confidence level of prehospital personnel performing these skills improved through simulation training. Methods. Prehospital personnel from Alachua County Fire Rescue were enrolled in the study over a 2- to 3-week period based on their availability. Two scenarios were presented to them: a motorcycle crash resulting in a leg amputation requiring a tourniquet and an intoxicated patient with a stab wound, who experienced tension pneumothorax requiring needle decompression. Crews were asked to rate their confidence levels before and after exposure to the scenarios. Timing of the simulation interventions was compared with actual scene times to determine applicability of simulation in measuring the efficiency of prehospital personnel. Results. Results were collected from 129 participants. Pre- and postexposure scores increased by a mean of 1.15 (SD 1.32; 95% CI, 0.88–1.42; P<0.001). Comparison of actual scene times with simulated scene times yielded a 1.39-fold difference (95% CI, 1.25–1.55) for Scenario 1 and 1.59 times longer for Scenario 2 (95% CI, 1.43–1.77). Conclusion. Simulation training improved prehospital care providers’ confidence level in performing two life-saving procedures.

  9. A look at the links between drainage density and flood statistics

    Directory of Open Access Journals (Sweden)

    A. Montanari

    2009-07-01

    We investigate the links between the drainage density of a river basin and selected flood statistics, namely, mean, standard deviation, coefficient of variation and coefficient of skewness of annual maximum series of peak flows. The investigation is carried out through a three-stage analysis. First, a numerical simulation is performed by using a spatially distributed hydrological model in order to highlight how flood statistics change with varying drainage density. Second, a conceptual hydrological model is used in order to analytically derive the dependence of flood statistics on drainage density. Third, real world data from 44 watersheds located in northern Italy were analysed. The three-level analysis seems to suggest that a critical value of the drainage density exists for which a minimum is attained in both the coefficient of variation and the absolute value of the skewness coefficient. Such minima in the flood statistics correspond to a minimum of the flood quantile for a given exceedance probability (i.e., recurrence interval). Therefore, the results of this study may provide useful indications for flood risk assessment in ungauged basins.
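
    For reference, the flood statistics listed above (mean, standard deviation, coefficient of variation and coefficient of skewness of an annual maximum series) can be computed as in this minimal sketch; the peak-flow values are invented.

        import numpy as np
        from scipy import stats

        # Hypothetical annual maximum series of peak flows (m^3/s), one value per year
        annual_maxima = np.array([112.0, 95.5, 240.3, 180.1, 77.9, 150.6, 301.2, 132.4, 98.7, 210.0])

        mean = annual_maxima.mean()
        std = annual_maxima.std(ddof=1)                 # sample standard deviation
        cv = std / mean                                 # coefficient of variation
        skew = stats.skew(annual_maxima, bias=False)    # sample coefficient of skewness

        print(f"mean={mean:.1f}, std={std:.1f}, CV={cv:.3f}, skewness={skew:.3f}")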

  10. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing any unexpected machine breakdown and reducing economic loss, because gear crack leads to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identification of different gear crack levels under different working conditions is proposed. First, superhigh-dimensional statistical features are extracted from continuous wavelet transform at different scales. The number of statistical features extracted by the proposed method is 920, so the extracted statistical features are superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method called principal component analysis is used. To further improve identification accuracies of different gear crack levels under different working conditions, support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.
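
    The PCA-plus-SVM classification stage described above can be illustrated with a minimal scikit-learn sketch. The 920-dimensional feature matrix and crack-level labels here are random stand-ins rather than the paper's wavelet-based features, so the reported accuracy is near chance; the point is the pipeline structure, not the result.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical data: 300 samples with 920 statistical features, 4 gear crack levels
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 920))
        y = rng.integers(0, 4, size=300)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        # Standardize, project onto leading principal components, then classify with an SVM
        model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
        model.fit(X_train, y_train)
        print(f"Test accuracy: {model.score(X_test, y_test):.3f}")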

  11. Turbulence in collisionless plasmas: statistical analysis from numerical simulations with pressure anisotropy

    Energy Technology Data Exchange (ETDEWEB)

    Kowal, G [Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Rua do Matao 1226, 05508-900, Sao Paulo (Brazil); Falceta-Goncalves, D A; Lazarian, A, E-mail: kowal@astro.iag.usp.br [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States)

    2011-05-15

    In recent years, we have experienced increasing interest in the understanding of the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium (ICM)) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent the pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations and solving double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low laws closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that the CGL-MHD subsonic and supersonic turbulences show small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of magnetic lines. The results indicate that in some cases the instabilities significantly increase the anisotropy of

  12. Turbulence in collisionless plasmas: statistical analysis from numerical simulations with pressure anisotropy

    International Nuclear Information System (INIS)

    Kowal, G; Falceta-Goncalves, D A; Lazarian, A

    2011-01-01

    In recent years, we have experienced increasing interest in the understanding of the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium (ICM)) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent the pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations and solving double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low laws closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that the CGL-MHD subsonic and supersonic turbulences show small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of magnetic lines. The results indicate that in some cases the instabilities significantly increase the anisotropy of

  13. Integrating community-based verbal autopsy into civil registration and vital statistics (CRVS): system-level considerations

    Science.gov (United States)

    de Savigny, Don; Riley, Ian; Chandramohan, Daniel; Odhiambo, Frank; Nichols, Erin; Notzon, Sam; AbouZahr, Carla; Mitra, Raj; Cobos Muñoz, Daniel; Firth, Sonja; Maire, Nicolas; Sankoh, Osman; Bronson, Gay; Setel, Philip; Byass, Peter; Jakob, Robert; Boerma, Ties; Lopez, Alan D.

    2017-01-01

    ABSTRACT Background: Reliable and representative cause of death (COD) statistics are essential to inform public health policy, respond to emerging health needs, and document progress towards Sustainable Development Goals. However, less than one-third of deaths worldwide are assigned a cause. Civil registration and vital statistics (CRVS) systems in low- and lower-middle-income countries are failing to provide timely, complete and accurate vital statistics, and it will still be some time before they can provide physician-certified COD for every death. Proposals: Verbal autopsy (VA) is a method to ascertain the probable COD and, although imperfect, it is the best alternative in the absence of medical certification. There is extensive experience with VA in research settings but only a few examples of its use on a large scale. Data collection using electronic questionnaires on mobile devices and computer algorithms to analyse responses and estimate probable COD have increased the potential for VA to be routinely applied in CRVS systems. However, a number of CRVS and health system integration issues should be considered in planning, piloting and implementing a system-wide intervention such as VA. These include addressing the multiplicity of stakeholders and sub-systems involved, integration with existing CRVS work processes and information flows, linking VA results to civil registration records, information technology requirements and data quality assurance. Conclusions: Integrating VA within CRVS systems is not simply a technical undertaking. It will have profound system-wide effects that should be carefully considered when planning for an effective implementation. This paper identifies and discusses the major system-level issues and emerging practices, provides a planning checklist of system-level considerations and proposes an overview for how VA can be integrated into routine CRVS systems. PMID:28137194

  14. Statistical-Dynamical Seasonal Forecasts of Central-Southwest Asian Winter Precipitation.

    Science.gov (United States)

    Tippett, Michael K.; Goddard, Lisa; Barnston, Anthony G.

    2005-06-01

    Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region's interannual precipitation variability. The statistical-dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo-west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical-dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December-March precipitation forecasts initiated in October. This period includes 4 yr (winter of 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical-dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03. May Kabul be without gold, but not without snow.—Traditional Afghan proverb
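
    Tercile probability forecasts of the kind mentioned above can be produced, under a Gaussian assumption, from an ensemble-mean forecast and a forecast-error estimate. The sketch below uses invented anomaly values and is not the authors' exact procedure.

        import numpy as np
        from scipy import stats

        # Hypothetical standardized precipitation anomalies defining the climatological terciles
        climatology = np.array([-1.4, -0.9, -0.3, -0.1, 0.0, 0.2, 0.4, 0.7, 1.1, 1.6])
        lower, upper = np.quantile(climatology, [1 / 3, 2 / 3])

        # Hypothetical ensemble-mean forecast and forecast-error standard deviation
        forecast_mean, forecast_error_sd = -0.6, 0.8
        dist = stats.norm(loc=forecast_mean, scale=forecast_error_sd)

        p_below = dist.cdf(lower)
        p_above = dist.sf(upper)
        p_normal = 1.0 - p_below - p_above
        print(f"P(below)={p_below:.2f}, P(normal)={p_normal:.2f}, P(above)={p_above:.2f}")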

  15. Using statistical model to simulate the impact of climate change on maize yield with climate and crop uncertainties

    Science.gov (United States)

    Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining

    2017-11-01

    Assessment of the impact of climate change on crop production, with uncertainties taken into account, is essential for properly identifying and deciding on agricultural practices that are sustainable. In this study, we employed 24 climate projections, consisting of the combinations of eight GCMs and three emission scenarios and representing climate projection uncertainty, together with two statistical crop models with 100 parameter sets each, representing parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. The multi-model ensemble method is an effective way to deal with these uncertainties. The results of the ensemble simulations showed that maize yield reductions were less than 5% in both future periods relative to the baseline. To further understand the contributions of individual sources of uncertainty, such as climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. Increased ensemble yield variance revealed increasing uncertainty in the yield simulations for the future periods.
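
    A minimal sketch of the kind of variance decomposition described above: the total variance of simulated yield changes is split into a between-climate-projection component and a within-projection (crop-parameter) component, using the law of total variance with equal group sizes. The yield array is randomly generated for illustration and does not reproduce the study's simulations.

        import numpy as np

        # Hypothetical yield changes (%) for 24 climate projections x 100 crop-parameter sets
        rng = np.random.default_rng(3)
        climate_effect = rng.normal(0.0, 4.0, size=(24, 1))        # projection-to-projection spread
        parameter_effect = rng.normal(0.0, 1.5, size=(24, 100))    # crop-parameter spread
        yields = -2.0 + climate_effect + parameter_effect

        total_var = yields.var()
        between_climate = yields.mean(axis=1).var()                 # variance of projection means
        within_climate = yields.var(axis=1).mean()                  # mean variance across parameter sets

        print(f"Share from climate projections: {between_climate / total_var:.2f}")
        print(f"Share from crop model parameters: {within_climate / total_var:.2f}")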

  16. Quasi-monte carlo simulation and variance reduction techniques substantially reduce computational requirements of patient-level simulation models: An application to a discrete event simulation model

    NARCIS (Netherlands)

    Treur, M.; Postma, M.

    2014-01-01

    Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, the computational requirements of reaching convergence are a notorious barrier. The objective was to assess the impact of using

  17. Tensoral for post-processing users and simulation authors

    Science.gov (United States)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  18. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  19. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  20. A pedagogical approach to the Boltzmann factor through experiments and simulations

    International Nuclear Information System (INIS)

    Battaglia, O R; Bonura, A; Sperandeo-Mineo, R M

    2009-01-01

    The Boltzmann factor is the basis of a huge amount of thermodynamic and statistical physics, both classical and quantum. It governs the behaviour of all systems in nature that are exchanging energy with their environment. To understand why the expression has this specific form involves a deep mathematical analysis, whose flow of logic is hard to see and is not at the level of high school or college students' preparation. We here present some experiments and simulations aimed at directly deriving its mathematical expression and illustrating the fundamental concepts on which it is grounded. Experiments use easily available apparatuses, and simulations are developed in the Net-Logo environment that, besides having a user-friendly interface, allows an easy interaction with the algorithm. The approach supplies pedagogical support for the introduction of the Boltzmann factor at the undergraduate level to students without a background in statistical mechanics.

  1. A pedagogical approach to the Boltzmann factor through experiments and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Battaglia, O R; Bonura, A; Sperandeo-Mineo, R M [University of Palermo Physics Education Research Group, Dipartimento di Fisica e Tecnologie Relative, Universita di Palermo (Italy)], E-mail: sperandeo@difter.unipa.it

    2009-09-15

    The Boltzmann factor is the basis of a huge amount of thermodynamic and statistical physics, both classical and quantum. It governs the behaviour of all systems in nature that are exchanging energy with their environment. To understand why the expression has this specific form involves a deep mathematical analysis, whose flow of logic is hard to see and is not at the level of high school or college students' preparation. We here present some experiments and simulations aimed at directly deriving its mathematical expression and illustrating the fundamental concepts on which it is grounded. Experiments use easily available apparatuses, and simulations are developed in the Net-Logo environment that, besides having a user-friendly interface, allows an easy interaction with the algorithm. The approach supplies pedagogical support for the introduction of the Boltzmann factor at the undergraduate level to students without a background in statistical mechanics.
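
    The simulations described in these two records are developed in the NetLogo environment; as a rough illustration of the same idea, the following Python sketch lets oscillators exchange indivisible energy quanta at random and shows that the resulting single-oscillator energy histogram falls off approximately as the Boltzmann factor exp(-E/kT), with kT of the order of the mean energy per oscillator. All parameter choices are arbitrary, and this is not the authors' model.

        import numpy as np

        # Einstein-solid style toy model: N oscillators share Q indivisible energy quanta
        N, Q, steps = 1000, 3000, 200_000
        rng = np.random.default_rng(0)
        energy = np.full(N, Q // N)

        for _ in range(steps):
            donor, acceptor = rng.integers(0, N, size=2)
            if energy[donor] > 0:                 # move one quantum from donor to acceptor
                energy[donor] -= 1
                energy[acceptor] += 1

        # The occupation of energy level E should fall off roughly as exp(-E / <E>)
        levels, counts = np.unique(energy, return_counts=True)
        for e, c in zip(levels[:8], counts[:8]):
            print(f"E={e}: fraction={c / N:.3f}")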

  2. Energy statistics manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    Detailed, complete, timely and reliable statistics are essential to monitor the energy situation at a country level as well as at an international level. Energy statistics on supply, trade, stocks, transformation and demand are indeed the basis for any sound energy policy decision. For instance, the market of oil -- which is the largest traded commodity worldwide -- needs to be closely monitored in order for all market players to know at any time what is produced, traded, stocked and consumed and by whom. In view of the role and importance of energy in world development, one would expect that basic energy information to be readily available and reliable. This is not always the case and one can even observe a decline in the quality, coverage and timeliness of energy statistics over the last few years.

  3. 8th International Workshop on Simulation

    CERN Document Server

    Rasch, Dieter; Melas, Viatcheslav; Moder, Karl; Statistics and simulation

    2018-01-01

    This volume features original contributions and invited review articles on mathematical statistics, statistical simulation and experimental design. The selected peer-reviewed contributions originate from the 8th International Workshop on Simulation held in Vienna in 2015. The book is intended for mathematical statisticians, Ph.D. students and statisticians working in medicine, engineering, pharmacy, psychology, agriculture and other related fields. The International Workshops on Simulation are devoted to statistical techniques in stochastic simulation, data collection, design of scientific experiments and studies representing broad areas of interest. The first 6 workshops took place in St. Petersburg, Russia, in 1994 – 2009 and the 7th workshop was held in Rimini, Italy, in 2013.

  4. Simulation of aerosol flow interaction with a solid body on molecular level

    Science.gov (United States)

    Amelyushkin, Ivan A.; Stasenko, Albert L.

    2018-05-01

    Physico-mathematical models and a numerical algorithm for the interaction of a two-phase flow with a solid body are developed. Results of molecular-level simulation, via the molecular dynamics technique, of droplet motion and droplet impingement upon a rough surface in a real-gas boundary layer are presented.

  5. The Cosmogrid simulation: Statistical properties of small dark matter halos

    NARCIS (Netherlands)

    Ishiyama, T.; Rieder, S.; Makino, J.; Portegies Zwart, S.; Groen, D.; Nitadori, K.; de Laat, C.; McMillan, S.; Hiraki, K.; Harfst, S.

    2013-01-01

    We present the results of the "Cosmogrid" cosmological N-body simulation suites based on the concordance LCDM model. The Cosmogrid simulation was performed in a 30 Mpc box with 2048^3 particles. The mass of each particle is 1.28 × 10^5 M⊙, which is sufficient to resolve ultra-faint dwarfs. We found

  6. Practical statistics in pain research.

    Science.gov (United States)

    Kim, Tae Kyun

    2017-10-01

    Pain is subjective, while the statistics used in pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues concern the levels of measurement scales often used in pain research, the choice between parametric and nonparametric statistical methods, and problems arising from repeated measurements. In the field of pain research, parametric statistics have often been applied erroneously. This is closely related to the scales of the data and to repeated measurements. Levels of measurement include nominal, ordinal, interval, and ratio scales, and the level of the scale affects the choice between parametric and non-parametric methods. In pain research, the most frequently used pain assessment scale is the ordinal scale, which includes the visual analogue scale (VAS). Another view, however, considers the VAS to be an interval or ratio scale, so that the use of parametric statistics can be accepted in practice in some cases. Repeated measurements on the same subjects always complicate the statistics: the measurements are inevitably correlated with each other, which precludes the application of one-way ANOVA, for which independence between measurements is necessary. Repeated-measures ANOVA (RM ANOVA), however, permits comparison between correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.

  7. A case study: application of statistical process control tool for determining process capability and sigma level.

    Science.gov (United States)

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities such as Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005 provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, the forecasting of critical quality attributes, sigma process capability, and process stability were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles for achievement of six sigma-capable processes is possible. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical
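
    The capability and sigma-level calculations mentioned above follow standard formulas (Cp = (USL - LSL) / 6σ, Cpk = min(USL - μ, μ - LSL) / 3σ). The sketch below applies them to invented tablet-weight data and specification limits, not the study's measurements, and uses the common convention that the short-term sigma level is 3 × Cpk.

        import numpy as np

        # Hypothetical tablet weights (mg) with specification limits
        rng = np.random.default_rng(5)
        weights = rng.normal(loc=250.0, scale=2.5, size=120)
        lsl, usl = 242.0, 258.0

        mu, sigma = weights.mean(), weights.std(ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        sigma_level = 3 * cpk          # short-term sigma level implied by Cpk

        print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, sigma level≈{sigma_level:.2f}")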

  8. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis.Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance.Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  9. Alcohol Facts and Statistics

    Science.gov (United States)


  10. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  11. Development of Simulants to Support Mixing Tests for High Level Waste and Low Activity Waste

    International Nuclear Information System (INIS)

    EIBLING, RUSSELLE.

    2004-01-01

    The objectives of this study were to develop two different types of simulants to support vendor agitator design studies and mixing studies. The initial simulant development task was to develop rheologically bounding physical simulants, and the final portion was to develop a nominal chemical simulant designed to match, as closely as possible, the actual sludge from a tank. The physical simulants to be developed included lower and upper rheologically bounding versions of a pretreated low activity waste (LAW) physical simulant, a LAW melter feed physical simulant, a pretreated high level waste (HLW) physical simulant, and an HLW melter feed physical simulant. The nominal chemical simulant, hereafter referred to as the HLW Precipitated Hydroxide simulant, is designed to represent the chemical/physical composition of the actual washed and leached sludge sample. The objective was to produce a simulant which matches not only the chemical composition but also the physical properties of the actual waste sample. The HLW Precipitated Hydroxide simulant could then be used for mixing tests to validate mixing, homogeneity and representative sampling and transferring issues. The HLW Precipitated Hydroxide simulant may also be used for integrated nonradioactive testing of the WTP prior to radioactive operation.

  12. Characterization of statistical prior image constrained compressed sensing (PICCS): II. Application to dose reduction

    International Nuclear Information System (INIS)

    Lauzier, Pascal Thériault; Chen Guanghong

    2013-01-01

    Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In the numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as the one studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo-point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness.

  13. Uncertainty of simulated groundwater levels arising from stochastic transient climate change scenarios

    Science.gov (United States)

    Goderniaux, Pascal; Brouyère, Serge; Blenkinsop, Stephen; Burton, Aidan; Fowler, Hayley; Dassargues, Alain

    2010-05-01

    applied not only to the mean of climatic variables, but also across the statistical distributions of these variables. This is important as these distributions are expected to change in the future, with more extreme rainfall events separated by longer dry periods. (2) The novel approach used in this study can simulate transient climate change from 2010 to 2085, rather than time series representative of a stationary climate for the period 2071-2100. (3) The weather generator is used to generate a large number of equiprobable climate change scenarios for each RCM, representative of the natural variability of the weather. All of these scenarios are applied as input to the Geer basin model to assess the projected impact of climate change on groundwater levels, the uncertainty arising from different RCM projections, and the uncertainty linked to natural climatic variability. Using the output results from all scenarios, 95% confidence intervals are calculated for each year and month between 2010 and 2085. The climate change scenarios for the Geer basin model predict hotter and drier summers and warmer and wetter winters. Considering the results of this study, it is very likely that groundwater levels and surface flow rates in the Geer basin will decrease by the end of the century. This is of concern because it also means that the groundwater quantities available for abstraction will decrease. However, this study also shows that the uncertainty of these projections is relatively large compared to the projected changes, so that it remains difficult to confidently determine the magnitude of the decrease. The combination of an integrated surface-subsurface model with stochastic climate change scenarios had not been used in previous climate change impact studies on groundwater resources; it constitutes an innovation and an important tool for helping water managers take decisions.
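
    The 95% confidence intervals described above come from pooling many equiprobable scenario runs and taking percentiles per time step. A minimal sketch of that pooling step is given below on synthetic data; the trend, spread, and variable names are placeholders, not output of the Geer basin model.

        # Percentile-based 95% confidence band from an ensemble of equiprobable
        # simulations (synthetic data; not the Geer basin model output).
        import numpy as np

        rng = np.random.default_rng(42)
        years = np.arange(2010, 2086)
        n_scenarios = 30

        # Synthetic ensemble: a declining trend plus scenario-to-scenario variability.
        trend = 100.0 - 0.05 * (years - 2010)
        ensemble = trend + rng.normal(0.0, 1.5, size=(n_scenarios, years.size))

        lower = np.percentile(ensemble, 2.5, axis=0)
        median = np.percentile(ensemble, 50.0, axis=0)
        upper = np.percentile(ensemble, 97.5, axis=0)

        for y, lo, med, hi in list(zip(years, lower, median, upper))[:5]:
            print(f"{y}: {lo:.1f} <= level <= {hi:.1f} (median {med:.1f})")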

  14. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Manuel Perez Malumbres

    2013-02-01

    Full Text Available In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.).
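
    The log-normal fit and log-distance mean described above can be sketched as follows on synthetic data: in dB the loss is then Gaussian, so the mean and variance per distance and the log-distance slope can be estimated directly. The spreading factor, reference loss and noise level below are illustrative placeholders.

        # Fit a log-normal transmission-loss model: Gaussian statistics in dB plus a
        # log-distance mean trend (synthetic ensemble; parameters are illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        distances = np.array([100.0, 200.0, 500.0, 1000.0, 2000.0])   # metres
        k, tl0, sigma_db = 1.7, 40.0, 3.0   # spreading factor, reference loss, spread (dB)

        # Ensemble of transmission losses in dB for 500 simulated environments.
        mean_db = tl0 + 10.0 * k * np.log10(distances)
        samples_db = mean_db + rng.normal(0.0, sigma_db, size=(500, distances.size))

        # Log-normal in linear units means Gaussian in dB: estimate mean/std per distance.
        est_mean = samples_db.mean(axis=0)
        est_std = samples_db.std(axis=0, ddof=1)

        # Recover the log-distance slope of the mean by least squares.
        slope, intercept = np.polyfit(np.log10(distances), est_mean, 1)
        print("estimated spreading factor:", slope / 10.0)
        print("per-distance std (dB):", np.round(est_std, 2))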

  15. Statistical distribution of components of energy eigenfunctions: from nearly-integrable to chaotic

    International Nuclear Information System (INIS)

    Wang, Jiaozi; Wang, Wen-ge

    2016-01-01

    We study the statistical distribution of components in the non-perturbative parts of energy eigenfunctions (EFs), in which the main bodies of the EFs lie. Our numerical simulations in five models show that the deviation of the distribution from the prediction of random matrix theory (RMT) is useful in characterizing the process from nearly-integrable to chaotic, in a way somewhat similar to the nearest-level-spacing distribution. However, the statistics of EFs reveal some further properties, as described below. (i) In the process of approaching quantum chaos, the distribution of components shows a delay feature compared with the nearest-level-spacing distribution in most of the models studied. (ii) In the quantum chaotic regime, the distribution of components always shows a small but notable deviation from the prediction of RMT in models possessing classical counterparts, while the deviation can be almost negligible in models not possessing classical counterparts. (iii) In models whose Hamiltonian matrices possess a clear band structure, tails of EFs show statistical behaviors obviously different from those in the main bodies, while the difference is smaller for Hamiltonian matrices without a clear band structure.

  16. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case

  17. Nonequilibrium statistical physics a modern perspective

    CERN Document Server

    Livi, Roberto

    2017-01-01

    Statistical mechanics has been proven to be successful at describing physical systems at thermodynamic equilibrium. Since most natural phenomena occur in nonequilibrium conditions, the present challenge is to find suitable physical approaches for such conditions: this book provides a pedagogical pathway that explores various perspectives. The use of clear language, and explanatory figures and diagrams to describe models, simulations and experimental findings makes the book a valuable resource for undergraduate and graduate students, and also for lecturers organizing teaching at varying levels of experience in the field. Written in three parts, it covers basic and traditional concepts of nonequilibrium physics, modern aspects concerning nonequilibrium phase transitions, and application-orientated topics from a modern perspective. A broad range of topics is covered, including Langevin equations, Levy processes, directed percolation, kinetic roughening and pattern formation.

  18. Evaluation of image quality and radiation dose by adaptive statistical iterative reconstruction technique level for chest CT examination.

    Science.gov (United States)

    Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol

    2013-12-01

    The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in a chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at ASIR levels of 0-50 %, and the noise power spectrum (NPS), signal, noise, and the distortion metrics peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE) were measured. In addition, the objectivity of the experiment was assessed using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, the resolution, latitude and degree of distortion of five lesions of the chest phantom were evaluated and the results compiled statistically. The NPS value decreased as the frequency increased. The lowest noise and deviation occurred at the 20 % ASIR level (mean 126.15 ± 22.21). In the distortion assessment, the signal-to-noise ratio and PSNR were highest at the 20 % ASIR level, at 31.0 and 41.52, while the maximum absolute error and RMSE showed their lowest values, 11.2 and 16. In the ACR phantom study, all ASIR levels were within the acceptable allowance of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five chest-phantom lesions, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level thus proved best in all experiments: noise and distortion evaluation using ImageJ and qualitative evaluation of five lesions of a chest phantom. Therefore, optimal images as well as a reduced radiation dose would be obtained when a 20 % ASIR level is applied in thoracic CT.
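
    The PSNR and RMSE figures quoted above follow the standard definitions; a generic sketch of how such values are computed from a reference and a reconstructed image is shown below (random test images, not the chest-phantom data or the authors' ImageJ workflow).

        # Generic PSNR and RMSE between a reference image and a reconstruction.
        import numpy as np

        def rmse(reference, test):
            diff = reference.astype(float) - test.astype(float)
            return float(np.sqrt(np.mean(diff ** 2)))

        def psnr(reference, test, max_value=255.0):
            err = rmse(reference, test)
            return float("inf") if err == 0 else 20.0 * np.log10(max_value / err)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ref = rng.integers(0, 256, size=(256, 256))
            noisy = np.clip(ref + rng.normal(0.0, 10.0, size=ref.shape), 0, 255)
            print(f"RMSE = {rmse(ref, noisy):.2f}, PSNR = {psnr(ref, noisy):.2f} dB")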

  19. Low-level tank waste simulant data base

    International Nuclear Information System (INIS)

    Lokken, R.O.

    1996-04-01

    The majority of defense wastes generated from reprocessing spent N-Reactor fuel at Hanford are stored in underground double-shell tanks (DST) and in older single-shell tanks (SST) in the form of liquids, slurries, sludges, and salt cakes. The Tank Waste Remediation System (TWRS) Program has the responsibility of safely managing and immobilizing these tank wastes for disposal. This report discusses three principal topics: the need for and basis for selecting target or reference LLW simulants; tank waste analyses and simulants that have been defined, developed, and used for the GDP; and activities in support of preparing and characterizing simulants for the current LLW vitrification project. The procedures and the data that were generated to characterize the LLW vitrification simulants are presented in this report. The final section of this report addresses the applicability of the data to the current program and presents recommendations for additional data needs, including characterization and simulant compositional variability studies.

  20. SEMICONDUCTOR INTEGRATED CIRCUITS: A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Science.gov (United States)

    Jizhi, Liu; Xingbi, Chen

    2009-12-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, requiring no high-end computing hardware, and being easy to operate.

  1. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator can serve as an accurate prediction tool for MEMS fabrication.

  2. Model for neural signaling leap statistics

    International Nuclear Information System (INIS)

    Chevrollier, Martine; Oria, Marcos

    2011-01-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awake regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.
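
    The qualitative difference between the two regimes can be illustrated with a generic sampling sketch: Gaussian leap distances produce essentially no long-range events, while a heavy-tailed (Lévy-like) distribution produces rare but non-negligible long jumps. The sampler below is generic and is not the authors' thermodynamic model.

        # Contrast of Gaussian vs heavy-tailed (Levy-like) leap distances.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000

        gaussian_leaps = np.abs(rng.normal(0.0, 1.0, n))      # "awake" regime analogue
        heavy_tailed_leaps = np.abs(rng.standard_cauchy(n))   # heavy-tailed analogue

        for name, leaps in [("gaussian", gaussian_leaps), ("heavy-tailed", heavy_tailed_leaps)]:
            frac_long = np.mean(leaps > 10.0)                 # rare long-range connections
            print(f"{name}: fraction of leaps > 10 = {frac_long:.5f}")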

  3. Model for neural signaling leap statistics

    Science.gov (United States)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awake regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  4. Research Update: Spatially resolved mapping of electronic structure on atomic level by multivariate statistical analysis

    International Nuclear Information System (INIS)

    Belianinov, Alex; Ganesh, Panchapakesan; Lin, Wenzhi; Jesse, Stephen; Pan, Minghu; Kalinin, Sergei V.; Sales, Brian C.; Sefat, Athena S.

    2014-01-01

    Atomic level spatial variability of electronic structure in the Fe-based superconductor FeTe0.55Se0.45 (Tc = 15 K) is explored using current-imaging tunneling-spectroscopy. Multivariate statistical analysis of the data differentiates regions of dissimilar electronic behavior that can be identified with the segregation of chalcogen atoms, as well as boundaries between terminations and near neighbor interactions. Subsequent clustering analysis allows identification of the spatial localization of these dissimilar regions. Similar statistical analysis of the model-calculated density of states of chemically inhomogeneous FeTe1−xSex structures further confirms that the two types of chalcogens, i.e., Te and Se, can be identified by their electronic signature and differentiated by their local chemical environment. This approach allows detailed chemical discrimination of the scanning tunneling microscopy data, including separation of atomic identities, proximity, and local configuration effects, and can be universally applicable to chemically and electronically inhomogeneous surfaces.
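
    The multivariate workflow described above (dimensionality reduction followed by clustering of per-pixel spectra) can be sketched generically with scikit-learn; the synthetic two-component spectra below stand in for the tunneling-spectroscopy data and are not the FeTe/Se measurements.

        # PCA followed by k-means clustering of per-pixel spectra (synthetic data).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        n_pixels, n_energies = 1024, 200

        # Two synthetic "electronic environments" with different spectral shapes.
        energy = np.linspace(-1.0, 1.0, n_energies)
        shape_a = np.exp(-(energy - 0.2) ** 2 / 0.05)
        shape_b = np.exp(-(energy + 0.2) ** 2 / 0.05)
        labels_true = rng.integers(0, 2, n_pixels)
        spectra = np.where(labels_true[:, None] == 0, shape_a, shape_b)
        spectra = spectra + rng.normal(0.0, 0.1, size=spectra.shape)

        scores = PCA(n_components=5).fit_transform(spectra)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
        agreement = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
        print(f"cluster/ground-truth agreement: {agreement:.2f}")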

  5. Random matrix theory of the energy-level statistics of disordered systems at the Anderson transition

    International Nuclear Information System (INIS)

    Canali, C.M.

    1995-09-01

    We consider a family of random matrix ensembles (RME) invariant under similarity transformations and described by the probability density P(H) ∝ exp[−Tr V(H)]. Dyson's mean field theory (MFT) of the corresponding plasma model of eigenvalues is generalized to the case of a weak confining potential, V(ε) ∼ (A/2) ln²(ε). The eigenvalue statistics derived from MFT are shown to deviate substantially from the classical Wigner-Dyson statistics when the confinement is weak. At A = A_c ≈ 0.4 the distribution function of the level spacings (LSDF) coincides in a large energy window with the energy LSDF of the three-dimensional Anderson model at the metal-insulator transition. For the same A = A_c, the RME eigenvalue-number variance is linear and its slope is equal to 0.32 ± 0.02, which is consistent with the value found for the Anderson model at the critical point. (author). 51 refs, 10 figs
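
    For readers who want to reproduce the kind of level-spacing statistics referred to here, the sketch below computes the nearest-neighbour spacing distribution of a GOE-like random matrix and compares it with the Wigner surmise and the Poisson law. It uses a standard strongly confined Gaussian ensemble and a crude unfolding, not the weakly confined ensemble of the paper.

        # Nearest-neighbour level-spacing statistics of a GOE-like matrix,
        # compared with the Wigner surmise and the Poisson law.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2.0                        # real symmetric (GOE-like) matrix

        eig = np.sort(np.linalg.eigvalsh(h))
        bulk = eig[n // 4: 3 * n // 4]             # keep the central part of the spectrum
        s = np.diff(bulk) / np.diff(bulk).mean()   # crude unfolding: mean spacing -> 1

        hist, edges = np.histogram(s, bins=20, range=(0.0, 3.0), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        wigner = (np.pi / 2.0) * centers * np.exp(-np.pi * centers ** 2 / 4.0)
        poisson = np.exp(-centers)

        for c, emp, w, p in list(zip(centers, hist, wigner, poisson))[:6]:
            print(f"s = {c:.2f}  empirical = {emp:.3f}  Wigner = {w:.3f}  Poisson = {p:.3f}")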

  6. Age related neuromuscular changes in sEMG of m. Tibialis Anterior using higher order statistics (Gaussianity & linearity test).

    Science.gov (United States)

    Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K

    2016-08-01

    Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared it with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) subjects performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α = 0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of a 40% loss of motor units combined with a halved number of fast fibers correlated best with the age-related change observed in the higher order statistical features of the experimental sEMG. The simulated aging condition found by this study corresponds to the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.

  7. Noise-level determination for discrete spectra with Gaussian or Lorentzian probability density functions

    International Nuclear Information System (INIS)

    Moriya, Netzer

    2010-01-01

    A method, based on binomial filtering, to estimate the noise level of an arbitrary, smoothed pure signal, contaminated with an additive, uncorrelated noise component is presented. If the noise characteristics of the experimental spectrum are known, as for instance the type of the corresponding probability density function (e.g., Gaussian), the noise properties can be extracted. In such cases, both the noise level, as may arbitrarily be defined, and a simulated white noise component can be generated, such that the simulated noise component is statistically indistinguishable from the true noise component present in the original signal. In this paper we present a detailed analysis of the noise level extraction when the additive noise is Gaussian or Lorentzian. We show that the statistical parameters in these cases (mainly the variance and the half width at half maximum, respectively) can directly be obtained from the experimental spectrum even when the pure signal is erratic. Further discussion is given for cases where the noise probability density function is initially unknown.
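
    A rough feel for this kind of noise-level extraction can be had from a simpler difference-based estimator: for a smooth pure signal, successive differences of the contaminated spectrum are dominated by the noise. The sketch below uses this generic estimator, not the binomial-filter method of the paper, and the test signal is synthetic.

        # Estimate the standard deviation of additive Gaussian noise on a smooth
        # spectrum from first differences (generic estimator, synthetic data).
        import numpy as np

        rng = np.random.default_rng(5)
        x = np.linspace(0.0, 10.0, 2000)
        pure = np.exp(-(x - 5.0) ** 2 / 0.5) + 0.3 * np.sin(x)   # smooth pure signal
        sigma_true = 0.05
        noisy = pure + rng.normal(0.0, sigma_true, x.size)

        # For a smooth signal, Var(y[i+1] - y[i]) is approximately 2 * sigma^2.
        sigma_est = np.std(np.diff(noisy), ddof=1) / np.sqrt(2.0)
        print(f"true sigma = {sigma_true}, estimated sigma = {sigma_est:.4f}")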

  8. MODELING PLANETARY SYSTEM FORMATION WITH N-BODY SIMULATIONS: ROLE OF GAS DISK AND STATISTICS COMPARED TO OBSERVATIONS

    International Nuclear Information System (INIS)

    Liu Huigen; Zhou Jilin; Wang Su

    2011-01-01

    During the late stage of planet formation, when Mars-sized cores appear, interactions among planetary cores can excite their orbital eccentricities, accelerate their merging, and thus sculpt their final orbital architecture. This study contributes to the final assembling of planetary systems with N-body simulations, including the type I or II migration of planets and gas accretion of massive cores in a viscous disk. Statistics on the final distributions of planetary masses, semimajor axes, and eccentricities are derived and are comparable to those of the observed systems. Our simulations predict some new orbital signatures of planetary systems around solar mass stars: 36% of the surviving planets are giant planets (>10 M⊕). Most of the massive giant planets (>30 M⊕) are located at 1-10 AU. Terrestrial planets are distributed more or less evenly at J in highly eccentric orbits (e > 0.3-0.4). The average eccentricity (∼0.15) of the giant planets (>10 M⊕) is greater than that (∼0.05) of the terrestrial planets (<10 M⊕).

  9. Fermi-level effects in semiconductor processing: A modeling scheme for atomistic kinetic Monte Carlo simulators

    Science.gov (United States)

    Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.

    2005-09-01

    Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.

  10. Alternative Chemical Cleaning Methods for High Level Waste Tanks: Simulant Studies

    Energy Technology Data Exchange (ETDEWEB)

    Rudisill, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); King, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hay, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-11-19

    Solubility testing with simulated High Level Waste tank heel solids has been conducted in order to evaluate two alternative chemical cleaning technologies for the dissolution of sludge residuals remaining in the tanks after the exhaustion of mechanical cleaning and sludge washing efforts. Tests were conducted with non-radioactive pure phase metal reagents, binary mixtures of reagents, and a Savannah River Site PUREX heel simulant to determine the effectiveness of an optimized, dilute oxalic/nitric acid cleaning reagent and pure, dilute nitric acid toward dissolving the bulk non-radioactive waste components. A focus of this testing was on minimization of oxalic acid additions during tank cleaning. For comparison purposes, separate samples were also contacted with pure, concentrated oxalic acid which is the current baseline chemical cleaning reagent. In a separate study, solubility tests were conducted with radioactive tank heel simulants using acidic and caustic permanganate-based methods focused on the “targeted” dissolution of actinide species known to be drivers for Savannah River Site tank closure Performance Assessments. Permanganate-based cleaning methods were evaluated prior to and after oxalic acid contact.

  11. Work in process level definition: a method based on computer simulation and electre tri

    Directory of Open Access Journals (Sweden)

    Isaac Pergher

    2014-09-01

    Full Text Available This paper proposes a method for defining the levels of work in progress (WIP) in productive environments managed by constant work in process (CONWIP) policies. The proposed method combines the approaches of Computer Simulation and Electre TRI to support estimation of the adequate level of WIP and is presented in eighteen steps. The paper also presents an application example, performed on a metalworking company. The research method is based on Computer Simulation, supported by quantitative data analysis. The main contribution of the paper is its provision of a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.

  12. A dynamic simulation model of the Savannah River Site high level waste complex

    International Nuclear Information System (INIS)

    Gregory, M.V.; Aull, J.E.; Dimenna, R.A.

    1994-01-01

    A detailed, dynamic simulation of the entire high level radioactive waste complex at the Savannah River Site has been developed using SPEEDUP(tm) software. The model represents mass transfer, evaporation, precipitation, sludge washing, effluent treatment, and vitrification unit operation processes through the solution of 7800 coupled differential and algebraic equations. Twenty-seven discrete chemical constituents are tracked through the unit operations. The simultaneous simulation of concurrent batch and continuous processes is achieved by several novel, customized SPEEDUP(tm) algorithms. Due to the model's computational burden, a high-end workstation is required: simulation of a year's operation of the complex requires approximately three CPU hours on an IBM RS/6000 Model 590 processor. The model will be used to develop optimal high level waste (HLW) processing strategies over a thirty year time horizon. It will be employed to better understand the dynamic inter-relationships between different HLW unit operations, and to suggest strategies that will maximize available working tank space during the early years of operation and minimize overall waste processing cost over the long-term history of the complex. Model validation runs are currently underway, with comparisons against actual plant operating data providing an excellent match.

  13. Assessing the Lexico-Grammatical Characteristics of a Corpus of College-Level Statistics Textbooks: Implications for Instruction and Practice

    Science.gov (United States)

    Wagler, Amy E.; Lesser, Lawrence M.; González, Ariel I.; Leal, Luis

    2015-01-01

    A corpus of current editions of statistics textbooks was assessed to compare aspects and levels of readability for the topics of "measures of center," "line of fit," "regression analysis," and "regression inference." Analysis with lexical software of these text selections revealed that the large corpus can…

  14. Computer simulation of two-level pedicle subtraction osteotomy for severe thoracolumbar kyphosis in ankylosing spondylitis

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2017-01-01

    Full Text Available Background: Advanced ankylosing spondylitis is often associated with thoracolumbar kyphosis, resulting in an abnormal spinopelvic balance and pelvic morphology. Different osteotomy techniques have been used to correct AS deformities; unfortunately, not all AS patients can gain spinal sagittal balance and good horizontal vision after osteotomy. Materials and Methods: Fourteen consecutive AS patients with severe thoracolumbar kyphosis who were treated with two-level PSO were studied retrospectively. All were male with a mean age of 34.9 ± 9.6 years. The followup ranged from 1 to 5 years. Preoperative computer simulations using the Surgimap Spinal software were performed for all patients, and the osteotomy level and angle determined from the computer simulation were used surgically. Spinal sagittal parameters were measured preoperatively, after the computer simulation, and postoperatively and included thoracic kyphosis (TK), lumbar lordosis (LL), sagittal vertical axis (SVA), pelvic incidence, pelvic tilt (PT), and sacral slope (SS). The level of correlation between the computer simulation and postoperative parameters was evaluated, and the differences between preoperative and postoperative parameters were compared. The visual analog scale (VAS) for back pain and clinical outcome were also assessed. Results: Six cases underwent PSO at L1 and L3, five cases at L2 and T12, and three cases at L3 and T12. TK was corrected from 57.8 ± 15.2° preoperatively to 45.3 ± 7.7° postoperatively (P < 0.05), LL from 9.3 ± 17.5° to −52.3 ± 3.9° (P < 0.001), SVA from 154.5 ± 36.7 to 37.8 ± 8.4 mm (P < 0.001), PT from 43.3 ± 6.1° to 18.0 ± 0.9° (P < 0.001), and SS from 0.8 ± 7.0° to 26.5 ± 10.6° (P < 0.001). The LL, VAS, and PT of the simulated two-level PSO were highly consistent with, or almost the same as, the postoperative parameters. The correlations between the computer simulations and postoperative parameters were significant. The VAS decreased

  15. Process-based modelling to evaluate simulated groundwater levels and frequencies in a Chalk catchment in south-western England

    Science.gov (United States)

    Brenner, Simon; Coxon, Gemma; Howden, Nicholas J. K.; Freer, Jim; Hartmann, Andreas

    2018-02-01

    Chalk aquifers are an important source of drinking water in the UK. Due to their properties, they are particularly vulnerable to groundwater-related hazards like floods and droughts. Understanding and predicting groundwater levels is therefore important for effective and safe water management. Chalk is known for its high porosity and is, due to its dissolvability, exposed to karstification and strong subsurface heterogeneity. To cope with the karstic heterogeneity and limited data availability, specialised modelling approaches are required that balance model complexity and data availability. In this study, we present a novel approach to evaluate simulated groundwater level frequencies derived from a semi-distributed karst model that represents subsurface heterogeneity by distribution functions. Simulated groundwater storages are transferred into groundwater levels using evidence from different observation wells. Using a percentile approach we can assess the number of days exceeding or falling below selected groundwater level percentiles. Firstly, we evaluate the performance of the model when simulating groundwater level time series using a split sample test and parameter identifiability analysis. Secondly, we apply a split sample test to the simulated groundwater level percentiles to explore the performance in predicting groundwater level exceedances. We show that the model provides robust simulations of discharge and groundwater levels at three observation wells at a test site in a chalk-dominated catchment in south-western England. The second split sample test also indicates that the percentile approach is able to reliably predict groundwater level exceedances across all considered timescales up to their 75th percentile. However, when looking at the 90th percentile, it only provides acceptable predictions for long time periods and it fails when the 95th percentile of groundwater exceedance levels is considered. By modifying the historic forcings of our model
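
    The percentile approach amounts to counting, per threshold, the days on which a series exceeds a given percentile of the reference series. A minimal sketch on synthetic daily series is shown below; the thresholds and data are placeholders, not the Chalk catchment model output.

        # Count days exceeding selected percentiles in observed vs simulated series
        # (synthetic daily groundwater levels).
        import numpy as np

        rng = np.random.default_rng(11)
        days = 3650
        seasonal = 2.0 * np.sin(2 * np.pi * np.arange(days) / 365.0)
        observed = 50.0 + seasonal + rng.normal(0.0, 0.5, days)
        simulated = observed + rng.normal(0.05, 0.6, days)   # imperfect simulation

        for q in (50, 75, 90, 95):
            threshold = np.percentile(observed, q)
            obs_days = int(np.sum(observed > threshold))
            sim_days = int(np.sum(simulated > threshold))
            print(f"P{q}: observed exceedance days = {obs_days}, simulated = {sim_days}")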

  16. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    Science.gov (United States)

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt the spatial resolution of models.

  17. Treatment simulations with a statistical deformable motion model to evaluate margins for multiple targets in radiotherapy for high-risk prostate cancer

    International Nuclear Information System (INIS)

    Thörnqvist, Sara; Hysing, Liv B.; Zolnay, Andras G.; Söhn, Matthias; Hoogeman, Mischa S.; Muren, Ludvig P.; Bentzen, Lise; Heijmen, Ben J.M.

    2013-01-01

    Background and purpose: Deformation and correlated target motion remain challenges for margin recipes in radiotherapy (RT). This study presents a statistical deformable motion model for multiple targets and applies it to margin evaluations for locally advanced prostate cancer, i.e. RT of the prostate (CTV-p), seminal vesicles (CTV-sv) and pelvic lymph nodes (CTV-ln). Material and methods: The 19 patients included in this study all had 7–10 repeat CT-scans available that were rigidly aligned with the planning CT-scan using intra-prostatic implanted markers, followed by deformable registrations. The displacement vectors from the deformable registrations were used to create patient-specific statistical motion models. The models were applied in treatment simulations to determine probabilities for adequate target coverage, e.g. by establishing distributions of the accumulated dose to 99% of the target volumes (D99) for various CTV–PTV expansions in the planning-CTs. Results: The method allowed for estimation of the expected accumulated dose and its variance of different DVH parameters for each patient. Simulations of inter-fractional motion resulted in 7, 10, and 18 patients with an average D99 > 95% of the prescribed dose for CTV-p expansions of 3 mm, 4 mm and 5 mm, respectively. For CTV-sv and CTV-ln, expansions of 3 mm, 5 mm and 7 mm resulted in 1, 11 and 15 vs. 8, 18 and 18 patients respectively with an average D99 > 95% of the prescription. Conclusions: Treatment simulations of target motion revealed large individual differences in accumulated dose, mainly for CTV-sv, which demanded the largest margins, whereas those required for CTV-p and CTV-ln were comparable.

  18. Statistical decay of giant resonances

    International Nuclear Information System (INIS)

    Dias, H.; Teruya, N.; Wolynec, E.

    1986-01-01

    Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208Pb using the experimental levels of 207Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that the decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

  19. Statistical decay of giant resonances

    International Nuclear Information System (INIS)

    Dias, H.; Teruya, N.; Wolynec, E.

    1986-02-01

    Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208Pb using the experimental levels of 207Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that the decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

  20. Monte Carlo molecular simulations: improving the statistical efficiency of samples with the help of artificial evolution algorithms; Simulations moleculaires de Monte Carlo: amelioration de l'efficacite statistique de l'echantillonnage grace aux algorithmes d'evolution artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, B.

    2002-03-01

    Molecular simulation aims at simulating interacting particles that describe a physico-chemical system. When considering Markov Chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers, for example). Correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. the ability to rapidly provide uncorrelated states covering the whole configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into efficiency criteria were first sought, before identifying parameters that could be optimized. The relative frequencies of each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of the samples with respect to the chosen criterion. The same algorithm was applied to improve the Parallel Tempering technique, optimizing at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swaps between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)
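
    The idea of evolving the relative frequencies of Monte Carlo moves can be caricatured with a toy loop: score each candidate frequency by a crude sampling-efficiency criterion and keep the best, mutated copies forming the next generation. The target, move types, and criterion below are illustrative only and do not reproduce the genetic-server architecture described in the thesis.

        # Toy "artificial evolution" of the frequency of two Metropolis move types
        # (small vs large displacements), scored by mean squared jump distance.
        import numpy as np

        rng = np.random.default_rng(8)

        def run_chain(p_small, n_steps=5000):
            """Metropolis sampling of a standard normal using two move types."""
            x, sq_jump = 0.0, 0.0
            for _ in range(n_steps):
                step = 0.2 if rng.random() < p_small else 3.0
                proposal = x + rng.normal(0.0, step)
                if np.log(rng.random()) < 0.5 * (x * x - proposal * proposal):
                    sq_jump += (proposal - x) ** 2
                    x = proposal
            return sq_jump / n_steps        # efficiency criterion (larger is better)

        # Evolutionary loop: mutate the frequency of "small" moves, keep the best.
        population = list(rng.uniform(0.0, 1.0, 6))
        for _ in range(10):
            scores = [run_chain(p) for p in population]
            best = population[int(np.argmax(scores))]
            population = [best] + [float(np.clip(best + rng.normal(0.0, 0.1), 0.0, 1.0))
                                   for _ in range(5)]
        print(f"evolved frequency of small moves: {best:.2f}")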

  1. Pseudo-populations a basic concept in statistical surveys

    CERN Document Server

    Quatember, Andreas

    2015-01-01

    This book emphasizes that artificial or pseudo-populations play an important role in statistical surveys from finite universes in two ways: first, the concept of pseudo-populations may substantially improve users’ understanding of various aspects of sampling theory and survey methodology; an example of this scenario is the Horvitz-Thompson estimator. Second, statistical procedures exist in which pseudo-populations actually have to be generated. An example of such a scenario can be found in simulation studies in the field of survey sampling, where close-to-reality pseudo-populations are generated from known sample and population data to form the basis for the simulation process. The chapters focus on estimation methods, sampling techniques, nonresponse, questioning designs and statistical disclosure control. This book is a valuable reference in understanding the importance of the pseudo-population concept and applying it in teaching and research.

  2. Sub-Poissonian statistics of quantum jumps in single molecule or atomic ion

    International Nuclear Information System (INIS)

    Osad'ko, I.S.; Gus'kov, D.N.

    2007-01-01

    A theory for the statistics of quantum jumps in a single molecule or ion driven by a continuous-wave laser field is developed. These quantum jumps can relate to nonradiative singlet-triplet transitions in a molecule or to on → off jumps in a single ion with shelving processes. The distribution function wN(T) of quantum jumps in a time interval T is found. Computer simulation of quantum jumps is realized. Statistical treatment of the simulated jumps reveals sub-Poissonian statistics of quantum jumps. The theoretical distribution function wN(T) fits the distribution of jumps found from the simulated data well. Experimental data on quantum jumps found in experiments with a single Hg+ ion are described well by the function wN(T).
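
    Sub-Poissonian counting statistics can be recognized from a Fano factor below one. The sketch below generates a renewal process whose waiting times are Erlang (a sum of exponential stages), which is more regular than a Poisson process and therefore sub-Poissonian; it is a generic illustration, not the theory behind wN(T).

        # Fano factor of jump counts for a renewal process with Erlang waiting times.
        # Fano factor < 1 indicates sub-Poissonian counting statistics.
        import numpy as np

        rng = np.random.default_rng(2)
        k, rate = 3, 1.0            # number of exponential stages, rate per stage
        t_window = 200.0            # length of each counting window
        n_windows = 5000

        counts = np.empty(n_windows, dtype=int)
        for w in range(n_windows):
            t, n = 0.0, 0
            while True:
                t += rng.gamma(k, 1.0 / rate)   # Erlang(k) waiting time to the next jump
                if t > t_window:
                    break
                n += 1
            counts[w] = n

        fano = counts.var(ddof=1) / counts.mean()
        print(f"mean jumps per window = {counts.mean():.1f}, Fano factor = {fano:.3f}")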

  3. Statistics of resonances in a one-dimensional chain: a weak disorder limit

    International Nuclear Information System (INIS)

    Vinayak

    2012-01-01

    We study statistics of resonances in a one-dimensional disordered chain coupled to an outer world simulated by a perfect lead. We consider a limiting case of weak disorder and derive some results which are new in these studies. The main focus of this study is to describe the statistics of the scattered complex energies. We derive compact analytic statistical results for long chains. These results are found to be in good agreement with numerical simulations. (paper)

  4. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  5. Model for neural signaling leap statistics

    Energy Technology Data Exchange (ETDEWEB)

    Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratorio de Fisica Atomica e Lasers Departamento de Fisica, Universidade Federal da ParaIba Caixa Postal 5086 58051-900 Joao Pessoa, Paraiba (Brazil)

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed qualitative change between Normal statistics (with T = 37.5°C, awake regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  6. Probability and statistics for computer science

    CERN Document Server

    Johnson, James L

    2011-01-01

    Comprehensive and thorough development of both probability and statistics for serious computer scientists; goal-oriented: "to present the mathematical analysis underlying probability results". Special emphases on simulation and discrete decision theory; mathematically rich, but self-contained text, at a gentle pace; review of calculus and linear algebra in an appendix; mathematical interludes (in each chapter) which examine mathematical techniques in the context of their probabilistic or statistical importance; numerous section exercises, summaries, historical notes, and Further Readings for reinforcement.

  7. Quantum simulation with natural decoherence

    International Nuclear Information System (INIS)

    Tseng, C. H.; Somaroo, S.; Sharf, Y.; Knill, E.; Laflamme, R.; Havel, T. F.; Cory, D. G.

    2000-01-01

    A quantum system may be efficiently simulated by a quantum information processor as suggested by Feynman and developed by Lloyd, Wiesner, and Zalka. Within the limits of the experimental implementation, simulation permits the design and control of the kinematic and dynamic parameters of a quantum system. Extension to the inclusion of the effects of decoherence, if approached from a full quantum-mechanical treatment of the system and the environment, or from a semiclassical fluctuating field treatment (Langevin), requires the difficult access to dynamics on the time scale of the environment correlation time. Alternatively, a quantum-statistical approach may be taken which exploits the natural decoherence of the experimental system, and requires a more modest control of the dynamics. This is illustrated for quantum simulations of a four-level quantum system by a two-spin NMR ensemble quantum information processor. (c) 2000 The American Physical Society

  8. Simulation and controller design for an agricultural sprayer boom leveling system

    KAUST Repository

    Sun, Jian

    2011-01-01

    According to agricultural precision requirements, the distance from the sprayer nozzles to the crops should be kept between 50 cm and 70 cm. The sprayer boom also needs to be kept parallel to the field during the operation process, so that the quality of the chemical droplet distribution on the crops can be guaranteed. In this paper we introduce a sprayer boom leveling system for agricultural sprayer vehicles with an electro-hydraulic auto-leveling system. A suitable hydraulic actuating cylinder and valve were selected according to the specific system specifications. Furthermore, a compensation controller for the electro-hydraulic system was designed based on the mathematical model. With simulations we can optimize the performance of this controller to ensure a fast leveling response when the sprayer boom is inclined. © 2011 IEEE.

  9. Statistics Anxiety and Instructor Immediacy

    Science.gov (United States)

    Williams, Amanda S.

    2010-01-01

    The purpose of this study was to investigate the relationship between instructor immediacy and statistics anxiety. It was predicted that students receiving immediacy would report lower levels of statistics anxiety. Using a pretest-posttest-control group design, immediacy was measured using the Instructor Immediacy scale. Statistics anxiety was…

  10. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  11. Applied statistical thermodynamics

    CERN Document Server

    Lucas, Klaus

    1991-01-01

    The book guides the reader from the foundations of statistical thermodynamics including the theory of intermolecular forces to modern computer-aided applications in chemical engineering and physical chemistry. The approach is new. The foundations of quantum and statistical mechanics are presented in a simple way and their applications to the prediction of fluid phase behavior of real systems are demonstrated. A particular effort is made to introduce the reader to explicit formulations of intermolecular interaction models and to show how these models influence the properties of fluid systems. The established methods of statistical mechanics - computer simulation, perturbation theory, and numerical integration - are discussed in a style appropriate for newcomers and are extensively applied. Numerous worked examples illustrate how practical calculations should be carried out.

  12. Gregor Mendel, His Experiments and Their Statistical Evaluation

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 99, č. 1 (2014), s. 87-99 ISSN 1211-8788 Institutional support: RVO:67985807 Keywords : Mendel * history of genetics * Mendel-Fisher controversy * statistical analysis * binomial distribution * numerical simulation Subject RIV: BB - Applied Statistics, Operational Research http://www.mzm.cz/fileadmin/user_upload/publikace/casopisy/amm_sb_99_1_2014/08kalina.pdf

  13. Evaluation of higher order statistics parameters for multi channel sEMG using different force levels.

    Science.gov (United States)

    Naik, Ganesh R; Kumar, Dinesh K

    2011-01-01

    The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and motor unit action potential (MUAP) varies due to the movement of the position of the electrode or due to changes in contraction level. This research deals with evaluating the non-Gaussianity of the surface electromyogram (sEMG) signal using higher order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDF) of sEMG signals were non-Gaussian. For lower MVCs (below 30% of MVC), the PDF measures tend towards a Gaussian process. The above measures were verified by computing the kurtosis values for different MVCs.
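
    The kurtosis-based Gaussianity check mentioned above can be written in a few lines; the signals below are synthetic analogues (a Gaussian record and a heavier-tailed Laplacian record), not recorded sEMG.

        # Excess kurtosis as a simple higher-order-statistics Gaussianity check.
        import numpy as np

        def excess_kurtosis(x):
            centred = np.asarray(x, dtype=float) - np.mean(x)
            return float(np.mean(centred ** 4) / np.mean(centred ** 2) ** 2 - 3.0)

        rng = np.random.default_rng(4)
        n = 50_000
        near_gaussian = rng.normal(0.0, 1.0, n)    # analogue of a near-Gaussian record
        heavier_tailed = rng.laplace(0.0, 1.0, n)  # analogue of a non-Gaussian record

        for name, signal in [("near-Gaussian", near_gaussian), ("heavier-tailed", heavier_tailed)]:
            print(f"{name}: excess kurtosis = {excess_kurtosis(signal):.3f}")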

  14. Online Statistics Labs in MSW Research Methods Courses: Reducing Reluctance toward Statistics

    Science.gov (United States)

    Elliott, William; Choi, Eunhee; Friedline, Terri

    2013-01-01

    This article presents results from an evaluation of an online statistics lab as part of a foundations research methods course for master's-level social work students. The article discusses factors that contribute to an environment in social work that fosters attitudes of reluctance toward learning and teaching statistics in research methods…

  15. High-Fidelity Simulation in Occupational Therapy Curriculum: Impact on Level II Fieldwork Performance

    Directory of Open Access Journals (Sweden)

    Rebecca Ozelie

    2016-10-01

    Full Text Available Simulation experiences provide experiential learning opportunities during artificially produced real-life medical situations in a safe environment. Evidence supports using simulation in health care education, yet limited quantitative evidence exists in occupational therapy. This study aimed to evaluate the differences in scores on the AOTA Fieldwork Performance Evaluation for the Occupational Therapy Student of Level II occupational therapy students who received high-fidelity simulation training and students who did not. A retrospective analysis of 180 students from a private university was used. Independent samples nonparametric t tests examined mean differences between Fieldwork Performance Evaluation scores of those who did and did not receive simulation experiences in the curriculum. Mean ranks were also analyzed for subsection scores and practice settings. Results of this study found no significant difference in overall Fieldwork Performance Evaluation scores between the two groups. The students who completed simulation and had fieldwork in inpatient rehabilitation had the greatest increase in mean rank scores and increases in several subsections. The outcome measure used in this study was found to have limited discriminatory capability and may have affected the results; however, this study finds that using simulation may be a beneficial supplement to didactic coursework in occupational therapy curriculums.

  16. A log-Weibull spatial scan statistic for time to event data.

    Science.gov (United States)

    Usman, Iram; Rosychuk, Rhonda J

    2018-06-13

    Spatial scan statistics have been used for the identification of geographic clusters of elevated numbers of cases of a condition such as disease outbreaks. These statistics accompanied by the appropriate distribution can also identify geographic areas with either longer or shorter time to events. Other authors have proposed the spatial scan statistics based on the exponential and Weibull distributions. We propose the log-Weibull as an alternative distribution for the spatial scan statistic for time to events data and compare and contrast the log-Weibull and Weibull distributions through simulation studies. The effect of type I differential censoring and power have been investigated through simulated data. Methods are also illustrated on time to specialist visit data for discharged patients presenting to emergency departments for atrial fibrillation and flutter in Alberta during 2010-2011. We found northern regions of Alberta had longer times to specialist visit than other areas. We proposed the spatial scan statistic for the log-Weibull distribution as a new approach for detecting spatial clusters for time to event data. The simulation studies suggest that the test performs well for log-Weibull data.

  17. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis).
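
    A minimal sketch of the surrogate-data test is given below, assuming phase-randomized (FFT) surrogates and a simple Lempel-Ziv phrase-counting complexity of the binarized series as the discriminating statistic; the "signal", noise level, and number of surrogates are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def lz_complexity(bits):
    """Simplified Lempel-Ziv complexity: number of new phrases in a left-to-right parse."""
    s = "".join(map(str, bits))
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it has already been seen in the prefix
        while i + l <= n and s[i:i + l] in s[:i]:
            l += 1
        c += 1
        i += l
    return c

def phase_randomized(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=X.size)
    phases[0] = 0.0                      # keep the mean untouched
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def complexity(x):
    return lz_complexity((x > np.median(x)).astype(int))

# Toy "deterministic signal + noise" series (a stand-in for a contaminated strain series).
t = np.linspace(0, 10, 2048)
data = np.sin(2 * np.pi * 3 * t * (1 + 0.05 * t)) + 1.5 * rng.standard_normal(t.size)

c_data = complexity(data)
c_surr = [complexity(phase_randomized(data, rng)) for _ in range(99)]
# An extreme rank of the data statistic within the surrogate ensemble rejects
# the null hypothesis of a purely linear stochastic process.
rank = sum(c <= c_data for c in c_surr)
print(f"data complexity {c_data}, surrogate rank {rank}/99")
```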

  18. A comparison of educational strategies for the acquisition of nursing student's performance and critical thinking: simulation-based training vs. integrated training (simulation and critical thinking strategies).

    Science.gov (United States)

    Zarifsanaiey, Nahid; Amini, Mitra; Saadat, Farideh

    2016-11-16

    There is a need to change the focus of nursing education from traditional teacher-centered training programs to student-centered active methods. The integration of the two active learning techniques will improve the effectiveness of training programs. The objective of this study is to compare the effects of integrated training (simulation and critical thinking strategies) and simulation-based training on the performance level and critical thinking ability of nursing students. The present quasi-experimental study was performed in 2014 on 40 students who were studying the practical nursing principles and skills course in the first half of the academic year at Shiraz University of Medical Sciences. Students were randomly divided into control (n = 20) and experimental (n = 20) groups. After training students through simulation and integrated education (simulation and critical thinking strategies), the students' critical thinking ability and performance were evaluated via the California Critical Thinking Ability Questionnaire B (CCTST) and an Objective Structured Clinical Examination (OSCE) comprising 10 stations, respectively. The external reliability of the California Critical Thinking questionnaire was reported by Case B. to be between 0.78 and 0.80, and the validity of the OSCE was approved by 5 members of the faculty. Furthermore, using the split-half method (the correlation between odd and even stations), the reliability of the test was approved with a correlation coefficient of 0.66. Data were analyzed using the t-test and Mann-Whitney test. A significance level of 0.05 was considered statistically significant. The mean score of the experimental group's performance level was higher than that of the control group. This difference was statistically significant, and students in the experimental group performed significantly better at the OSCE stations than the control group (P < 0.05), whereas critical thinking did not increase significantly before and after the intervention.

  19. West Valley high-level nuclear waste glass development: a statistically designed mixture study

    Energy Technology Data Exchange (ETDEWEB)

    Chick, L.A.; Bowen, W.M.; Lokken, R.O.; Wald, J.W.; Bunnell, L.R.; Strachan, D.M.

    1984-10-01

    The first full-scale conversion of high-level commercial nuclear wastes to glass in the United States will be conducted at West Valley, New York, by West Valley Nuclear Services Company, Inc. (WVNS), for the US Department of Energy. Pacific Northwest Laboratory (PNL) is supporting WVNS in the design of the glass-making process and the chemical formulation of the glass. This report describes the statistically designed study performed by PNL to develop the glass composition recommended for use at West Valley. The recommended glass contains 28 wt% waste, as limited by process requirements. The waste loading and the silica content (45 wt%) are similar to those in previously developed waste glasses; however, the new formulation contains more calcium and less boron. A series of tests verified that the increased calcium results in improved chemical durability and does not adversely affect the other modeled properties. The optimization study assessed the effects of seven oxide components on glass properties. Over 100 melts combining the seven components into a wide variety of statistically chosen compositions were tested. Viscosity, electrical conductivity, thermal expansion, crystallinity, and chemical durability were measured and empirically modeled as a function of the glass composition. The mathematical models were then used to predict the optimum formulation. This glass was tested and adjusted to arrive at the final composition recommended for use at West Valley. 56 references, 49 figures, 18 tables.
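
    The property-composition modelling step of such a mixture study can be illustrated with a first-order Scheffé (linear blending) model fitted by least squares. The component set, property values, and data below are synthetic placeholders rather than the West Valley measurements, and the actual study used higher-order empirical models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic mixture data: mass fractions of 4 oxide components summing to 1.
components = ["SiO2", "B2O3", "CaO", "waste"]
X = rng.dirichlet(alpha=[5, 2, 1, 4], size=40)           # 40 statistically chosen melts

# Hypothetical "true" blending coefficients for a property such as log viscosity.
beta_true = np.array([3.0, -1.0, 0.5, 1.2])
y = X @ beta_true + 0.05 * rng.standard_normal(40)        # measured property + noise

# First-order Scheffe model: property = sum_i beta_i * x_i (no intercept,
# because the component fractions already sum to one).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(components, beta_hat):
    print(f"{name:5s} blending coefficient ~ {b:+.2f}")

# The fitted model can then predict the property of a candidate glass formulation.
candidate = np.array([0.45, 0.10, 0.17, 0.28])            # e.g. 28 wt% waste loading
print("predicted property:", float(candidate @ beta_hat))
```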

  20. Molecular simulation studies on thermophysical properties with application to working fluids

    CERN Document Server

    Raabe, Gabriele

    2017-01-01

    This book discusses the fundamentals of molecular simulation, starting with the basics of statistical mechanics and providing introductions to Monte Carlo and molecular dynamics simulation techniques. It also offers an overview of force-field models for molecular simulations and their parameterization, with a discussion of specific aspects. The book then summarizes the available know-how for analyzing molecular simulation outputs to derive information on thermophysical and structural properties. Both the force-field modeling and the analysis of simulation outputs are illustrated by various examples. Simulation studies on recently introduced HFO compounds as working fluids for different technical applications demonstrate the value of molecular simulations in providing predictions for poorly understood compounds and gaining a molecular-level understanding of their properties. This book will prove a valuable resource to researchers and students alike.

  1. Real - time Dynamic Simulation and Prediction of Groundwater in Typical Arid Area Based on SPASS Improvement

    Science.gov (United States)

    Wang, Xiao-ming

    2018-03-01

    In traditional groundwater numerical simulation, the steps of model establishment, parameter identification, and verification, and especially the fitting of water levels, often produce results that differ considerably from the observed values. Based on the SPASS software, extensive statistical analysis of the numerical simulation results shows that the complexity of the terrain, the distribution of lithology, and the model parameters have a strong influence on the simulated groundwater level in the study area. Through multi-factor analysis and adjustment, the simulated groundwater flow becomes similar to the actual observations. The final result is then taken as the standard value, and the groundwater in the study area is simulated and predicted in real time. The simulation results provide technical support for the further development and utilization of the local water resources.

  2. Rainfall Downscaling Conditional on Upper-air Atmospheric Predictors: Improved Assessment of Rainfall Statistics in a Changing Climate

    Science.gov (United States)

    Langousis, Andreas; Mamalakis, Antonis; Deidda, Roberto; Marrocu, Marino

    2015-04-01

    To improve the skill of Global Climate Models (GCMs) and Regional Climate Models (RCMs) in reproducing the statistics of rainfall at a basin level and at hydrologically relevant temporal scales (e.g. daily), two types of statistical approaches have been suggested. One is the statistical correction of climate model rainfall outputs using historical series of precipitation. The other is the use of stochastic models of rainfall to conditionally simulate precipitation series, based on large-scale atmospheric predictors produced by climate models (e.g. geopotential height, relative vorticity, divergence, mean sea level pressure). The latter approach, usually referred to as statistical rainfall downscaling, aims at reproducing the statistical character of rainfall, while accounting for the effects of large-scale atmospheric circulation (and, therefore, climate forcing) on rainfall statistics. While promising, statistical rainfall downscaling has not attracted much attention in recent years, since the suggested approaches involved complex (i.e. subjective or computationally intense) identification procedures of the local weather, in addition to demonstrating limited success in reproducing several statistical features of rainfall, such as seasonal variations, the distributions of dry and wet spell lengths, the distribution of the mean rainfall intensity inside wet periods, and the distribution of rainfall extremes. In an effort to remedy those shortcomings, Langousis and Kaleris (2014) developed a statistical framework for simulation of daily rainfall intensities conditional on upper air variables, which accurately reproduces the statistical character of rainfall at multiple time-scales. Here, we study the relative performance of: a) quantile-quantile (Q-Q) correction of climate model rainfall products, and b) the statistical downscaling scheme of Langousis and Kaleris (2014), in reproducing the statistical structure of rainfall, as well as rainfall extremes, at a
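
    The quantile-quantile correction mentioned under (a) can be sketched in a few lines: an empirical transfer function maps each simulated rainfall value to the observed value with the same non-exceedance probability. The series below are synthetic, and operational implementations typically work per season or month and treat the wet-day frequency separately.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile-quantile correction of model rainfall."""
    probs = np.linspace(0.01, 0.99, 99)
    q_model = np.quantile(model_hist, probs)
    q_obs = np.quantile(obs_hist, probs)
    # Map each future model value to its historical model quantile,
    # then read off the observed value at that quantile.
    p = np.interp(model_future, q_model, probs)
    return np.interp(p, probs, q_obs)

# Synthetic daily rainfall: the model is too drizzly and underestimates extremes.
obs = rng.gamma(shape=0.4, scale=12.0, size=5000)
model = rng.gamma(shape=0.7, scale=5.0, size=5000)
model_future = rng.gamma(shape=0.7, scale=5.5, size=5000)

corrected = quantile_map(model, obs, model_future)
print("raw model 99th percentile:      ", round(np.percentile(model_future, 99), 1))
print("corrected model 99th percentile:", round(np.percentile(corrected, 99), 1))
print("observed 99th percentile:       ", round(np.percentile(obs, 99), 1))
```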

  3. Analogue circuits simulation

    Energy Technology Data Exchange (ETDEWEB)

    Mendo, C

    1988-09-01

    Most analogue simulators have evolved from SPICE. The history and a description of SPICE-like simulators are given. From a mathematical formulation of the electronic circuit, the following analyses are possible: DC, AC, transient, noise, distortion, worst-case, and statistical.

  4. Statistical error of spin transfer to hyperon at RHIC energy

    International Nuclear Information System (INIS)

    Han Ran; Mao Yajun

    2009-01-01

    From the RHIC/PHENIX experiment data, it is found that the statistical error of the spin transfer is a few times larger than the statistical error of the single spin asymmetry. To verify the difference between σDLL and σAL, the linear least squares method was used first, followed by a simple Monte Carlo simulation to test this factor again. The simulation is consistent with the calculation, which indicates that the difference of a few times is reasonable. (authors)

  5. Statistical characterization of wave propagation in mine environments

    KAUST Repository

    Bakir, Onur

    2012-07-01

    A computational framework for statistically characterizing electromagnetic (EM) wave propagation through mine tunnels and galleries is presented. The framework combines a multi-element probabilistic collocation (ME-PC) method with a novel domain-decomposition (DD) integral equation-based EM simulator to obtain statistics of electric fields due to wireless transmitters in realistic mine environments. © 2012 IEEE.

  6. RHAPSODY. I. STRUCTURAL PROPERTIES AND FORMATION HISTORY FROM A STATISTICAL SAMPLE OF RE-SIMULATED CLUSTER-SIZE HALOS

    International Nuclear Information System (INIS)

    Wu, Hao-Yi; Hahn, Oliver; Wechsler, Risa H.; Mao, Yao-Yuan; Behroozi, Peter S.

    2013-01-01

    We present the first results from the RHAPSODY cluster re-simulation project: a sample of 96 'zoom-in' simulations of dark matter halos of 10^(14.8±0.05) h^(-1) M_☉, selected from a 1 h^(-3) Gpc^3 volume. This simulation suite is the first to resolve this many halos with ∼5 × 10^6 particles per halo in the cluster mass regime, allowing us to statistically characterize the distribution of and correlation between halo properties at fixed mass. We focus on the properties of the main halos and how they are affected by formation history, which we track back to z = 12, over five decades in mass. We give particular attention to the impact of the formation history on the density profiles of the halos. We find that the deviations from the Navarro-Frenk-White (NFW) model and the Einasto model depend on formation time. Late-forming halos tend to have considerable deviations from both models, partly due to the presence of massive subhalos, while early-forming halos deviate less but still significantly from the NFW model and are better described by the Einasto model. We find that the halo shapes depend only moderately on formation time. Departure from spherical symmetry impacts the density profiles through the anisotropic distribution of massive subhalos. Further evidence of the impact of subhalos is provided by analyzing the phase-space structure. A detailed analysis of the properties of the subhalo population in RHAPSODY is presented in a companion paper.
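
    For reference, the two profile models compared above can be written down and fitted directly; the "measured" profile in this sketch is synthetic, and the radial binning, scatter, and starting values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_nfw(r, ln_rho_s, r_s):
    """log of the Navarro-Frenk-White profile rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return ln_rho_s - np.log(x) - 2.0 * np.log1p(x)

def log_einasto(r, ln_rho_s, r_s, alpha):
    """log of the Einasto profile rho_s * exp(-2/alpha [(r/r_s)^alpha - 1])."""
    return ln_rho_s - 2.0 / alpha * ((r / r_s) ** alpha - 1.0)

# Synthetic "measured" density profile with 10% log-normal scatter.
rng = np.random.default_rng(4)
r = np.logspace(-2, 0, 25)                                # radii in units of R_vir
ln_rho_obs = log_einasto(r, 14.0, 0.25, 0.18) + 0.1 * rng.standard_normal(r.size)

p_nfw, _ = curve_fit(log_nfw, r, ln_rho_obs, p0=[14.0, 0.2])
p_ein, _ = curve_fit(log_einasto, r, ln_rho_obs, p0=[14.0, 0.2, 0.2])

# Compare how well each model describes the profile via the rms log-residual.
for name, model, p in [("NFW", log_nfw, p_nfw), ("Einasto", log_einasto, p_ein)]:
    resid = ln_rho_obs - model(r, *p)
    print(f"{name:7s} rms log-residual: {np.sqrt(np.mean(resid**2)):.3f}")
```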

  7. Population activity statistics dissect subthreshold and spiking variability in V1.

    Science.gov (United States)

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of
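
    The contrast between the two variability models can be made concrete with a cartoon simulation of a pair of neurons driven by a shared latent Gaussian input; the rates, gains, and correlation used below are arbitrary, and the rectified-Gaussian counts are left continuous for simplicity, so this is only a schematic of the DSP-versus-RG comparison, not the population model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, rho = 20000, 0.4                    # number of trials and latent correlation

# Shared latent Gaussian "drive" for the two neurons.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_trials)

# Doubly stochastic Poisson (DSP): the rate fluctuates, spikes are drawn from Poisson.
rates = np.exp(1.0 + 0.5 * z)                 # log-normal rate fluctuations
counts_dsp = rng.poisson(rates)

# Rectified Gaussian (RG): response proportional to thresholded membrane potential.
v = 2.0 + 2.0 * z                             # "membrane potential"
counts_rg = np.maximum(v, 0.0)                # rectification; no Poisson stage

for name, c in [("DSP", counts_dsp), ("RG ", counts_rg)]:
    fano = c.var(axis=0) / c.mean(axis=0)
    corr = np.corrcoef(c[:, 0], c[:, 1])[0, 1]
    print(f"{name} Fano factors {fano.round(2)}, pairwise response correlation {corr:.2f}")
```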

  8. Statistical Model Checking for Biological Systems

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2014-01-01

    Statistical Model Checking (SMC) is a highly scalable simulation-based verification approach for testing and estimating the probability that a stochastic system satisfies a given linear temporal property. The technique has been applied to (discrete and continuous time) Markov chains, stochastic...
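
    At its core, SMC is Monte Carlo estimation with a statistical guarantee: simulate the system N times, count how often the property holds, and choose N from the desired precision ε and confidence 1 − δ, for example via the Chernoff-Hoeffding bound N ≥ ln(2/δ)/(2ε²). The toy "system" below, a biased random walk that must reach a level before a deadline, is only a placeholder for a real stochastic model.

```python
import math
import numpy as np

rng = np.random.default_rng(6)

def run_once():
    """One stochastic trace: does a biased random walk reach +10 within 100 steps?"""
    x = 0
    for _ in range(100):
        x += 1 if rng.random() < 0.55 else -1
        if x >= 10:
            return True
    return False

eps, delta = 0.01, 0.05
n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))   # Chernoff-Hoeffding sample size
hits = sum(run_once() for _ in range(n))
print(f"N = {n} runs, estimated P(property) = {hits / n:.3f} "
      f"(within +/- {eps} with probability {1 - delta})")
```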

  9. Simulation Experiment Description Markup Language (SED-ML Level 1 Version 3 (L1V3

    Directory of Open Access Journals (Sweden)

    Bergmann Frank T.

    2018-03-01

    Full Text Available The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  10. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  11. Statistical characteristics of falling-film flows: A synergistic approach at the crossroads of direct numerical simulations and experiments

    Science.gov (United States)

    Charogiannis, Alexandros; Denner, Fabian; van Wachem, Berend G. M.; Kalliadasis, Serafim; Markides, Christos N.

    2017-12-01

    We scrutinize the statistical characteristics of liquid films flowing over an inclined planar surface based on film height and velocity measurements that are recovered simultaneously by application of planar laser-induced fluorescence (PLIF) and particle tracking velocimetry (PTV), respectively. Our experiments are complemented by direct numerical simulations (DNSs) of liquid films simulated for different conditions so as to expand the parameter space of our investigation. Our statistical analysis builds upon a Reynolds-like decomposition of the time-varying flow rate that was presented in our previous research effort on falling films in [Charogiannis et al., Phys. Rev. Fluids 2, 014002 (2017), 10.1103/PhysRevFluids.2.014002], and which reveals that the dimensionless ratio of the unsteady term to the mean flow rate increases linearly with the product of the coefficients of variation of the film height and bulk velocity, as well as with the ratio of the Nusselt height to the mean film height, both at the same upstream PLIF/PTV measurement location. Based on relations that are derived to describe these results, a methodology for predicting the mass-transfer capability (through the mean and standard deviation of the bulk flow speed) of these flows is developed in terms of the mean and standard deviation of the film thickness and the mean flow rate, which are considerably easier to obtain experimentally than velocity profiles. The errors associated with these predictions are estimated at ≈1.5 % and 8% respectively in the experiments and at <1 % and <2 % respectively in the DNSs. Beyond the generation of these relations for the prediction of important film flow characteristics based on simple flow information, the data provided can be used to design improved heat- and mass-transfer equipment reactors or other process operation units which exploit film flows, but also to develop and validate multiphase flow models in other physical and technological settings.

  12. Simulation of coastal floodings during a typhoon event with the consideration of future sea-level rises.

    Science.gov (United States)

    Shu-Huei, Jhang; Chih-Chung, Wen; Dong-Jiing, Doong; Cheng-Han, Tsai

    2017-04-01

    Taiwan is an island in the western Pacific Ocean that experiences more than three typhoons a year. Typhoons bring intense rainfall, high waves, and storm surges, which often result in coastal flooding. The flooding can be aggravated by sea-level rise due to global warming, which may subject Taiwan's coastal areas to more serious damage in the future than at present. The objectives of this study are to investigate, by numerical simulation, the flooding caused by typhoons in the Annan District of Tainan, a city on the southwest coast of Taiwan, considering the effects of sea-level rises at the levels suggested by the 5th Assessment Report of the IPCC (Intergovernmental Panel on Climate Change) for 2050 and 2100, respectively. The simulations were carried out using MIKE21 HD (a hydrodynamic model) and MIKE21 SW (a spectral wave model). In our simulation, we used an intense typhoon, named Soudelor, as our base typhoon; it made landfall on the east coast of Taiwan in the summer of 2015, traveled across the width of the island, and exited to the north of Tainan. The reasons for choosing this typhoon are that it passed near the study area, wind field data for this typhoon are available, and well-documented coastal wave and water level measurements exist for the passage of Typhoon Soudelor. We first used ECMWF (European Centre for Medium-Range Weather Forecasts) wind field data to reconstruct the typhoon waves and storm surges with the coupled MIKE21 SW and MIKE21 HD in a regional model. The simulated wave height and sea-level height matched the measured data satisfactorily. The wave height and storm surge calculated by the regional model provided the boundary conditions for our fine-grid domain. Different sea-level rises suggested by the IPCC were then incorporated into the fine-grid model. Since river discharge due to intense rainfall also has to be considered for coastal flooding, our fine-grid models

  13. Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames

    KAUST Repository

    Im, Hong G.; Arias, Paul G.; Chaudhuri, Swetaprovo; Uranakara, Harshavardhana A.

    2016-01-01

    Direct numerical simulations (DNS) of turbulent combustion have evolved tremendously in the past decades, thanks to the rapid advances in high performance computing technology. Today’s DNS is capable of incorporating detailed reaction mechanisms

  14. Statistical interpretation of low energy nuclear level schemes

    Energy Technology Data Exchange (ETDEWEB)

    Egidy, T von; Schmidt, H H; Behkami, A N

    1988-01-01

    Nuclear level schemes and neutron resonance spacings yield information on level densities and level spacing distributions. A total of 75 nuclear level schemes with 1761 levels and known spins and parities was investigated. The A-dependence of the level density parameters is discussed. The spacing distributions of levels near the ground state indicate a transitional character between regular and chaotic properties, while chaos dominates near the neutron binding energy.
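
    The spacing analysis referred to here can be reproduced on synthetic spectra: after rescaling so that the mean spacing is one (the simplest form of unfolding), nearest-neighbour spacings are compared with the Poisson law exp(−s) expected for regular dynamics and the Wigner surmise (πs/2)exp(−πs²/4) expected for chaotic ones. The spectra below are generated (uniform levels and a GOE random matrix), not taken from the 75 level schemes of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def unfolded_spacings(levels):
    """Nearest-neighbour spacings rescaled to unit mean (simplest unfolding)."""
    s = np.diff(np.sort(levels))
    return s / s.mean()

# Regular-like spectrum: independent uniform levels -> Poisson spacing statistics.
poisson_like = rng.uniform(0, 1000, size=2000)

# Chaotic-like spectrum: eigenvalues of a GOE random matrix -> Wigner spacing statistics.
a = rng.standard_normal((2000, 2000))
goe = (a + a.T) / np.sqrt(2.0)
goe_levels = np.linalg.eigvalsh(goe)
goe_levels = goe_levels[800:1200]   # central part, where the level density is roughly flat

for name, lv in [("uniform (Poisson-like)", poisson_like), ("GOE (Wigner-like)", goe_levels)]:
    s = unfolded_spacings(lv)
    print(f"{name:24s} fraction of spacings < 0.25: {np.mean(s < 0.25):.2f}")
# Level repulsion shows up as a much smaller fraction of very small spacings in the GOE case.
```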

  15. Exploring Factors Related to Completion of an Online Undergraduate-Level Introductory Statistics Course

    Science.gov (United States)

    Zimmerman, Whitney Alicia; Johnson, Glenn

    2017-01-01

    Data were collected from 353 online undergraduate introductory statistics students at the beginning of a semester using the Goals and Outcomes Associated with Learning Statistics (GOALS) instrument and an abbreviated form of the Statistics Anxiety Rating Scale (STARS). Data included a survey of expected grade, expected time commitment, and the…

  16. Statistics of LES simulations of large wind farms

    DEFF Research Database (Denmark)

    Andersen, Søren Juhl; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming

    2016-01-01

    . The statistical moments appear to collapse and hence the turbulence inside large wind farms can potentially be scaled accordingly. The thrust coefficient is estimated by two different reference velocities and the generic CT expression by Frandsen. A reference velocity derived from the power production is shown...... to give very good agreement and furthermore enables the very good estimation of the thrust force using only the steady CT-curve, even for very short time samples. Finally, the effective turbulence inside large wind farms and the equivalent loads are examined....

  17. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    Science.gov (United States)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  18. Statistical characterization of speckle noise in coherent imaging systems

    Science.gov (United States)

    Yaroslavsky, Leonid; Shefler, A.

    2003-05-01

    Speckle noise imposes a fundamental limitation on image quality in coherent-radiation-based imaging and optical metrology systems. Speckle noise phenomena are associated with the property of objects to diffusely scatter irradiation and with the fact that, in recording the wave field, a number of signal distortions inevitably occur due to technical limitations inherent to hologram sensors. The statistical theory of speckle noise was developed with regard only to the limited resolving power of coherent imaging devices. It is valid only asymptotically, as far as the central limit theorem of probability theory can be applied. In applications this assumption is not always applicable. Moreover, in treating the speckle noise problem one should also consider other sources of hologram deterioration. In this paper, statistical properties of speckle due to the limitation of hologram size, dynamic range, and hologram signal quantization are studied by Monte Carlo simulation for holograms recorded in the near and far diffraction zones. The simulation experiments have shown that, for limited resolving power of the imaging system, the widely accepted opinion that speckle contrast is equal to one holds only for rather severe levels of hologram size limitation. For moderate limitations, speckle contrast changes gradually from zero for no limitation to one for limitation to less than about 20% of the hologram size. The results obtained for the limitation of the hologram sensor's dynamic range and hologram signal quantization reveal that speckle noise due to these hologram signal distortions is not multiplicative and is directly associated with the severity of the limitation and quantization. On the basis of the simulation results, analytical models are suggested.
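
    A minimal Monte Carlo of the hologram-size effect follows directly from the definition of speckle contrast (standard deviation of intensity over its mean): simulate a diffuse object as a random-phase field, truncate its Fourier-domain "hologram" to a fraction of its full size, and measure the contrast of the reconstructed intensity. The grid size and truncation fractions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 512

# Diffuse object: unit-amplitude field with uniformly random phase.
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n, n)))
hologram = np.fft.fft2(obj)

for frac in [1.0, 0.5, 0.2, 0.05]:
    # Keep only a central square of the (shifted) hologram, i.e. limit its size.
    mask = np.zeros((n, n))
    k = max(1, int(frac * n) // 2)
    c = n // 2
    mask[c - k:c + k, c - k:c + k] = 1.0
    limited = np.fft.ifftshift(np.fft.fftshift(hologram) * mask)
    intensity = np.abs(np.fft.ifft2(limited)) ** 2
    contrast = intensity.std() / intensity.mean()
    print(f"hologram size fraction {frac:4.2f}: speckle contrast {contrast:.2f}")
```

    Consistent with the abstract, the contrast climbs from near zero with no size limitation toward one as the retained hologram area shrinks.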

  19. Isothermal crystallization kinetics in simulated high-level nuclear waste glass

    International Nuclear Information System (INIS)

    Vienna, J.D.; Hrma, P.; Smith, D.E.

    1997-01-01

    Crystallization kinetics of a simulated high-level waste (HLW) glass were measured and modelled. Kinetics of acmite growth in the standard HW39-4 glass were measured using the isothermal method. A time-temperature-transformation (TTT) diagram was generated from these data. Classical glass-crystal transformation kinetic models were empirically applied to the crystallization data. These models adequately describe the kinetics of crystallization in complex HLW glasses (R² = 0.908). An approach to the measurement, fitting, and use of TTT diagrams for prediction of crystallinity in a HLW glass canister is proposed.
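
    A hedged illustration of the empirical-fitting step: classical glass-crystal transformation kinetics are commonly described by the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation X(t) = 1 − exp[−(kt)^n]. The report does not state its exact model form, so the fit below to synthetic isothermal data only shows how such a model yields one point of a TTT curve.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic isothermal crystallized fractions X(t) at one hold temperature.
t = np.array([0.5, 1, 2, 4, 8, 16, 32])                  # hours
k_true, n_true = 0.15, 2.2
X = 1.0 - np.exp(-(k_true * t) ** n_true)
X = np.clip(X + 0.01 * rng.standard_normal(t.size), 1e-4, 1 - 1e-4)

# JMAK linearization: ln(-ln(1-X)) = n*ln(k) + n*ln(t), a straight line in ln(t).
y = np.log(-np.log(1.0 - X))
n_fit, intercept = np.polyfit(np.log(t), y, 1)
k_fit = np.exp(intercept / n_fit)
print(f"fitted Avrami exponent n = {n_fit:.2f}, rate constant k = {k_fit:.3f} 1/h")

# Time to reach 1 vol% crystallinity at this temperature (one point of a TTT curve).
t_1pct = (-np.log(1 - 0.01)) ** (1.0 / n_fit) / k_fit
print(f"time to 1% crystallinity ~ {t_1pct:.1f} h")
```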

  20. Introducing Statistical Research to Undergraduate Mathematical Statistics Students Using the Guitar Hero Video Game Series

    Science.gov (United States)

    Ramler, Ivan P.; Chapman, Jessica L.

    2011-01-01

    In this article we describe a semester-long project, based on the popular video game series Guitar Hero, designed to introduce upper-level undergraduate statistics students to statistical research. Some of the goals of this project are to help students develop statistical thinking that allows them to approach and answer open-ended research…

  1. Regional model simulations of New Zealand climate

    Science.gov (United States)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  2. A comparison of educational strategies for the acquisition of nursing student’s performance and critical thinking: simulation-based training vs. integrated training (simulation and critical thinking strategies)

    Directory of Open Access Journals (Sweden)

    Nahid Zarifsanaiey

    2016-11-01

    Full Text Available Abstract Background There is a need to change the focus of nursing education from traditional teacher-centered training programs to student-centered active methods. The integration of the two active learning techniques will improve the effectiveness of training programs. The objective of this study is to compare the effects of integrated training (simulation and critical thinking strategies) and simulation-based training on the performance level and critical thinking ability of nursing students. Methods The present quasi-experimental study was performed in 2014 on 40 students who were studying the practical nursing principles and skills course in the first half of the academic year at Shiraz University of Medical Sciences. Students were randomly divided into control (n = 20) and experimental (n = 20) groups. After training students through simulation and integrated education (simulation and critical thinking strategies), the students' critical thinking ability and performance were evaluated via the California Critical Thinking Ability Questionnaire B (CCTST) and an Objective Structured Clinical Examination (OSCE) comprising 10 stations, respectively. The external reliability of the California Critical Thinking questionnaire was reported by Case B. to be between 0.78 and 0.80, and the validity of the OSCE was approved by 5 members of the faculty. Furthermore, using the split-half method (the correlation between odd and even stations), the reliability of the test was approved with a correlation coefficient of 0.66. Data were analyzed using the t-test and Mann-Whitney test. A significance level of 0.05 was considered statistically significant. Results The mean scores of the experimental group's performance level were higher than those of the control group. This difference was statistically significant, and students in the experimental group performed significantly better at the OSCE stations than the control group (P < 0.05).

  3. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
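
    A compact sketch of the approach: integer resource levels are evolved by a genetic algorithm whose fitness is evaluated by repeated runs of a stochastic simulation. The "simulation" below is a made-up cost model with a shortage penalty, standing in for the discrete event model of the launch-vehicle ground operations; the population size, mutation rate, and penalty weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
N_RES, MAX_LEVEL = 4, 10                     # four resource types, 0..10 units each

def simulate_cost(levels):
    """Stochastic stand-in for a discrete event simulation: resource cost plus
    a delay penalty that grows when any resource is under-provisioned."""
    demand = rng.poisson(lam=[6, 3, 5, 2])
    shortage = np.maximum(demand - levels, 0).sum()
    return levels.sum() * 1.0 + 25.0 * shortage + rng.normal(0, 0.5)

def fitness(levels, reps=20):
    # Average over replications because the objective is noisy.
    return -np.mean([simulate_cost(levels) for _ in range(reps)])

pop = rng.integers(0, MAX_LEVEL + 1, size=(30, N_RES))
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]          # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, N_RES)                      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        if rng.random() < 0.3:                            # integer mutation
            child[rng.integers(N_RES)] = rng.integers(0, MAX_LEVEL + 1)
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best resource levels found:", best, "mean cost:", -fitness(best, reps=200))
```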

  4. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    NARCIS (Netherlands)

    Makowski, D.; Asseng, S.; Ewert, F.; Bassu, S.; Durand, J.L.; Martre, P.; Adam, M.; Aggarwal, P.K.; Angulo, C.; Baron, C.; Basso, B.; Bertuzzi, P.; Biernath, C.; Boogaard, H.; Boote, K.J.; Brisson, N.; Cammarano, D.; Challinor, A.J.; Conijn, J.G.; Corbeels, M.; Deryng, D.; Sanctis, De G.; Doltra, J.; Gayler, S.; Goldberg, R.; Grassini, P.; Hatfield, J.L.; Heng, L.; Hoek, S.B.; Hooker, J.; Hunt, L.A.; Ingwersen, J.; Izaurralde, C.; Jongschaap, R.E.E.; Jones, J.W.; Kemanian, R.A.; Kersebaum, K.C.; Kim, S.H.; Lizaso, J.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.J.; Olesen, J.E.; Osborne, T.M.; Palosuo, T.; Pravia, M.V.; Priesack, E.; Ripoche, D.; Rosenzweig, C.; Ruane, A.C.; Sau, F.; Semenov, M.A.; Shcherbak, I.; Steduto, P.; Stöckle, C.O.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Teixeira, E.; Thorburn, P.; Timlin, D.; Travasso, M.; Roetter, R.P.; Waha, K.; Wallach, D.; White, J.W.; Williams, J.R.; Wolf, J.

    2015-01-01

    Many simulation studies have been carried out to predict the effect of climate change on crop yield. Typically, in such study, one or several crop models are used to simulate series of crop yield values for different climate scenarios corresponding to different hypotheses of temperature, CO2

  5. Modern Physics Simulations

    Science.gov (United States)

    Brandt, Douglas; Hiller, John R.; Moloney, Michael J.

    1995-10-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The Simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.

  6. Effects of Using Human Patient Simulator (HPS) versus a CD-ROM on Cognition and Critical Thinking

    Directory of Open Access Journals (Sweden)

    Don Johnson, PhD

    2008-01-01

    Full Text Available Background: Very little prospective randomized experimental research exists on the use of simulation as a teaching method, and no studies have compared the two strategies of using the HPS and a CD-ROM. In addition, no researchers have investigated the effects of simulation on various levels of cognition, specifically lower-level and higher-level cognition or critical thinking. Objectives: A prospective pretest-posttest experimental mixed design (within and between) was used to determine if there were statistically significant differences between HPS and CD-ROM educational strategies in lower-level cognition, higher-level cognition, and critical thinking. Results: A repeated measures multivariate analysis of variance (RMANOVA) with LSD post-hoc tests was used to analyze the data. There were no significant differences between the HPS and CD-ROM groups on lower-level cognition scores. The HPS group did significantly better than the CD-ROM group on higher-level cognition and critical thinking scores. Conclusion: This study demonstrated that the choice of teaching strategy for lower-level cognition does not make a statistically significant difference in outcome. However, the HPS is superior to using a CD-ROM and should be considered as the teaching strategy of choice.

  7. Statistical analysis and mapping of water levels in the Biscayne aquifer, water conservation areas, and Everglades National Park, Miami-Dade County, Florida, 2000–2009

    Science.gov (United States)

    Prinos, Scott T.; Dixon, Joann F.

    2016-02-25

    Statistical analyses and maps representing mean, high, and low water-level conditions in the surface water and groundwater of Miami-Dade County were made by the U.S. Geological Survey, in cooperation with the Miami-Dade County Department of Regulatory and Economic Resources, to help inform decisions necessary for urban planning and development. Sixteen maps were created that show contours of (1) the mean of daily water levels at each site during October and May for the 2000–2009 water years; (2) the 25th, 50th, and 75th percentiles of the daily water levels at each site during October and May and for all months during 2000–2009; and (3) the differences between mean October and May water levels, as well as the differences in the percentiles of water levels for all months, between 1990–1999 and 2000–2009. The 80th, 90th, and 96th percentiles of the annual maximums of daily groundwater levels during 1974–2009 (a 35-year period) were computed to provide an indication of unusually high groundwater-level conditions. These maps and statistics provide a generalized understanding of the variations of water levels in the aquifer, rather than a survey of concurrent water levels. Water-level measurements from 473 sites in Miami-Dade County and surrounding counties were analyzed to generate statistical analyses. The monitored water levels included surface-water levels in canals and wetland areas and groundwater levels in the Biscayne aquifer.

  8. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy

  9. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    Science.gov (United States)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of
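
    The binary (Ising-like) building block of such schemes can be illustrated with a Metropolis sampler on a small grid in which the observed ("sample") sites are clamped to their data values and only the unsampled sites are updated. The grid size, coupling, and sampling density below are arbitrary, and the cost-function matching of correlation energies used in the paper is not reproduced; this is only the conditional-simulation skeleton.

```python
import numpy as np

rng = np.random.default_rng(11)
L, J, beta, n_sweeps = 64, 1.0, 0.7, 200

# Synthetic "truth" and a sparse sample of it (10% of the sites observed).
truth = np.sign(rng.standard_normal((L, L)) +
                np.sin(np.linspace(0, 4 * np.pi, L))[:, None])
observed = rng.random((L, L)) < 0.10

# Initialize: observed sites take their data values, the rest are random +/-1 spins.
spins = np.where(observed, truth, rng.choice([-1, 1], size=(L, L)))

def neighbour_sum(s, i, j):
    # Nearest neighbours with periodic boundary conditions.
    return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

for _ in range(n_sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        if observed[i, j]:
            continue                       # clamped: conditional simulation respects the data
        dE = 2.0 * J * spins[i, j] * neighbour_sum(spins, i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

agree = np.mean(spins[~observed] == truth[~observed])
print(f"agreement with withheld truth at unsampled sites: {agree:.2f}")
```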

  10. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance
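
    The overall workflow (quasi-random sampling of the parameter space, an ensemble of model runs, then a variance decomposition fitted to the outputs) can be sketched on a toy response function. The seven-parameter SOA model is obviously not reproduced here; a Sobol' sampler and an ordinary least-squares decomposition stand in for the 250-member regional-model ensemble and the generalized linear model method of the paper, and the parameter names are placeholders.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(12)
names = ["BVOC", "SIVOC", "NOx", "dep1", "dep2", "aVOC", "vol_switch"]

# Quasi-Monte Carlo (Sobol') sample of 256 members in the 7-d unit hypercube.
sampler = qmc.Sobol(d=7, scramble=True, seed=12)
X = sampler.random(256)

def toy_soa(x):
    """Toy 'SOA loading': dominated by the volatility switch, then two emission terms."""
    switch = (x[:, 6] > 0.5).astype(float)
    return (2.0 * switch + 1.2 * x[:, 0] * (1 + switch) + 0.8 * x[:, 1] * (2 - switch)
            + 0.1 * x[:, 2] + 0.05 * rng.standard_normal(len(x)))

y = toy_soa(X)

# First-order variance contributions from a linear model fit (an approximation
# to the generalized-linear-model sensitivity analysis described above).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
contrib = coef[1:] ** 2 * X.var(axis=0) / y.var()
for n, c in sorted(zip(names, contrib), key=lambda t: -t[1]):
    print(f"{n:10s} ~{100 * c:5.1f}% of variance")
```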

  11. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and the excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again, demonstrates the advantages of mass conservation and the capability to represent the interface accurately.

  12. Statistical Estimation of the Protein-Ligand Binding Free Energy Based On Direct Protein-Ligand Interaction Obtained by Molecular Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Haruki Nakamura

    2012-09-01

    Full Text Available We have developed a method for estimating the protein-ligand binding free energy (ΔG) based on the direct protein-ligand interaction obtained by a molecular dynamics simulation. Using this method, we estimated the ΔG value statistically from the average values of the van der Waals and electrostatic interactions between each amino acid of the target protein and the ligand molecule. In addition, we introduced fluctuations in the accessible surface area (ASA) and dihedral angles of the protein-ligand complex system as the entropy terms of the ΔG estimation. The present method included the fluctuation term of structural change of the protein and the effective dielectric constant. We applied this method to 34 protein-ligand complex structures. As a result, the correlation coefficient between the experimental and calculated ΔG values was 0.81, and the average error of ΔG was 1.2 kcal/mol with the use of the fixed parameters. These results were obtained from a 2 ns molecular dynamics simulation.

  13. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of- knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  14. A note on the kappa statistic for clustered dichotomous data.

    Science.gov (United States)

    Zhou, Ming; Yang, Zhao

    2014-06-30

    The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method to calculate the variance of the kappa statistic for clustered physician-patients dichotomous data, we investigate its special correlation structure and develop a new simple and efficient data generation algorithm. For the clustered physician-patients dichotomous data, based on the delta method and its special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., the number of clusters K ⩾ 50). The new proposal and the sampling-based delta method provide convenient tools for efficient computation and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research dataset and two simulated clustered physician-patients dichotomous datasets are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
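
    For orientation, the sketch below computes the kappa statistic for clustered dichotomous ratings together with a cluster bootstrap variance, i.e. one of the existing methods the proposed semi-parametric estimator is compared against; the data are simulated with a simpler correlation structure than the one analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(13)

def kappa(r1, r2):
    """Cohen's kappa for two dichotomous raters."""
    po = np.mean(r1 == r2)
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)
    return (po - pe) / (1 - pe)

# Simulated clustered data: 50 physicians (clusters), 5-14 patients each.
clusters = []
for _ in range(50):
    n_pat = rng.integers(5, 15)
    latent = (rng.random(n_pat) < rng.beta(2, 2)).astype(int)    # cluster-level tendency
    r1 = np.where(rng.random(n_pat) < 0.85, latent, 1 - latent)  # rater 1 mostly agrees
    r2 = np.where(rng.random(n_pat) < 0.85, latent, 1 - latent)  # rater 2 mostly agrees
    clusters.append((r1, r2))

r1_all = np.concatenate([c[0] for c in clusters])
r2_all = np.concatenate([c[1] for c in clusters])
k_hat = kappa(r1_all, r2_all)

# Cluster bootstrap: resample whole clusters to respect within-cluster dependence.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(clusters), size=len(clusters))
    b1 = np.concatenate([clusters[i][0] for i in idx])
    b2 = np.concatenate([clusters[i][1] for i in idx])
    boot.append(kappa(b1, b2))
se = np.std(boot, ddof=1)
print(f"kappa = {k_hat:.3f}, cluster-bootstrap SE = {se:.3f}, "
      f"95% CI ~ ({k_hat - 1.96 * se:.3f}, {k_hat + 1.96 * se:.3f})")
```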

  15. Statistics Graduate Teaching Assistants' Beliefs, Practices and Preparation for Teaching Introductory Statistics

    Science.gov (United States)

    Justice, Nicola; Zieffler, Andrew; Garfield, Joan

    2017-01-01

    Graduate teaching assistants (GTAs) are responsible for the instruction of many statistics courses offered at the university level, yet little is known about these students' preparation for teaching, their beliefs about how introductory statistics should be taught, or the pedagogical practices of the courses they teach. An online survey to examine…

  16. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process using the Monte Carlo method. A general description of the method and its applications is given. Several simulations, corresponding to slowing-down and shielding problems, were performed. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size in order to obtain reliable results while optimizing computation time. (author). 5 refs, 8 figs
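
    A minimal example of the kind of experiment described (estimate a shielding-related quantity by Monte Carlo and watch the statistical uncertainty shrink with sample size) is given below for particles crossing a slab with isotropic scattering and absorption; the cross-section, absorption probability, and slab thickness are invented values, and the 1-D transport treatment is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(14)
SIGMA_T, P_ABSORB, THICKNESS = 1.0, 0.3, 5.0   # total xs (1/cm), absorption prob., slab (cm)

def transmitted(n):
    """Fraction of n source neutrons that leak through the slab (1-D random walk)."""
    count = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                        # start at the left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / SIGMA_T)
            if x >= THICKNESS:
                count += 1
                break
            if x < 0.0 or rng.random() < P_ABSORB:
                break                            # leaked backwards or absorbed
            mu = rng.uniform(-1.0, 1.0)          # isotropic scattering (1-D direction cosine)
    return count / n

for n in [1_000, 10_000, 100_000]:
    p = transmitted(n)
    se = np.sqrt(p * (1 - p) / n)                # binomial standard error shrinks as 1/sqrt(n)
    print(f"N = {n:6d}: transmission = {p:.4f} +/- {se:.4f}")
```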

  17. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone to face the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done through various levels of education. However, the skill remains low because many people, including students, assume that statistics is just the ability to count and to use formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is taken as the minimum value for meeting the competence standard of a course, the students' mean values are lower than that standard. The misconception results emphasize which subtopics should be given attention. Based on the assessment results, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical formats, 4) probability, 5) samples, and 6) association.

  18. Simulation of Water Level Fluctuations in a Hydraulic System Using a Coupled Liquid-Gas Model

    Directory of Open Access Journals (Sweden)

    Chao Wang

    2015-08-01

    Full Text Available A model for simulating vertical water level fluctuations with coupled liquid and gas phases is presented. The Preissmann implicit scheme is used to linearize the governing equations for one-dimensional transient flow in both the liquid and gas phases, and the linear system is solved using the chasing method. Some classical cases of single-phase liquid and gas transients in pipelines and networks are studied to verify that the proposed methods are accurate and reliable. The implicit scheme is extended with a dynamic mesh to simulate the water level fluctuations in a U-tube and an open surge tank without consideration of the gas phase. Methods for coupling the liquid and gas phases are presented and used to study the transient process and the interaction between the phases, both for a gas phase confined in a chamber and for a gas phase transported in a pipeline. In particular, two simplified models are proposed: one neglecting the effect of the gas phase on the liquid phase and the other coupling the liquid and gas phases asynchronously. The numerical results indicate that the asynchronous model performs better; the model is finally applied to a hydropower station with surge tanks and air shafts to simulate the water level fluctuations and air speed.
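
    As a point of reference for the single-phase U-tube test mentioned above, water level oscillations in a U-tube obey z'' + (2g/L)z = 0 in the frictionless limit; the short script below integrates a version with an added quadratic friction term using a semi-implicit Euler scheme. The column length, friction coefficient, and initial displacement are arbitrary, and this is not the Preissmann-scheme solver of the paper.

```python
import numpy as np

g, L = 9.81, 20.0          # gravity (m/s^2), total liquid column length (m)
k = 0.05                   # lumped quadratic friction coefficient (1/m), illustrative
dt, t_end = 0.01, 60.0

z, v = 1.0, 0.0            # initial surface displacement (m) and velocity (m/s)
history = []
for step in range(int(t_end / dt)):
    a = -(2.0 * g / L) * z - k * v * abs(v)   # restoring term plus quadratic friction
    v += a * dt
    z += v * dt                               # semi-implicit (symplectic) Euler update
    history.append(z)

history = np.array(history)
period_theory = 2.0 * np.pi * np.sqrt(L / (2.0 * g))
print(f"theoretical undamped period: {period_theory:.2f} s")
print(f"max level during the last 10 s: {history[-int(10 / dt):].max():.3f} m")
```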

  19. Spacecraft Data Simulator for the test of level zero processing systems

    Science.gov (United States)

    Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem

    1994-01-01

    The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets of up to 5 Gigabytes and outputting serial test data at rates of up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial for testing LZP systems. This paper describes the system architecture, the hardware and software designs, and the test data designs. Examples of test data designs are included to illustrate the application of the SDS.
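
    The sketch below is only a hypothetical illustration of this kind of test-data generation (the frame layout, lengths, and error rates are invented for the example; only the 32-bit attached sync marker is the standard CCSDS value): it emits fixed-length frames with incrementing sequence counters and injects dropped frames and bit errors, the sort of non-ideal downlink stream an LZP system must tolerate.

        # Hypothetical sketch of SDS-style test data: fixed-length frames with
        # sequence counters, plus injected gaps (dropped frames) and bit errors.
        # The frame layout here is illustrative, not the actual CCSDS format.
        import os
        import random

        FRAME_LEN = 1115            # illustrative frame length in bytes
        SYNC = b"\x1a\xcf\xfc\x1d"  # CCSDS 32-bit attached sync marker

        def make_frame(seq):
            header = SYNC + seq.to_bytes(3, "big")
            payload = os.urandom(FRAME_LEN - len(header))
            return header + payload

        def corrupted_stream(n_frames, drop_prob=0.01, bit_error_prob=0.005, seed=0):
            rng = random.Random(seed)
            for seq in range(n_frames):
                if rng.random() < drop_prob:
                    continue                               # simulate a gap in the downlink
                frame = bytearray(make_frame(seq))
                if rng.random() < bit_error_prob:
                    i = rng.randrange(len(frame))
                    frame[i] ^= 1 << rng.randrange(8)      # flip one bit
                yield bytes(frame)

        n_emitted = sum(1 for _ in corrupted_stream(10000))
        print(f"emitted {n_emitted} of 10000 frames (rest dropped as gaps)")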

  20. The Canopy Graph and Level Statistics for Random Operators on Trees

    International Nuclear Information System (INIS)

    Aizenman, Michael; Warzel, Simone

    2006-01-01

    For operators with homogeneous disorder, it is generally expected that there is a relation between the spectral characteristics of a random operator in the infinite setting and the distribution of the energy gaps in its finite-volume versions, in corresponding energy ranges. Whereas pure point spectrum of the infinite operator goes along with Poisson level statistics, it is expected that purely absolutely continuous spectrum would be associated with gap distributions resembling the corresponding random matrix ensemble. We prove that on regular rooted trees, which exhibit both spectral types, the eigenstate point process always has a Poissonian limit. However, we also find that this does not contradict the picture described above if it is carefully interpreted, as the relevant limit of finite trees is not the infinite homogeneous tree graph but rather a single-ended 'canopy graph.' For this tree graph, the random Schroedinger operator is proven here to have only pure-point spectrum at any strength of the disorder. For more general single-ended trees it is shown that the spectrum is always singular: pure point, possibly with a singular continuous component, which is proven to occur in some cases.
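
    The paper's results are analytical, but the flavour of the level-statistics question can be illustrated numerically with a hedged sketch (the tree depth, disorder strength, and crude unfolding below are arbitrary choices): build H = -A + V on a finite rooted binary tree with i.i.d. disorder V and inspect the nearest-neighbour spacings of its eigenvalues.

        # Illustrative numerical sketch (the paper itself is analytical): a random
        # Schroedinger operator H = -A + V on a finite rooted binary tree, where A
        # is the adjacency matrix and V is i.i.d. uniform disorder; we look at the
        # nearest-neighbour level spacings of its spectrum.
        import numpy as np

        def binary_tree_adjacency(depth):
            n = 2 ** (depth + 1) - 1          # vertices of a full binary tree
            A = np.zeros((n, n))
            for child in range(1, n):
                parent = (child - 1) // 2
                A[parent, child] = A[child, parent] = 1.0
            return A

        def level_spacings(depth=9, disorder=2.0, seed=0):
            rng = np.random.default_rng(seed)
            A = binary_tree_adjacency(depth)
            V = rng.uniform(-disorder, disorder, size=A.shape[0])
            H = -A + np.diag(V)
            energies = np.linalg.eigvalsh(H)
            # Crude unfolding: normalise spacings to unit mean in a bulk window.
            bulk = energies[len(energies) // 4 : 3 * len(energies) // 4]
            s = np.diff(bulk)
            return s / s.mean()

        s = level_spacings()
        print("mean spacing:", s.mean(), " fraction of very small gaps:", (s < 0.1).mean())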

  1. Photon statistics in an N-level (N-1)-mode system

    International Nuclear Information System (INIS)

    Kozierowski, M.; Shumovskij, A.S.

    1987-01-01

    The characteristic and photon number distribution functions, the statistical moments of the photon numbers, and the correlations between modes are studied. The normally ordered variances of the photon numbers and the cross-correlation functions are calculated.

  2. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change, based on the Gradient-Augmented Level Set (GALS) strategy, is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level, ii) discontinuous treatment of thermophysical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expressions, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the subgrid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits and disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS than with the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall, the additional computational cost of GALS is almost the same as that of the standard level set technique.
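
    A one-dimensional toy version of the gradient-augmented idea is sketched below (this is not the scheme of the paper; the constant advection velocity, periodic domain, and time step are assumptions made for brevity): the level set value phi and its derivative psi are both carried on the grid and updated with a semi-Lagrangian step using cubic Hermite interpolation.

        # Toy 1-D illustration of the gradient-augmented level set strategy:
        # advect phi and its derivative psi with a semi-Lagrangian step and
        # cubic Hermite interpolation on a periodic domain.
        import numpy as np

        def hermite_eval(phi0, phi1, d0, d1, h, t):
            """Cubic Hermite interpolant and its x-derivative at local coordinate t in [0,1)."""
            h00 = 2*t**3 - 3*t**2 + 1
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            val = h00*phi0 + h*h10*d0 + h01*phi1 + h*h11*d1
            dh00 = 6*t**2 - 6*t
            dh10 = 3*t**2 - 4*t + 1
            dh01 = -6*t**2 + 6*t
            dh11 = 3*t**2 - 2*t
            der = (dh00*phi0 + h*dh10*d0 + dh01*phi1 + h*dh11*d1) / h
            return val, der

        def gals_advect(phi, psi, u, dt, h):
            n = len(phi)
            new_phi = np.empty(n)
            new_psi = np.empty(n)
            for i in range(n):
                xd = i*h - u*dt                        # departure point
                j = int(np.floor(xd / h)) % n          # cell index (periodic)
                t = (xd - np.floor(xd / h)*h) / h      # local coordinate in [0,1)
                jp = (j + 1) % n
                new_phi[i], new_psi[i] = hermite_eval(phi[j], phi[jp], psi[j], psi[jp], h, t)
            return new_phi, new_psi

        # Advect a smooth profile once around the periodic domain.
        n, L, u = 64, 1.0, 1.0
        h = L / n
        x = np.arange(n) * h
        phi = np.sin(2*np.pi*x)
        psi = 2*np.pi*np.cos(2*np.pi*x)
        dt = 0.5 * h / u
        for _ in range(int(round(L / (u*dt)))):
            phi, psi = gals_advect(phi, psi, u, dt, h)
        print("max error after one revolution:", np.abs(phi - np.sin(2*np.pi*x)).max())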

  3. Three-dimensional electromagnetic strong turbulence. I. Scalings, spectra, and field statistics

    International Nuclear Information System (INIS)

    Graham, D. B.; Robinson, P. A.; Cairns, Iver H.; Skjaeraasen, O.

    2011-01-01

    The first fully three-dimensional (3D) simulations of large-scale electromagnetic strong turbulence (EMST) are performed by numerically solving the electromagnetic Zakharov equations for electron thermal speeds ν_e with ν_e/c ≥ 0.025. The results of these simulations are presented, focusing on scaling behavior, energy density spectra, and field statistics of the Langmuir (longitudinal) and transverse components of the electric fields during steady-state strong turbulence, where multiple wave packets collapse simultaneously and the system is approximately statistically steady in time. It is shown that for ν_e/c ≲ 0.17 strong turbulence is approximately electrostatic and can be explained using the electrostatic two-component model. For ν_e/c ≳ 0.17 the power-law behaviors of the scalings, spectra, and field statistics differ from the electrostatic predictions and results because ν_e/c is sufficiently high to allow transverse modes to become trapped in density wells. The results are compared with those of past 3D electrostatic strong turbulence (ESST) simulations and 2D EMST simulations. For number density perturbations, the scaling behavior, spectra, and field statistics are shown to be only weakly dependent on ν_e/c, whereas the Langmuir and transverse scalings, spectra, and field statistics are shown to be strongly dependent on ν_e/c. Three-dimensional EMST is shown to have features in common with 2D EMST, such as a two-component structure and trapping of transverse modes, both of which depend on ν_e/c.

  4. Statistical power of model selection strategies for genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Zheyang Wu

    2009-07-01

    Full Text Available Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the
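
    As a hedged illustration of the quantity being computed (a brute-force simulation, not the analytical formulas of the paper or the markerSearchPower package; the sample size, effect size, and allele frequency are arbitrary choices): the power of a marginal-search strategy is the probability that a truly associated marker exceeds a Bonferroni-corrected threshold when m markers are tested.

        # Hedged sketch: Monte Carlo estimate of the power of a marginal (single-
        # marker) search under an additive genetic model with a Bonferroni threshold.
        import numpy as np
        from scipy import stats

        def marginal_power(n=2000, m=100000, maf=0.3, beta=0.2, n_rep=500, seed=0):
            rng = np.random.default_rng(seed)
            alpha = 0.05 / m                       # Bonferroni-corrected significance level
            z_crit = stats.norm.isf(alpha / 2.0)   # two-sided critical value
            hits = 0
            for _ in range(n_rep):
                g = rng.binomial(2, maf, size=n)          # additive genotype coding 0/1/2
                y = beta * g + rng.standard_normal(n)     # phenotype with unit noise
                r = np.corrcoef(g, y)[0, 1]
                z = r * np.sqrt(n)                        # large-sample z statistic
                hits += abs(z) > z_crit
            return hits / n_rep

        print("estimated power of marginal search:", marginal_power())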

  5. Fundamentals of classical statistical thermodynamics dissipation, relaxation, and fluctuation theorems

    CERN Document Server

    Evans, Denis James; Williams, Stephen Rodney

    2016-01-01

    Both a comprehensive overview and a treatment at the appropriate level of detail, this textbook explains thermodynamics and generalizes the subject so it can be applied to small nano- or biosystems, arbitrarily far from or close to equilibrium. In addition, nonequilibrium free energy theorems are covered with a rigorous exposition of each one. Throughout, the authors stress the physical concepts along with the mathematical derivations. For researchers and students in physics, chemistry, materials science and molecular biology, this is a useful text for postgraduate courses in statistical mechanics, thermodynamics and molecular simulations, while equally serving as a reference for university teachers and researchers in these fields.

  6. Thermal infrared imaging of the variability of canopy-air temperature difference distribution for heavy metal stress levels discrimination in rice

    Science.gov (United States)

    Zhang, Biyao; Liu, Xiangnan; Liu, Meiling; Wang, Dongmin

    2017-04-01

    This paper addresses the assessment and interpretation of the canopy-air temperature difference (Tc-Ta) distribution as an indicator for discriminating between heavy metal stress levels. The Tc-Ta distribution is simulated by coupling the energy balance equation with a modified leaf angle distribution. Statistical indices including the average value (AVG), standard deviation (SD), median, and span of Tc-Ta in the field of view of a digital thermal imager are calculated to describe the Tc-Ta distribution quantitatively and, consequently, serve as stress indicators. In the application, two rice-growing sites under "mild" and "severe" stress levels were selected as study areas. A total of 96 thermal images obtained from field measurements in the three growth stages were used in a separate application of the theoretical variation of the Tc-Ta distribution. The results demonstrated that the statistical indices calculated from both simulated and measured data exhibited an upward trend as the stress level became more severe, because heavy metal stress raises the temperature of only a portion of the leaves in the canopy. Meteorological factors, with the exception of wind speed, barely affected the sensitivity of the statistical indices. Among the statistical indices, AVG and SD were demonstrated to be the better indicators for discriminating stress levels.
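
    The statistical indices themselves are straightforward to compute; the sketch below (with purely synthetic data, since the study's imagery is not reproduced here) evaluates AVG, SD, median, and span of Tc-Ta over the canopy pixels of a thermal image.

        # Sketch of the stress indicators: given a thermal image of canopy
        # temperature Tc (2-D array, degrees C) and the air temperature Ta,
        # compute AVG, SD, median and span of Tc - Ta over canopy pixels.
        import numpy as np

        def tc_ta_indices(tc_image, ta, canopy_mask=None):
            diff = tc_image - ta
            if canopy_mask is not None:
                diff = diff[canopy_mask]          # ignore soil/background pixels
            return {
                "AVG": float(np.mean(diff)),
                "SD": float(np.std(diff, ddof=1)),
                "median": float(np.median(diff)),
                "span": float(np.max(diff) - np.min(diff)),
            }

        # Synthetic example: a stressed canopy where a fraction of leaves runs warmer.
        rng = np.random.default_rng(3)
        tc = 28.0 + rng.normal(0.0, 0.4, size=(120, 160))
        stressed = rng.random(tc.shape) < 0.3
        tc[stressed] += rng.normal(1.5, 0.5, size=stressed.sum())
        print(tc_ta_indices(tc, ta=27.0))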

  7. SoS contract verification using statistical model checking

    Directory of Open Access Journals (Sweden)

    Alessandro Mignogna

    2013-11-01

    Full Text Available Exhaustive formal verification for systems of systems (SoS) is impractical and cannot be applied on a large scale. In this paper we propose to use statistical model checking for efficient verification of SoS. We address three relevant aspects for systems of systems: 1) the model of the SoS, which includes stochastic aspects; 2) the formalization of the SoS requirements in the form of contracts; 3) the tool-chain to support statistical model checking for SoS. We adapt the SMC technique for application to heterogeneous SoS. We extend the UPDM/SysML specification language to express the SoS requirements that the implemented strategies over the SoS must satisfy. The requirements are specified with a new contract language specifically designed for SoS, targeting a high-level English-pattern language but relying on an accurate semantics given by the standard temporal logics. The contracts are verified against the UPDM/SysML specification using the Statistical Model Checker (SMC) PLASMA combined with the simulation engine DESYRE, which integrates heterogeneous behavioral models through the Functional Mock-up Interface (FMI) standard. The tool-chain allows computing an estimation of the satisfiability of the contracts by the SoS. The results help the system architect to trade off different solutions to guide the evolution of the SoS.
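
    The core of the statistical model checking step can be sketched in a few lines (this is a generic Monte Carlo estimator with a Chernoff-Hoeffding sample bound, not the PLASMA/DESYRE tool-chain; the toy stochastic model and its parameters are invented for the example):

        # Illustrative SMC core: estimate the probability that a stochastic model
        # satisfies a bounded-time property from independent simulation runs, with
        # the run count chosen so the estimate is within +/- epsilon of the true
        # value with confidence 1 - delta (Chernoff-Hoeffding bound).
        import math
        import random

        def required_runs(epsilon=0.01, delta=0.01):
            return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

        def simulate_run(rng, horizon=100, fail_rate=0.002):
            """Toy stochastic model: the property holds if no failure occurs
            within the time horizon (purely illustrative dynamics)."""
            for _ in range(horizon):
                if rng.random() < fail_rate:
                    return False
            return True

        def smc_estimate(epsilon=0.01, delta=0.01, seed=0):
            rng = random.Random(seed)
            n = required_runs(epsilon, delta)
            successes = sum(simulate_run(rng) for _ in range(n))
            return successes / n, n

        p_hat, n = smc_estimate()
        print(f"estimated satisfaction probability {p_hat:.3f} from {n} runs")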

  8. A One System Integrated Approach to Simulant Selection for Hanford High Level Waste Mixing and Sampling Tests

    International Nuclear Information System (INIS)

    Thien, Mike G.; Barnes, Steve M.

    2013-01-01

    The Hanford Tank Operations Contractor (TOC) and the Hanford Waste Treatment and Immobilization Plant (WTP) contractor are both engaged in demonstrating mixing, sampling, and transfer system capabilities using simulated Hanford High-Level Waste (HLW) formulations. This represents one of the largest remaining technical issues with the high-level waste treatment mission at Hanford. Previous testing has focused on very specific TOC or WTP test objectives, and consequently the simulants were narrowly focused on those test needs. A key attribute of the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2010-2 is to ensure that testing is performed with a simulant that represents the broad spectrum of Hanford waste. The One System Integrated Project Team is a new joint TOC and WTP organization intended to ensure technical integration of specific TOC and WTP systems and testing. A new approach to simulant definition has been jointly developed that will meet both TOC and WTP test objectives for the delivery and receipt of HLW. The process used to identify critical simulant characteristics, incorporate lessons learned from previous testing, and identify specific simulant targets that ensure TOC and WTP testing addresses the broad spectrum of Hanford waste characteristics important to mixing, sampling, and transfer performance is described.

  9. Statistical theory of dynamo

    Science.gov (United States)

    Kim, E.; Newton, A. P.

    2012-04-01

    One major problem in dynamo theory is the multi-scale nature of MHD turbulence, which requires a statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean-field α-Ω dynamo model by varying the statistical properties of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean-field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. By considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by comparing fluctuating alpha that is periodic with Gaussian coloured random noise having identical characteristic time-scales and fluctuation amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. We also show that probability density functions (PDFs) of the growth rate, magnetic field, and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour and intermittency. Finally, the precise statistical properties of the dynamo, such as temporal correlation and fluctuation amplitude, are found to depend on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the alpha effect in stochastic α-Ω nonlinear dynamo models. This is achieved by performing a comprehensive statistical comparison, computing PDFs of solar activity from observations and from our simulations of the mean-field dynamo model. The observational data used are the time history of solar activity inferred from C14 data over the past 11000 years on a long time scale and direct observations of the sun spot
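
    A drastically reduced toy version of a stochastic dynamo (not the authors' mean-field α-Ω model; the scalar equation, the Ornstein-Uhlenbeck alpha, and all parameter values are assumptions for illustration) shows how fluctuating alpha can sustain field growth even when the mean alpha is marginal:

        # Toy illustration: a scalar dynamo amplitude B driven by a fluctuating
        # alpha modelled as Gaussian coloured noise (Ornstein-Uhlenbeck process),
        # with cubic saturation.
        import numpy as np

        def stochastic_dynamo(t_max=2000.0, dt=0.01, alpha_mean=0.0, sigma=0.5,
                              tau=1.0, seed=0):
            rng = np.random.default_rng(seed)
            n = int(t_max / dt)
            b, alpha = 0.01, alpha_mean
            b_hist = np.empty(n)
            for i in range(n):
                # Ornstein-Uhlenbeck update for the fluctuating alpha.
                noise = sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
                alpha += (-(alpha - alpha_mean) / tau) * dt + noise
                # Reduced dynamo equation with cubic saturation.
                b += (alpha * b - b ** 3) * dt
                b_hist[i] = b
            return b_hist

        b = stochastic_dynamo()
        print("mean |B|:", np.abs(b).mean(),
              " fraction of time with |B| > 0.5:", (np.abs(b) > 0.5).mean())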

  10. The influence of design characteristics on statistical inference in nonlinear estimation: A simulation study based on survival data and hazard modeling

    DEFF Research Database (Denmark)

    Andersen, J.S.; Bedaux, J.J.M.; Kooijman, S.A.L.M.

    2000-01-01

    This paper describes the influence of design characteristics on the statistical inference for an ecotoxicological hazard-based model using simulated survival data. The design characteristics of interest are the number and spacing of observations (counts) in time, the number and spacing of exposure concentrations (within c(min) and c(max)), and the initial number of individuals at time 0 in each concentration. A comparison of the coverage probabilities for confidence limits arising from the profile-likelihood approach and the Wald-based approach is carried out. The Wald-based approach is very sensitive...
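
    A much simpler, hedged analogue of this coverage comparison (exponential survival times with no hazard model or design structure; the rates and replication counts are arbitrary) contrasts Wald and likelihood-ratio confidence limits as the number of individuals grows:

        # Hedged sketch: coverage of Wald vs. likelihood-ratio (profile-type)
        # confidence limits for the rate of exponentially distributed survival times.
        import numpy as np
        from scipy import stats

        def coverage(n_individuals, rate=0.2, n_rep=2000, seed=0):
            rng = np.random.default_rng(seed)
            chi2_crit = stats.chi2.ppf(0.95, df=1)
            z = stats.norm.ppf(0.975)
            cover_wald = cover_lr = 0
            for _ in range(n_rep):
                t = rng.exponential(1.0 / rate, size=n_individuals)
                mle = 1.0 / t.mean()
                se = mle / np.sqrt(n_individuals)          # Fisher-information SE
                cover_wald += (mle - z * se) <= rate <= (mle + z * se)
                # Log-likelihood ratio for the exponential rate parameter.
                def loglik(lam):
                    return n_individuals * np.log(lam) - lam * t.sum()
                lr = 2.0 * (loglik(mle) - loglik(rate))
                cover_lr += lr <= chi2_crit
            return cover_wald / n_rep, cover_lr / n_rep

        for n in (5, 10, 25, 100):
            w, p = coverage(n)
            print(f"n={n:>3d}  Wald coverage={w:.3f}  likelihood-ratio coverage={p:.3f}")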

  11. Discrepancy between simulated and observed ethane and propane levels explained by underestimated fossil emissions

    Science.gov (United States)

    Dalsøren, Stig B.; Myhre, Gunnar; Hodnebrog, Øivind; Myhre, Cathrine Lund; Stohl, Andreas; Pisso, Ignacio; Schwietzke, Stefan; Höglund-Isaksson, Lena; Helmig, Detlev; Reimann, Stefan; Sauvage, Stéphane; Schmidbauer, Norbert; Read, Katie A.; Carpenter, Lucy J.; Lewis, Alastair C.; Punjabi, Shalini; Wallasch, Markus

    2018-03-01

    Ethane and propane are the most abundant non-methane hydrocarbons in the atmosphere. However, their emissions, atmospheric distribution, and trends in their atmospheric concentrations are insufficiently understood. Atmospheric model simulations using standard community emission inventories do not reproduce available measurements in the Northern Hemisphere. Here, we show that observations of pre-industrial and present-day ethane and propane can be reproduced in simulations with a detailed atmospheric chemistry transport model, provided that natural geologic emissions are taken into account and anthropogenic fossil fuel emissions are assumed to be two to three times higher than is indicated in current inventories. Accounting for these enhanced ethane and propane emissions results in simulated surface ozone concentrations that are 5-13% higher than previously assumed in some polluted regions in Asia. The improved correspondence with observed ethane and propane in model simulations with greater emissions suggests that the level of fossil (geologic + fossil fuel) methane emissions in current inventories may need re-evaluation.

  12. PHYSICS OF NON-GAUSSIAN FIELDS AND THE COSMOLOGICAL GENUS STATISTIC

    International Nuclear Information System (INIS)

    James, J. Berian

    2012-01-01

    We report a technique to calculate the impact of distinct physical processes inducing non-Gaussianity on the cosmological density field. A natural decomposition of the cosmic genus statistic into an orthogonal polynomial sequence allows complete expression of the scale-dependent evolution of the topology of large-scale structure, in which effects including galaxy bias, nonlinear gravitational evolution, and primordial non-Gaussianity may be delineated. The relationship of this decomposition to previous methods for analyzing the genus statistic is briefly considered and the following applications are made: (1) the expression of certain systematics affecting topological measurements, (2) the quantification of broad deformations from Gaussianity that appear in the genus statistic as measured in the Horizon Run simulation, and (3) the study of the evolution of the genus curve for simulations with primordial non-Gaussianity. These advances improve the treatment of flux-limited galaxy catalogs for use with this measurement and further the use of the genus statistic as a tool for exploring non-Gaussianity.

  13. Large-eddy simulation of maritime deep tropical convection

    Directory of Open Access Journals (Sweden)

    Peter A Bogenschutz

    2009-12-01

    Full Text Available This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 x 205 km2, which is comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 x 2048 x 256 grid points with a horizontal grid spacing of 100 m and a vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near-equilibrium deep convection regime in 12 hours. The simulated cloud field exhibits a trimodal vertical distribution of deep, middle, and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike the benchmark LES, where the new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show a reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition

  14. ITSA: Internet Traffic Statistics Archive

    NARCIS (Netherlands)

    Hoogesteger, Martijn; de Oliveira Schmidt, R.; Pras, Aiko

    Motivated by the fact that comprehensive and long term Internet traffic measurements can be hard to obtain, we have proposed and developed the Internet Traffic Statistics Archive (ITSA). Since 2013, ITSA concentrates reports on high-level statistics of Internet traffic worldwide. Examples of

  15. Modelling thermionic emission by using a two-level mechanical system

    International Nuclear Information System (INIS)

    Battaglia, O.R.

    2008-01-01

    The Boltzmann factor is at the basis of a great deal of thermodynamics and statistical physics, both classical and quantum. It describes the behaviour of natural systems that exchange energy with their environment. But why does the expression have that specific form? The Feynman Lectures on Physics justify it heuristically by reference to the exponential-atmosphere example. Thermodynamics textbooks usually give a more or less complete explanation that mainly involves mathematical analysis, in which it is hard to see the logical flow. Moreover, the necessary mathematics is beyond the preparation of high school or college students. Here we present an experiment and a simulation aimed at deriving the Boltzmann factor expression and illustrating the fundamental concepts and principles of statistical mechanics. Experiments and simulations are used in order to visualize the mechanism involved; the experiments use easily available laboratory equipment, and the simulations are developed in NetLogo, a software environment that, besides having a very friendly interface, allows the user to interact easily with the algorithm as well as to modify it.
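
    A Python sketch of the same kind of agent-based demonstration is given below (the original simulation is written in NetLogo; the agent and step counts here are arbitrary): agents exchange energy quanta at random, and the resulting distribution of agent energies approaches the exponential (Boltzmann) form p(E) ~ exp(-E/<E>).

        # Agent-based illustration of the Boltzmann factor: random pairwise
        # exchange of energy quanta drives the energy distribution toward an
        # exponential, so successive occupation ratios become roughly constant.
        import numpy as np

        def exchange_simulation(n_agents=5000, quanta_per_agent=5, n_steps=300000, seed=0):
            rng = np.random.default_rng(seed)
            energy = np.full(n_agents, quanta_per_agent)
            for _ in range(n_steps):
                i, j = rng.integers(n_agents, size=2)
                if energy[i] > 0:          # move one quantum from agent i to agent j
                    energy[i] -= 1
                    energy[j] += 1
            return energy

        energy = exchange_simulation()
        counts = np.bincount(energy)
        # For a Boltzmann (exponential) distribution the ratio of successive
        # occupation numbers is roughly constant.
        print("occupation ratios:", [round(counts[k + 1] / counts[k], 2) for k in range(5)])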

  16. Statistics for Locally Scaled Point Patterns

    DEFF Research Database (Denmark)

    Prokesová, Michaela; Hahn, Ute; Vedel Jensen, Eva B.

    2006-01-01

    scale factor. The main emphasis of the present paper is on analysis of such models. Statistical methods are developed for estimation of scaling function and template parameters as well as for model validation. The proposed methods are assessed by simulation and used in the analysis of a vegetation...

  17. Bio-imaging and visualization for patient-customized simulations

    CERN Document Server

    Luo, Xiongbiao; Li, Shuo

    2014-01-01

    This book contains the full papers presented at the MICCAI 2013 workshop Bio-Imaging and Visualization for Patient-Customized Simulations (MWBIVPCS 2013). MWBIVPCS 2013 brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics, and Statistics. The contributions included in this book present and discuss new trends in those fields, using several methods and techniques, including the finite element method, similarity metrics, optimization processes, graphs, hidden Markov models, sensor calibration, fuzzy logic, data mining, cellular automata, active shape models, template matching, and level sets. These serve as tools to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modelling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis.  This boo...

  18. A Role for Chunk Formation in Statistical Learning of Second Language Syntax

    Science.gov (United States)

    Hamrick, Phillip

    2014-01-01

    Humans are remarkably sensitive to the statistical structure of language. However, different mechanisms have been proposed to account for such statistical sensitivities. The present study compared adult learning of syntax and the ability of two models of statistical learning to simulate human performance: Simple Recurrent Networks, which learn by…

  19. Phase flow and statistical structure of Galton-board systems

    International Nuclear Information System (INIS)

    Lue, A.; Brenner, H.

    1993-01-01

    Galton boards, found in museum exhibits devoted to science and technology, are often used to demonstrate visually the ubiquity of so-called "laws of probability" via an experimental realization of normal distributions. A detailed theoretical study of Galton-board phase-space dynamics and statistical behavior is presented. The study is based on a simple inelastic-collision model employing a particle falling through a spatially periodic lattice of rigid, convex scatterers. We show that such systems exhibit indeterminate behavior through the presence of strange attractors or strange repellers in phase space; nevertheless, we also show that these systems exhibit regular and predictable behavior under specific circumstances. Phase-space strange attractors, periodic attractors, and strange repellers are present in numerical simulations, confirming results anticipated from geometric analysis. The system's geometry (dictated by the lattice geometry and density as well as the direction of gravity) is observed to play a dominant role in stability, phase-flow topology, and statistical observations. Smale horseshoes appear to exist in the low-lattice-density limit and may exist in other regimes. These horseshoes are generated by homoclinic orbits whose existence is dictated by system characteristics. The horseshoes lead directly to deterministic chaos in the system. Strong evidence exists for ergodicity in all attractors. Phase-space complexities are manifested at all observed levels, particularly statistical ones. Consequently, statistical observations are critically dependent upon system details. Under well-defined circumstances, these observations display behavior which does not constitute a realization of the "laws of probability."
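
    For contrast with the deterministic collision dynamics studied in the paper, the idealized coin-flip model that museum exhibits actually illustrate can be sketched in a few lines (the ball and row counts are arbitrary):

        # Idealized Galton-board statistics: each ball is deflected left or right
        # at every row of pegs, so the final bin counts follow a binomial, hence
        # nearly normal, distribution.
        import numpy as np

        def galton_board(n_balls=100000, n_rows=20, seed=0):
            rng = np.random.default_rng(seed)
            # Each ball's bin is its number of rightward deflections over n_rows pegs.
            deflections = rng.integers(0, 2, size=(n_balls, n_rows))
            return np.bincount(deflections.sum(axis=1), minlength=n_rows + 1)

        counts = galton_board()
        for k, c in enumerate(counts):
            print(f"bin {k:2d}: {'#' * (c // 500)}")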

  20. Statistical properties of superimposed stationary spike trains.

    Science.gov (United States)

    Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan

    2012-06-01

    The Poisson process is an often-employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
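
    A hedged sketch of a PPD generator and of a superposition of such trains is given below (this is not the authors' efficient algorithm; the rate, dead-time, and duration are arbitrary choices). It also prints the ISI coefficient of variation, one of the second-order quantities discussed above.

        # Sketch: generate Poisson-with-dead-time (PPD) spike trains, superimpose
        # several of them, and compare ISI coefficients of variation (CV).
        import numpy as np

        def ppd_spike_train(rate, dead_time, duration, rng):
            """PPD: each ISI is dead_time plus an exponential interval, with the
            exponential rate chosen so that the overall firing rate equals `rate`."""
            lam_eff = rate / (1.0 - rate * dead_time)     # requires rate * dead_time < 1
            spikes, t = [], 0.0
            while True:
                t += dead_time + rng.exponential(1.0 / lam_eff)
                if t > duration:
                    return np.array(spikes)
                spikes.append(t)

        def isi_cv(spike_times):
            isi = np.diff(spike_times)
            return isi.std() / isi.mean()

        rng = np.random.default_rng(0)
        duration, rate, dead_time, n_trains = 1000.0, 10.0, 0.02, 10
        trains = [ppd_spike_train(rate, dead_time, duration, rng) for _ in range(n_trains)]
        superposition = np.sort(np.concatenate(trains))
        print("CV of a single PPD train:", round(isi_cv(trains[0]), 3))
        print("CV of the superposition: ", round(isi_cv(superposition), 3))
        print("(A Poisson process has CV = 1 by definition of the exponential ISI.)")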