Directory of Open Access Journals (Sweden)
Nelson Hauck Filho
2014-12-01
Researchers faced with the task of estimating the locations of individuals on continuous latent variables can rely on several statistical models described in the literature. However, weighing the costs and benefits of one model against its alternatives requires empirical information that is not always readily available. The aim of this simulation study was therefore to compare the performance of seven popular statistical models in providing adequate latent trait estimates when item difficulties are targeted at the sample mean or at the tails of the latent trait distribution. Results suggested an overall tendency of the models to provide more accurate estimates of the true latent scores when items are targeted at the sample mean of the latent trait distribution. The Rating Scale Model, the Graded Response Model, and Weighted Least Squares Mean- and Variance-adjusted Confirmatory Factor Analysis yielded the most reliable latent trait estimates, even when applied to items ill-suited to the sample distribution of the latent variable. These findings have important implications for some popular methodological practices in Psychology and related areas.
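A minimal sketch of the kind of comparison this abstract describes, under simplifying assumptions: a Rasch (1-PL) model standing in for the seven models studied, EAP scoring on a fixed quadrature grid, and invented sample sizes and difficulty values.

```python
# Hedged sketch: Monte Carlo comparison of latent-trait recovery when item
# difficulties are targeted at the sample mean vs. at the tails. The Rasch
# model, EAP scoring, and all numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20
theta = rng.normal(0.0, 1.0, n_persons)          # true latent traits

def simulate_and_score(difficulties):
    """Simulate Rasch responses, then score persons by EAP."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
    x = (rng.random((n_persons, n_items)) < p).astype(int)
    grid = np.linspace(-4, 4, 161)               # quadrature over theta
    pg = 1.0 / (1.0 + np.exp(-(grid[:, None] - difficulties[None, :])))
    loglik = x @ np.log(pg).T + (1 - x) @ np.log(1 - pg).T
    post = np.exp(loglik) * np.exp(-grid**2 / 2)  # N(0,1) prior weight
    post /= post.sum(axis=1, keepdims=True)
    eap = post @ grid
    return np.sqrt(np.mean((eap - theta) ** 2))   # RMSE vs. true scores

targeted = rng.normal(0.0, 0.5, n_items)          # difficulties near the mean
off_target = rng.choice([-2.5, 2.5], n_items)     # difficulties in the tails
print("RMSE, items at mean :", simulate_and_score(targeted))
print("RMSE, items at tails:", simulate_and_score(off_target))
```

Running this typically shows a smaller RMSE for the mean-targeted items, the overall tendency the study reports.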
Simulating metabolism with statistical thermodynamics.
Cannon, William R
2014-01-01
New methods are needed for large scale modeling of metabolism that predict metabolite levels and characterize the thermodynamics of individual reactions and pathways. Current approaches use either kinetic simulations, which are difficult to extend to large networks of reactions because of the need for rate constants, or flux-based methods, which have a large number of feasible solutions because they are unconstrained by the law of mass action. This report presents an alternative modeling approach based on statistical thermodynamics. The principles of this approach are demonstrated using a simple set of coupled reactions, and then the system is characterized with respect to the changes in energy, entropy, free energy, and entropy production. Finally, the physical and biochemical insights that this approach can provide for metabolism are demonstrated by application to the tricarboxylic acid (TCA) cycle of Escherichia coli. The reaction and pathway thermodynamics are evaluated and predictions are made regarding changes in concentration of TCA cycle intermediates due to 10- and 100-fold changes in the ratio of NAD+:NADH concentrations. Finally, the assumptions and caveats regarding the use of statistical thermodynamics to model non-equilibrium reactions are discussed.
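A small worked example of the mass-action thermodynamics invoked here: how ΔG = ΔG° + RT ln Q responds to 10- and 100-fold changes in an NAD+:NADH ratio. The ΔG° value and reference ratio are made-up illustration numbers, not values from the paper.

```python
# Hedged sketch: standard free-energy shift of a generic NAD+-coupled
# reaction under changes in the NAD+:NADH ratio; dG0 is assumed.
import numpy as np

R, T = 8.314e-3, 310.0            # kJ/(mol K), physiological temperature (K)
dG0 = -20.0                        # assumed standard free energy (kJ/mol)

def dG(ratio_nad_nadh, q_other=1.0):
    # For a reaction consuming NAD+ and producing NADH, Q ~ q_other / ratio.
    return dG0 + R * T * np.log(q_other / ratio_nad_nadh)

for r in (1.0, 10.0, 100.0):
    print(f"NAD+:NADH = {r:5.0f}  ->  dG = {dG(r):6.2f} kJ/mol")
```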
Quantum level statistics of pseudointegrable billiards
International Nuclear Information System (INIS)
Cheon, T.; Cohen, T.D.
1989-01-01
We study the spectral statistics of systems of two-dimensional pseudointegrable billiards. These systems are classically nonergodic, but nonseparable. It is found that such systems possess quantum spectra which are closely simulated by the Gaussian orthogonal ensemble. We discuss the implications of these results for the conjectured relation between classical chaos and quantum level statistics. We emphasize the importance of the semiclassical nature of any such relation.
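A minimal numerical version of the comparison the abstract makes: sample nearest-neighbor level spacings from GOE matrices and set them against the Wigner surmise. Matrix size and trial counts are arbitrary choices.

```python
# Hedged sketch: GOE nearest-neighbor spacing distribution vs. the Wigner
# surmise P(s) = (pi/2) s exp(-pi s^2 / 4), after a crude local unfolding.
import numpy as np

rng = np.random.default_rng(1)
N, trials, spacings = 200, 50, []
for _ in range(trials):
    a = rng.normal(size=(N, N))
    h = (a + a.T) / 2.0                   # real symmetric (GOE) matrix
    e = np.sort(np.linalg.eigvalsh(h))
    bulk = e[N // 4 : 3 * N // 4]         # stay in the bulk of the spectrum
    s = np.diff(bulk)
    spacings.append(s / s.mean())         # crude local unfolding
s = np.concatenate(spacings)
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
centers = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * centers * np.exp(-np.pi * centers**2 / 4)
print("max |P(s) - Wigner|:", np.abs(hist - wigner).max())
```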
Managing Macroeconomic Risks by Using Statistical Simulation
Directory of Open Access Journals (Sweden)
Merkaš Zvonko
2017-06-01
The paper analyzes the possibilities of using statistical simulation in the measurement of macroeconomic risks. At the level of the whole world, macroeconomic risks have increased significantly because of excessive imbalances. Using analytical statistical methods and Monte Carlo simulation, the authors interpret the collected data sets, comparing and analyzing them in order to mitigate potential risks. The empirical part of the study is a qualitative case study that uses statistical methods and Monte Carlo simulation for managing macroeconomic risks, which is the central theme of this work. Statistical simulation is necessary because the system for which a model must be specified is too complex for an analytical approach. The objective of the paper is to highlight the need to consider significant macroeconomic risks, particularly the number of unemployed in the society, the movement of gross domestic product, and the country's credit rating, and to use data previously processed by statistical methods, through statistical simulation, to analyze the existing model of managing macroeconomic risks and to suggest elements for developing a management model that minimizes the probability and consequences of emerging macroeconomic risks. The stochastic characteristics of the system, defined by random variables as input values drawn from probability distributions, require a large number of iterations over which the model output is recorded and the mathematical expectations calculated. The paper expounds the basic procedures and techniques of discrete statistical simulation applied to systems that can be characterized by a number of events, each representing a set of circumstances that changes the system's state, and the possibility of its application to the assessment of macroeconomic risks. The method has no
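A toy version of the Monte Carlo workflow described above. The distributions for unemployment, GDP growth, and a rating shock, and the composite risk index, are all invented for illustration; the paper's actual model and data are not reproduced.

```python
# Hedged sketch: Monte Carlo propagation of assumed macroeconomic inputs
# through a made-up composite risk index, reporting expectation and tails.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
unemployment = rng.normal(12.0, 2.0, n)            # percent, assumed
gdp_growth = rng.normal(1.5, 1.8, n)               # percent, assumed
rating_shock = rng.binomial(1, 0.08, n)            # downgrade indicator

# Assumed composite index: high unemployment, recession, or a downgrade.
risk_index = 0.5 * unemployment - 1.0 * gdp_growth + 4.0 * rating_shock
print("E[risk index]      :", risk_index.mean())
print("P(negative growth) :", (gdp_growth < 0).mean())
print("95th pct of risk   :", np.percentile(risk_index, 95))
```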
Statistical Literacy: Simulations with Dolphins
Strayer, Jeremy; Matuszewski, Amber
2016-01-01
In this article, Strayer and Matuszewski present a six-phase strategy that teachers can use to help students develop a conceptual understanding of inferential hypothesis testing through simulation. As Strayer and Matuszewski discuss the strategy, they describe each phase in general, explain how they implemented the phase while teaching their…
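The core classroom activity such articles build on is a simulation-based (randomization) test. A minimal sketch follows; the two groups and their values are invented stand-ins, not the article's dataset.

```python
# Hedged sketch: randomization test of a difference in group means, the
# simulation route to inferential hypothesis testing described above.
import numpy as np

rng = np.random.default_rng(3)
treatment = np.array([10, 7, 6, 8, 9, 7, 6, 5, 8, 9])   # assumed data
control = np.array([3, 5, 4, 2, 6, 5, 3, 4, 5, 2])
observed = treatment.mean() - control.mean()

pooled = np.concatenate([treatment, control])
count, reps = 0, 10_000
for _ in range(reps):
    rng.shuffle(pooled)                       # re-randomize group labels
    diff = pooled[:10].mean() - pooled[10:].mean()
    count += diff >= observed
print("one-sided p-value ~", count / reps)
```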
7th International Workshop on Statistical Simulation
Mignani, Stefania; Monari, Paola; Salmaso, Luigi
2014-01-01
The Department of Statistical Sciences of the University of Bologna in collaboration with the Department of Management and Engineering of the University of Padova, the Department of Statistical Modelling of Saint Petersburg State University, and INFORMS Simulation Society sponsored the Seventh Workshop on Simulation. This international conference was devoted to statistical techniques in stochastic simulation, data collection, analysis of scientific experiments, and studies representing broad areas of interest. The previous workshops took place in St. Petersburg, Russia in 1994, 1996, 1998, 2001, 2005, and 2009. The Seventh Workshop took place in the Rimini Campus of the University of Bologna, which is in Rimini’s historical center.
Topics in computer simulations of statistical systems
International Nuclear Information System (INIS)
Salvador, R.S.
1987-01-01
Several computer simulations studying a variety of topics in statistical mechanics and lattice gauge theories are performed. The first study describes a Monte Carlo simulation performed on Ising systems defined on Sierpinski carpets of dimension between one and four. The critical coupling and the exponent γ are measured as a function of dimension. The Ising gauge theory in d = 4 − ε, for ε → 0⁺, is then studied by performing a Monte Carlo simulation for the theory defined on fractals. A high-statistics Monte Carlo simulation of the three-dimensional Ising model is presented for lattices of sizes 8³ to 44³. All the data obtained agree completely, within statistical errors, with the forms predicted by finite-size scaling. Finally, a method to estimate numerically the partition function of statistical systems is developed.
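For concreteness, a bare-bones Metropolis simulation of the 3D Ising model of the kind the thesis ran at much larger scale; lattice size and sweep counts here are kept tiny for readability.

```python
# Hedged sketch: Metropolis Monte Carlo for the 3D Ising model on a small
# periodic lattice, measuring the mean |magnetization| after equilibration.
import numpy as np

rng = np.random.default_rng(4)
L, beta, sweeps = 8, 0.2216544, 200      # beta near the 3D critical coupling
spins = rng.choice([-1, 1], size=(L, L, L))

def sweep():
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, 3)
        nn = (spins[(i+1) % L, j, k] + spins[(i-1) % L, j, k]
              + spins[i, (j+1) % L, k] + spins[i, (j-1) % L, k]
              + spins[i, j, (k+1) % L] + spins[i, j, (k-1) % L])
        dE = 2.0 * spins[i, j, k] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

mags = []
for t in range(sweeps):
    sweep()
    if t >= sweeps // 2:                 # discard the equilibration half
        mags.append(abs(spins.mean()))
print("<|m|> ~", np.mean(mags))
```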
Statistical Emulator for Expensive Classification Simulators
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Simulators have since become more and more complex, and as a result a single run can be very expensive, taking days, weeks, or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to run on the simulator. These criterion methods allow an emulator to be built from only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
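One common criterion of the kind mentioned is to run the simulator next wherever the emulator is least certain. The sketch below uses Gaussian process regression on a 1-D toy function for simplicity (the paper itself targets classification simulators); the toy function and all settings are assumptions.

```python
# Hedged sketch: sequential design by maximum predictive uncertainty, with a
# cheap toy function standing in for an expensive simulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
expensive_simulator = lambda x: np.sin(3 * x) + 0.5 * x   # toy stand-in

X = rng.uniform(0, 3, 4).reshape(-1, 1)       # a few initial runs
y = expensive_simulator(X).ravel()
candidates = np.linspace(0, 3, 200).reshape(-1, 1)

for step in range(6):
    gp = GaussianProcessRegressor().fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]       # most informative point
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_simulator(x_next))
print("design points chosen:", np.round(X.ravel(), 2))
```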
Shnirelman peak in the level spacing statistics
International Nuclear Information System (INIS)
Chirikov, B.V.; Shepelyanskij, D.L.
1994-01-01
The first results on the statistical properties of quantum quasidegeneracy are presented. A physical interpretation of the Shnirelman theorem, which predicts bulk quasidegeneracy, is given. The conditions for a strong impact of the degeneracy on the quantum level statistics are formulated, which allows the application of the Shnirelman theorem to be extended to a broad class of quantum systems. 14 refs., 3 figs.
Electron Energy Level Statistics in Graphene Quantum Dots
De Raedt, H.; Katsnelson, M. I.
2008-01-01
Motivated by recent experimental observations of size quantization of electron energy levels in graphene quantum dots [7], we investigate the level statistics in the simplest tight-binding model for different dot shapes by computer simulation. The results are in reasonable agreement with the
Statistics of high-level scene context.
Greene, Michelle R
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as to influence the distribution of eye movements and patterns of brain activations. However, the relationships between objects and their scene environments have not yet been systematically quantified. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag-of-words level, where scenes are described by the list of objects contained within them; and the structural level, where the spatial distribution of and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature) and also best explained human patterns of categorization errors. Although a bag-of-words classifier had performance similar to that of human observers, it had a markedly different pattern of errors. Certain objects are more useful than others, however, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics.
Significance levels for studies with correlated test statistics.
Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S
2008-07-01
When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
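The permutation machinery at the heart of this paper is easy to sketch: estimate the null distribution of the largest of many correlated test statistics by permuting sample labels. The data, group sizes, and correlation structure below are invented, and the paper's conditioning-on-spread step is not shown.

```python
# Hedged sketch: permutation estimate of the global significance level for
# the max absolute mean-difference over many correlated tests.
import numpy as np

rng = np.random.default_rng(6)
n, m = 40, 500                                  # samples, correlated tests
shared = rng.normal(size=(n, 1))
data = 0.6 * shared + rng.normal(size=(n, m))   # correlated columns
labels = np.array([0] * 20 + [1] * 20)

def max_stat(y):
    d = data[y == 1].mean(0) - data[y == 0].mean(0)
    return np.abs(d).max()

observed = max_stat(labels)
perms = np.array([max_stat(rng.permutation(labels)) for _ in range(2000)])
print("global p-value ~", (perms >= observed).mean())
```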
Teaching Statistical Principles with a Roulette Simulation
Directory of Open Access Journals (Sweden)
Graham D Barr
2013-03-01
This paper uses the game of roulette in a simulation setting to teach students in an introductory statistics course some basic issues in theoretical and empirical probability. Using an Excel spreadsheet with embedded VBA (Visual Basic for Applications), one can simulate the empirical return and empirical standard deviation for a range of roulette bets over some predetermined number of plays. In particular, the paper illustrates the difference between playing strategies by contrasting a low-payout bet (say, a bet on "red") and a high-payout bet (say, a bet on a particular number), considering the expected return and volatility associated with each. The paper includes an Excel/VBA-based simulation of the roulette wheel in which students can make bets and monitor the return on the bets over one play or multiple plays. In addition, it includes a simulation of the casino house advantage for repeated multiple plays; that is, it allows students to see how casinos may derive a near-certain return equal to the house advantage by entertaining large numbers of bets, which systematically drives the volatility of the house advantage down to zero. This simulation has been shown to be especially effective at the University of Cape Town for teaching first-year statistics students the subtler points of probability, as well as for encouraging discussions of the risk-return trade-off facing gamblers. The program has also proved useful for teaching students the principles of theoretical and empirical probability, as well as an understanding of volatility.
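The same contrast is easy to reproduce outside Excel. A sketch in Python rather than the paper's VBA, for a European wheel (37 pockets); the 18-number "red" set is a stand-in, not the true wheel layout.

```python
# Hedged sketch: empirical return and volatility for an even-money bet vs. a
# single-number bet; both means approach -1/37, volatilities differ widely.
import numpy as np

rng = np.random.default_rng(7)
plays = 200_000
spins = rng.integers(0, 37, plays)              # pocket 0 plus 1..36

red = np.isin(spins, np.arange(1, 37, 2))       # assumed 18-number "red" set
payoff_red = np.where(red, 1.0, -1.0)           # even-money bet
payoff_straight = np.where(spins == 17, 35.0, -1.0)   # single-number bet

for name, p in [("red", payoff_red), ("straight-up", payoff_straight)]:
    print(f"{name:12s} mean {p.mean():+.4f}  std {p.std():.3f}")
```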
Atomic-level computer simulation
International Nuclear Information System (INIS)
Adams, J.B.; Rockett, Angus; Kieffer, John; Xu Wei; Nomura, Miki; Kilian, K.A.; Richards, D.F.; Ramprasad, R.
1994-01-01
This paper provides a broad overview of the methods of atomic-level computer simulation. It discusses methods of modelling atomic bonding, and computer simulation methods such as energy minimization, molecular dynamics, Monte Carlo, and lattice Monte Carlo. ((orig.))
Hidden Statistics Approach to Quantum Simulations
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent and structure data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world causes the system to decohere. That is why the hardware implementation of a quantum computer remains unsolved. The basic idea of this work is to create a new kind of dynamical system that preserves the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing its state variables to be measured using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing its state variables to be measured using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the
Temporal scaling and spatial statistical analyses of groundwater level fluctuations
Sun, H.; Yuan, L., Sr.; Zhang, Y.
2017-12-01
Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, through temporal scaling and spatial statistical analysis. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior that changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can depict the transient dynamics (i.e., the fractal, non-Gaussian property) of groundwater levels well, while fractional Brownian motion is inadequate for describing natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
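A simple relative of the scaling analysis described above is the classical rescaled-range (R/S) Hurst estimate. A sketch on synthetic white noise (H ≈ 0.5) standing in for groundwater-level increments; this is not the paper's TS-LHE method.

```python
# Hedged sketch: rescaled-range estimate of the Hurst exponent from the
# slope of log(R/S) vs. log(scale).
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(size=4096)                       # H ~ 0.5 for white noise

def rs(series):
    z = np.cumsum(series - series.mean())       # cumulative deviation
    return (z.max() - z.min()) / series.std()   # range over std

scales = [16, 32, 64, 128, 256, 512]
avg_rs = [np.mean([rs(c) for c in np.split(x, len(x) // s)]) for s in scales]
H = np.polyfit(np.log(scales), np.log(avg_rs), 1)[0]
print("estimated Hurst exponent ~", round(H, 2))
```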
Level and width statistics for a decaying chaotic system
International Nuclear Information System (INIS)
Mizutori, S.; Zelevinsky, V.G.
1993-01-01
The random matrix ensemble of discretized effective non-Hermitian Hamiltonians is used for studying local correlations and fluctuations of energies and widths in a quantum system where intrinsic levels are coupled to the continuum via a common decay channel. With the use of analytical estimates and numerical simulations, generic properties of statistical observables are obtained for the regimes of weak and strong continuum coupling as well as for the transitional region. Typical signals of the transition (width collectivization, disappearance of level repulsion at small spacings, and violation of uniformity along the energy axis) are discussed quantitatively. (orig.)
International Nuclear Information System (INIS)
Nemnes, G A; Anghel, D V
2010-01-01
We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems in non-equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size.
Noise level and MPEG-2 encoder statistics
Lee, Jungwoo
1997-01-01
Most source material in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, CCD cameras, and tape recording. The performance of an MPEG-2 encoder may be significantly degraded by this noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from cameras and tape players are analyzed, and models for the two types of noise are developed. The relationships between the noise, the scene type, and encoder statistics for a number of MPEG-2 parameters, such as motion vector magnitude, prediction error, and quantizer scale, are discussed. This analysis is intended as a tool for designing robust MPEG encoding algorithms for tasks such as preprocessing and rate control.
Energy-level statistics and time relaxation in quantum systems
International Nuclear Information System (INIS)
Gruver, J.L.; Cerdeira, H.A.; Aliaga, J.; Mello, P.A.; Proto, A.N.
1997-05-01
We study a quantum-mechanical system, prepared, at t = 0, in a model state, that subsequently decays into a sea of other states whose energy levels form a discrete spectrum with given statistical properties. An important quantity is the survival probability P(t), defined as the probability, at time t, to find the system in the original model state. Our main purpose is to analyze the influence of the discreteness and statistical properties of the spectrum on the behavior of P(t). Since P(t) itself is a statistical quantity, we restrict our attention to its ensemble average ⟨P(t)⟩, which is calculated analytically using random-matrix techniques, within certain approximations discussed in the text. We find, for ⟨P(t)⟩, an exponential decay, followed by a revival, governed by the two-point structure of the statistical spectrum, thus giving a nonzero asymptotic value for large t's. The analytic result compares well with a number of computer simulations, over a time range discussed in the text. (author). 17 refs, 1 fig
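The ensemble-averaged survival probability is straightforward to compute numerically for small matrices. A sketch using GOE-like spectra and a basis state as the "model state"; the paper's analytic treatment and approximations are not reproduced.

```python
# Hedged sketch: <P(t)> = <|sum_k |c_k|^2 exp(-i E_k t)|^2> averaged over an
# ensemble of small random real-symmetric (GOE-like) Hamiltonians.
import numpy as np

rng = np.random.default_rng(9)
N, trials = 200, 100
times = np.linspace(0, 5, 60)
P = np.zeros_like(times)
for _ in range(trials):
    a = rng.normal(size=(N, N))
    e, v = np.linalg.eigh((a + a.T) / np.sqrt(2 * N))
    c = v[0, :]                   # overlaps of the model (basis) state
    amp = (c**2)[None, :] * np.exp(-1j * np.outer(times, e))
    P += np.abs(amp.sum(axis=1)) ** 2
P /= trials
print("P(0) =", P[0], " asymptote ~", P[-10:].mean())
```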
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Bayesian statistical methods are attracting a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
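A toy version of the probabilistic Markov simulation described, written as a plain Monte Carlo in Python rather than in WinBUGS; the two-state structure, Beta prior, and cycle costs are invented for illustration.

```python
# Hedged sketch: probabilistic two-state Markov cohort model; an uncertain
# transition probability is drawn each iteration and total cost recorded.
import numpy as np

rng = np.random.default_rng(10)
draws, cycles = 5000, 20
cost_state = np.array([100.0, 900.0])      # assumed cycle costs (well, sick)
costs = []
for _ in range(draws):
    p_progress = rng.beta(20, 80)          # uncertain transition probability
    cohort = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(cycles):
        total += cohort @ cost_state
        cohort = cohort @ np.array([[1 - p_progress, p_progress],
                                    [0.0, 1.0]])
    costs.append(total)
costs = np.array(costs)
print("mean cost", costs.mean(), " 95% CI",
      np.percentile(costs, [2.5, 97.5]))
```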
Directory of Open Access Journals (Sweden)
P. Moreno Quintana
2002-05-01
The introduction of simulators into the country's instruction process faces a difficulty: the real impact of this kind of equipment on the acquisition of skills within the training process it serves is not known. Use of the two-level factorial statistical method yields a linear model for the response of efficiency, or qualification, as a function of the quantitative use of the various training media and their combinations. This model is validated at a calculated confidence level and can be optimized by the corresponding mathematical methods. To this end, a set of recommendations for organizing the experiments, gathered over repeated applications of the method, is presented. Key words: simulators, mathematical modeling, design of experiments.
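The two-level factorial fit named above amounts to least squares on coded ±1 factor levels. A minimal sketch; the two factors, the scores, and the single-replicate design are invented numbers, not the study's training data.

```python
# Hedged sketch: fitting the linear response model of a full 2^2 factorial
# design (intercept, two main effects, one interaction) by least squares.
import numpy as np

# Coded levels (-1/+1) for two training media, e.g. simulator vs. live hours.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
score = np.array([62.0, 74.0, 69.0, 86.0])         # assumed qualifications

A = np.column_stack([np.ones(4), X, X[:, 0] * X[:, 1]])  # design matrix
b, *_ = np.linalg.lstsq(A, score, rcond=None)
print("intercept, main effects, interaction:", np.round(b, 2))
```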
A Simulational approach to teaching statistical mechanics and kinetic theory
International Nuclear Information System (INIS)
Karabulut, H.
2005-01-01
A computer simulation demonstrating how the Maxwell-Boltzmann distribution is reached in gases from a nonequilibrium distribution is presented. The algorithm can be generalized to gas particles (atoms or molecules) with internal degrees of freedom, such as electronic excitations and vibrational-rotational energy levels. Another generalization of the algorithm is to a mixture of two different gases. By choosing the collision cross sections appropriately, one can create quasi-equilibrium distributions. For example, by making the same-atom cross sections large and the different-atom cross sections very small, one can create a mixture of two gases with different temperatures, where the two gases interact slowly and come to equilibrium over a long time. Similarly, for one kind of atom with internal degrees of freedom, one can create situations in which the internal degrees of freedom come to equilibrium much later than the translational degrees of freedom. In all these cases the equilibrium distribution given by the algorithm is the same as expected from statistical mechanics. The algorithm can also be extended to cover chemical equilibrium, where species A and B react to form AB molecules. The laws of chemical equilibrium can be observed in this simulation. The chemical equilibrium simulation can also help to teach the elusive concept of chemical potential.
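A generic toy in the same spirit, not the author's algorithm: random binary "collisions" that conserve total energy drive an initially monoenergetic gas toward the Boltzmann factor.

```python
# Hedged sketch: random pairwise energy exchange; the stationary energy
# distribution for this toy rule is exponential, exp(-E/<E>), i.e. the
# Boltzmann factor with kT = <E> = 1.
import numpy as np

rng = np.random.default_rng(11)
n, steps = 10_000, 200_000
energy = np.ones(n)                        # all particles start identical

for _ in range(steps):
    i, j = rng.integers(0, n, 2)
    if i != j:
        total = energy[i] + energy[j]
        energy[i] = rng.random() * total   # random repartition, E conserved
        energy[j] = total - energy[i]

# Successive unit-energy bins should shrink by a factor ~ exp(-1) ~ 0.37.
hist, _ = np.histogram(energy, bins=[0, 1, 2, 3, 4], density=True)
print("P(E) in unit bins:", np.round(hist, 3))
```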
Visualizing and Understanding Probability and Statistics: Graphical Simulations Using Excel
Gordon, Sheldon P.; Gordon, Florence S.
2009-01-01
The authors describe a collection of dynamic interactive simulations for teaching and learning most of the important ideas and techniques of introductory statistics and probability. The modules cover such topics as randomness, simulations of probability experiments such as coin flipping, dice rolling and general binomial experiments, a simulation…
Statistical inference of level densities from resolved resonance parameters
International Nuclear Information System (INIS)
Froehner, F.H.
1983-08-01
Level densities are most directly obtained by counting the resonances observed in the resolved resonance range. Even in the best measurements, however, weak levels are invariably missed, so that one has to estimate their number and add it to the raw count. The main categories of missing-level estimators are discussed in the present review, viz. (I) ladder methods, including those based on the theory of Hamiltonian matrix ensembles (Dyson-Mehta statistics); (II) methods based on comparison with artificial cross section curves (Monte Carlo simulation, Garrison's autocorrelation method); (III) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares, or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The language of mathematical statistics is employed to clarify the basis of, and the relationships between, the various techniques. Recent progress in the treatment of resolution effects, detection thresholds, and p-wave admixture is described. (orig.) [de]
A statistical-dynamical downscaling procedure for global climate simulations
International Nuclear Information System (INIS)
Frey-Buness, A.; Heimann, D.; Sausen, R.; Schumann, U.
1994-01-01
A statistical-dynamical downscaling procedure for global climate simulations is described. The procedure is based on the assumption that any regional climate is associated with a specific frequency distribution of classified large-scale weather situations. The frequency distributions are derived from multi-year episodes of low-resolution global climate simulations. Highly resolved regional distributions of wind and temperature are calculated with a regional model for each class of large-scale weather situation. They are statistically evaluated by weighting them with the corresponding climate-specific frequency. The procedure is applied, as an example, to the Alpine region for a global simulation of the present climate. (orig.)
An introduction to statistical computing a simulation-based approach
Voss, Jochen
2014-01-01
A comprehensive introduction to sampling-based methods in statistical computing. The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems. Sampling-based simulation techniques are now an invaluable tool for exploring statistical models. This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods. It also includes some advanced methods.
The semi-empirical low-level background statistics
International Nuclear Information System (INIS)
Tran Manh Toan; Nguyen Trieu Tu
1992-01-01
A semi-empirical statistical treatment of low-level background is proposed. It can be applied to evaluate the sensitivity of low-background systems, and to analyse the statistical error and the 'rejection' and 'accordance' criteria for processing low-level experimental data. (author). 5 refs, 1 figs
Information Geometry, Inference Methods and Chaotic Energy Levels Statistics
Cafaro, Carlo
2008-01-01
In this Letter, we propose a novel information-geometric characterization of chaotic (integrable) energy level statistics of a quantum antiferromagnetic Ising spin chain in a tilted (transverse) external magnetic field. Finally, we conjecture our results might find some potential physical applications in quantum energy level statistics.
Cognitive Transfer Outcomes for a Simulation-Based Introductory Statistics Curriculum
Backman, Matthew D.; Delmas, Robert C.; Garfield, Joan
2017-01-01
Cognitive transfer is the ability to apply learned skills and knowledge to new applications and contexts. This investigation evaluates cognitive transfer outcomes for a tertiary-level introductory statistics course using the CATALST curriculum, which exclusively used simulation-based methods to develop foundations of statistical inference. A…
Statistical 3D damage accumulation model for ion implant simulators
Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M
2003-01-01
A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.
Simulation Experiments in Practice: Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...
Monte Carlo Simulation for Statistical Decay of Compound Nucleus
Directory of Open Access Journals (Sweden)
Chadwick M.B.
2012-02-01
We perform Monte Carlo simulations of neutron and γ-ray emission from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between the emitted particles and γ-rays. It will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code, CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and angular momentum and parity are conserved. Some calculations for the fission fragment ¹⁴⁰Xe are shown as examples of the MCHF method, and the correlation between the neutrons and γ-rays is discussed.
Parametric Level Statistics in Random Matrix Theory: Exact Solution
International Nuclear Information System (INIS)
Kanzieper, E.
1999-01-01
During recent years, the theory of non-Gaussian random matrix ensembles has experienced sound progress motivated by new ideas in quantum chromodynamics (QCD) and mesoscopic physics. Invariant non-Gaussian random matrix models appear to describe universal features of the low-energy part of the spectrum of the Dirac operator in QCD, and electron level statistics in normal-conducting-superconducting hybrid structures. They also serve as a basis for constructing toy models of the universal spectral statistics expected at the edge of the metal-insulator transition. While conventional spectral statistics has received a detailed study in the context of RMT, much less is known about parametric level statistics in non-Gaussian random matrix models. In this communication we report an exact solution to the problem of parametric level statistics in unitary invariant, U(N), non-Gaussian ensembles of N x N Hermitian random matrices with either soft or strong level confinement. The solution is formulated within the framework of the orthogonal polynomial technique and is shown to depend on both the unfolded two-point scalar kernel and the level confinement through a double integral transformation, which in turn provides a constructive tool for the description of parametric level correlations in non-Gaussian RMT. In the case of soft level confinement, the formalism developed is potentially applicable to a study of parametric level statistics in an important class of random matrix models with finite level compressibility, expected to describe a disorder-induced metal-insulator transition. In random matrix ensembles with strong level confinement, the solution presented takes a particularly simple form in the thermodynamic limit: in this case, a new intriguing connection relation between the parametric level statistics and the scalar two-point kernel of an unperturbed ensemble is demonstrated to emerge. Extension of the results obtained to higher-order parametric level statistics is
Monte Carlo simulation in statistical physics an introduction
Binder, Kurt
1992-01-01
The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.
Technology for enhancing statistical reasoning at the school level
Biehler, R.; Ben-Zvi, D.; Bakker, A.; Makar, K.
2013-01-01
The purpose of this chapter is to provide an updated overview of digital technologies relevant to statistics education, and to summarize what is currently known about how these new technologies can support the development of students’ statistical reasoning at the school level. A brief literature
Multiple point statistical simulation using uncertain (soft) conditional data
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional on uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. We then suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, in which more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly on uncertain (soft) data, and hence provide a computationally attractive approach for the integration of information about a reservoir model.
Parametric Statistics of Individual Energy Levels in Random Hamiltonians
Smolyarenko, I. E.; Simons, B. D.
2002-01-01
We establish a general framework to explore parametric statistics of individual energy levels in disordered and chaotic quantum systems of unitary symmetry. The method is applied to the calculation of the universal intra-level parametric velocity correlation function and the distribution of level shifts under the influence of an arbitrary external perturbation.
International Nuclear Information System (INIS)
He Xin; Links, Jonathan M; Frey, Eric C
2010-01-01
Quantum noise as well as anatomic and uptake variability in patient populations limits observer performance on a defect detection task in myocardial perfusion SPECT (MPS). The goal of this study was to investigate the relative importance of these two effects by varying acquisition time, which determines the count level, and assessing the change in performance on a myocardial perfusion (MP) defect detection task using both mathematical and human observers. We generated ten sets of projections of a simulated patient population with count levels ranging from 1/128 to around 15 times a typical clinical count level to simulate different levels of quantum noise. For the simulated population we modeled variations in patient, heart and defect size, heart orientation and shape, defect location, organ uptake ratio, etc. The projection data were reconstructed using the OS-EM algorithm with no compensation or with attenuation, detector response and scatter compensation (ADS). The images were then post-filtered and reoriented to generate short-axis slices. A channelized Hotelling observer (CHO) was applied to the short-axis images, and the area under the receiver operating characteristics (ROC) curve (AUC) was computed. For each noise level and reconstruction method, we optimized the number of iterations and cutoff frequencies of the Butterworth filter to maximize the AUC. Using the images obtained with the optimal iteration and cutoff frequency and ADS compensation, we performed human observer studies for four count levels to validate the CHO results. Both CHO and human observer studies demonstrated that observer performance was dependent on the relative magnitude of the quantum noise and the patient variation. When the count level was high, the patient variation dominated, and the AUC increased very slowly with changes in the count level for the same level of anatomic variability. When the count level was low, however, quantum noise dominated, and changes in the count level
Quasiparticle features and level statistics of odd-odd nucleus
International Nuclear Information System (INIS)
Cheng Nanpu; Zheng Renrong; Zhu Shunquan
2001-01-01
The energy levels of the odd-odd nucleus ⁸⁴Y are calculated using the axially symmetric rotor plus quasiparticles model. Two standard statistical tests of Random Matrix Theory, the distribution function p(s) of the nearest-neighbor level spacings (NNS) and the spectral rigidity Δ₃, are used to explore the statistical properties of the energy levels. By analyzing the properties of p(s) and Δ₃ under various conditions, the authors find that the quasiparticle features mainly affect the statistical properties of the odd-odd nucleus ⁸⁴Y through the recoil term and the Coriolis force in this theoretical model, and that the chaotic degree of the energy levels decreases with decreasing Fermi energy and energy-gap parameters. The effect of the recoil term is small, while the Coriolis force plays a major role in the spectral structure of ⁸⁴Y.
Feature-Based Statistical Analysis of Combustion Simulation Data
Energy Technology Data Exchange (ETDEWEB)
Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T
2011-11-18
We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben
2017-09-15
Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power to identify these variants with small effects. However, it is often the case that a research group can only get approval for access to individual-level genotype data with a limited sample size (e.g. a few hundred or thousand). Meanwhile, summary statistics generated by single-variant-based analyses are becoming publicly available, and the sample sizes associated with these summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, for increasing the statistical power of identifying risk variants and improving the accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrate the advantages of IGESS over methods which take either individual-level data or summary statistics as input. We applied IGESS to an integrative analysis of Crohn's disease data from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and to improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS. Contact: zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
Characteristics of level-spacing statistics in chaotic graphene billiards.
Huang, Liang; Lai, Ying-Cheng; Grebogi, Celso
2011-03-01
A fundamental result in nonrelativistic quantum nonlinear dynamics is that the spectral statistics of quantum systems that possess no geometric symmetry, but whose classical dynamics are chaotic, are described by those of the Gaussian orthogonal ensemble (GOE) or the Gaussian unitary ensemble (GUE), in the presence or absence of time-reversal symmetry, respectively. For massless spin-half particles such as neutrinos in relativistic quantum mechanics in a chaotic billiard, the seminal work of Berry and Mondragon established the GUE nature of the level-spacing statistics, due to the combination of the chirality of Dirac particles and the confinement, which breaks the time-reversal symmetry. A question is whether the GOE or the GUE statistics can be observed in experimentally accessible, relativistic quantum systems. We demonstrate, using graphene confinements in which the quasiparticle motions are governed by the Dirac equation in the low-energy regime, that the level-spacing statistics are persistently those of GOE random matrices. We present extensive numerical evidence obtained from the tight-binding approach and a physical explanation for the GOE statistics. We also find that the presence of a weak magnetic field switches the statistics to those of GUE. For a strong magnetic field, Landau levels become influential, causing the level-spacing distribution to deviate markedly from the random-matrix predictions. Issues addressed also include the effects of a number of realistic factors on level-spacing statistics such as next nearest-neighbor interactions, different lattice orientations, enhanced hopping energy for atoms on the boundary, and staggered potential due to graphene-substrate interactions.
Monte Carlo Simulation in Statistical Physics An Introduction
Binder, Kurt
2010-01-01
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields (from physics and chemistry to traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
Statistical cluster analysis and diagnosis of nuclear system level performance
International Nuclear Information System (INIS)
Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.
1985-01-01
The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations makes it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data)
Universality of correlations of levels with discrete statistics
Brezin, Edouard; Kazakov, Vladimir
1999-01-01
We study the statistics of a system of N random levels with integer values, in the presence of a logarithmic repulsive potential of Dyson type. This problem arises in sums over representations (Young tableaux) of GL(N) in various matrix problems and in the study of statistics of partitions for the permutation group. The model is generalized to include an external source, and its correlators are found in closed form for any N. We reproduce the density of levels in the large N and double scalin...
Evaluation of clustering statistics with N-body simulations
International Nuclear Information System (INIS)
Quinn, T.R.
1986-01-01
Two series of N-body simulations are used to determine the effectiveness of various clustering statistics in revealing initial conditions from evolved models. All the simulations contained 16384 particles and were integrated with the PPPM code. One series is a family of models with power at only one wavelength. The family contains five models with the wavelength of the power separated by factors of √2. The second series is a family of all equal-power combinations of two wavelengths taken from the first series. The clustering statistics examined are the two-point correlation function, the multiplicity function, the nearest-neighbor distribution, the void probability distribution, the distribution of counts in cells, and the peculiar velocity distribution. It is found that the covariance function, the nearest-neighbor distribution, and the void probability distribution are relatively insensitive to the initial conditions. The distribution of counts in cells shows a little more sensitivity, but the multiplicity function is the best of the statistics considered for revealing the initial conditions.
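The first statistic on that list is simple to estimate on a point set. A sketch of the natural (DD/RR − 1) estimator on synthetic data; the actual study's 16384-particle PPPM outputs are not reproduced, and edge effects are ignored.

```python
# Hedged sketch: two-point correlation function via pair counts in the data
# (DD) against a Poisson comparison catalog (RR); xi ~ 0 for unclustered data.
import numpy as np

rng = np.random.default_rng(12)
n, box = 500, 1.0
data = rng.random((n, 3)) * box                 # stand-in "simulation" output
rand = rng.random((n, 3)) * box                 # Poisson comparison catalog

def pair_counts(p, q, edges):
    d = np.sqrt(((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)).ravel()
    return np.histogram(d, bins=edges)[0]

edges = np.linspace(0.02, 0.25, 12)             # lower edge excludes self-pairs
dd = pair_counts(data, data, edges)
rr = pair_counts(rand, rand, edges)
xi = dd / np.maximum(rr, 1) - 1.0
print(np.round(xi, 2))
```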
Simulation of statistical γ-spectra of highly excited rare earth nuclei
International Nuclear Information System (INIS)
Schiller, A.; Munos, G.; Guttormsen, M.; Bergholt, L.; Melby, E.; Rekstad, J.; Siem, S.; Tveter, T.S.
1997-05-01
The statistical γ-spectra of highly excited even-even rare earth nuclei are simulated by applying a level density and a γ-strength function appropriate to the given nucleus. Hindrance effects due to K-conservation are taken into account. Simulations are compared to experimental data from the ¹⁶³Dy(³He,α)¹⁶²Dy and ¹⁷³Yb(³He,α)¹⁷²Yb reactions. The influence of the K quantum number at higher energies is discussed. 21 refs., 7 figs., 2 tabs
Directory of Open Access Journals (Sweden)
Laura Badenes-Ribera
2018-06-01
Full Text Available Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same thing is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, their CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women, mean age of 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequent ES statistics mentioned were Cohen's d and R²/η², which can have outliers or show non-normality or violate statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plot and funnel plot) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not
Confidence Level Computation for Combining Searches with Small Statistics
Junk, Thomas
1999-01-01
This article describes an efficient procedure for computing approximate confidence levels for searches for new particles where the expected signal and background levels are small enough to require the use of Poisson statistics. The results of many independent searches for the same particle may be combined easily, regardless of the discriminating variables which may be measured for the candidate events. The effects of systematic uncertainty in the signal and background models are incorporated ...
Optimal allocation of testing resources for statistical simulations
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
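To make the sampling step concrete, the sketch below generates realizations of the population mean and covariance from a small synthetic data set and propagates them through a toy output function; the spread of the resulting output moment is the variance the allocation scheme tries to shrink. It uses scipy's multivariate t and inverse-Wishart distributions (the inverse-Wishart is a common way to realize covariance uncertainty; the abstract itself names the Wishart), and all data, dimensions and the output function are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical initial data: n observations of d correlated input variables.
n, d = 15, 2
data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.2], [1.2, 2.0]], size=n)
xbar = data.mean(axis=0)
S = np.cov(data, rowvar=False)

# Realizations of the population covariance (inverse-Wishart around S) and of
# the population mean (multivariate t around the sample mean).
n_draws = 2000
cov_draws = stats.invwishart.rvs(df=n - 1, scale=(n - 1) * S, size=n_draws)
mean_draws = stats.multivariate_t.rvs(loc=xbar, shape=S / n, df=n - d, size=n_draws)

# Propagate each realization through a toy response y = x1 * x2 and record the
# spread of the output mean caused purely by limited input data.
def output_mean(mu, cov, m=500):
    x = rng.multivariate_normal(mu, cov, size=m)
    return np.mean(x[:, 0] * x[:, 1])

est = np.array([output_mean(mu, c) for mu, c in zip(mean_draws, cov_draws)])
print("variance of the output moment due to limited data:", est.var())
```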
Statistical simulations of machine errors for LINAC4
Baylac, M.; Froidefond, E.; Sargsyan, E.
2006-01-01
LINAC 4 is a normal conducting H- linac proposed at CERN to provide a higher proton flux to the CERN accelerator chain. It should replace the existing LINAC 2 as the injector to the Proton Synchrotron Booster and can also operate in the future as the front end of the SPL, a 3.5 GeV Superconducting Proton Linac. LINAC 4 consists of a Radio-Frequency Quadrupole, a chopper line, a Drift Tube Linac (DTL) and a Cell-Coupled DTL, all operating at 352 MHz, and finally a Side Coupled Linac at 704 MHz. Beam dynamics was studied and optimized by performing end-to-end simulations. This paper presents statistical simulations of machine errors which were performed in order to validate the proposed design.
Introduction to statistical physics and to computer simulations
Casquilho, João Paulo
2015-01-01
Rigorous and comprehensive, this textbook introduces undergraduate students to simulation methods in statistical physics. The book covers a number of topics, including the thermodynamics of magnetic and electric systems; the quantum-mechanical basis of magnetism; ferrimagnetism, antiferromagnetism, spin waves and magnons; liquid crystals as a non-ideal system of technological relevance; and diffusion in an external potential. It also covers hot topics such as cosmic microwave background, magnetic cooling and Bose-Einstein condensation. The book provides an elementary introduction to simulation methods through algorithms in pseudocode for random walks, the 2D Ising model, and a model liquid crystal. Any formalism is kept simple and derivations are worked out in detail to ensure the material is accessible to students from subjects other than physics.
A neighborhood statistics model for predicting stream pathogen indicator levels.
Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S
2015-03-01
Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66 % of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1 % higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.
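The agreement measures quoted above (factor-of-2 agreement, order-of-magnitude outliers, bias of the mean) are straightforward to compute for any paired set of predictions and observations. A small sketch on synthetic data, not the Markov Random Field model itself:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired E. coli levels (e.g. CFU/100 mL) at 16 locations.
observed = 10 ** rng.uniform(1, 4, size=16)
predicted = observed * 10 ** rng.normal(0.0, 0.25, size=16)

log_ratio = np.log10(predicted / observed)
within_factor_2 = np.mean(np.abs(log_ratio) <= np.log10(2))
within_order = np.mean(np.abs(log_ratio) <= 1.0)   # within one order of magnitude
mean_bias_pct = 100 * (predicted.mean() - observed.mean()) / observed.mean()

print(f"within factor of 2: {within_factor_2:.0%}")
print(f"within one order of magnitude: {within_order:.0%}")
print(f"bias of the mean: {mean_bias_pct:+.1f}%")
```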
A New Approach to Monte Carlo Simulations in Statistical Physics
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
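The random walk in energy space referred to in [2] is the Wang-Landau algorithm. A minimal sketch for a small 2D Ising lattice follows; the flatness test and stopping rule are simplified, and ln g(E) is accumulated in log space to avoid overflow.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 8                      # small periodic 2D Ising lattice
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):             # E = -sum of nearest-neighbor spin products
    return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

n_bins = N + 1             # energies -2N ... 2N in steps of 4
bin_of = lambda E: (E + 2 * N) // 4

log_g = np.zeros(n_bins)   # ln g(E), known up to an additive constant
hist = np.zeros(n_bins)
f = 1.0                    # ln of the modification factor
E = energy(spins)

while f > 1e-4:
    for _ in range(10_000):
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Accept with probability min(1, g(E_old) / g(E_new)).
        if rng.random() < np.exp(log_g[bin_of(E)] - log_g[bin_of(E + dE)]):
            spins[i, j] *= -1
            E += dE
        log_g[bin_of(E)] += f
        hist[bin_of(E)] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():  # "flat" histogram
        hist[:] = 0
        f /= 2             # refine the modification factor and continue
# log_g now estimates ln g(E); canonical averages at any temperature follow
# by reweighting with g(E) * exp(-E / kT).
```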
Statistical properties of dynamical systems – Simulation and abstract computation
International Nuclear Information System (INIS)
Galatolo, Stefano; Hoyrup, Mathieu; Rojas, Cristóbal
2012-01-01
Highlights: ► A survey on results about computation and computability on the statistical properties of dynamical systems. ► Computability and non-computability results for invariant measures. ► A short proof for the computability of the convergence speed of ergodic averages. ► A kind of “constructive” version of the pointwise ergodic theorem. - Abstract: We survey an area of recent development, relating dynamics to theoretical computer science. We discuss some aspects of the theoretical simulation and computation of the long term behavior of dynamical systems. We will focus on the statistical limiting behavior and invariant measures. We present a general method allowing the algorithmic approximation at any given accuracy of invariant measures. The method can be applied in many interesting cases, as we shall explain. On the other hand, we exhibit some examples where the algorithmic approximation of invariant measures is not possible. We also explain how it is possible to compute the speed of convergence of ergodic averages (when the system is known exactly) and how this entails the computation of arbitrarily good approximations of points of the space having typical statistical behaviour (a sort of constructive version of the pointwise ergodic theorem).
Single photon laser altimeter simulator and statistical signal processing
Vacek, Michael; Prochazka, Ivan
2013-05-01
Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical for spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with very low signal-to-noise ratio, eliminating the need for large telescopes and a high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission-specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (~10 km) and low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (~10 km), where range evaluation repetition rates of ~100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
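The detection step itself is compact: arrival times from many shots are histogrammed and the range bin is the one that beats a Poisson false-alarm threshold. The sketch below illustrates this idea; all rates, gate widths and thresholds are invented for the example, not taken from the simulator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n_shots = 2000
gate_ns, bin_ns = 10_000.0, 10.0            # range gate and bin width (assumed)
n_bins = int(gate_ns / bin_ns)
true_bin = 420                              # echo position (hypothetical)
p_signal, bg_rate = 0.05, 2e-5              # signal photons/shot; background/ns

counts = np.zeros(n_bins)
for _ in range(n_shots):
    n_bg = rng.poisson(bg_rate * gate_ns)   # background/dark counts, uniform in gate
    np.add.at(counts, rng.integers(n_bins, size=n_bg), 1)
    if rng.random() < p_signal:             # far below one signal photon per shot
        counts[true_bin] += 1

# Flag bins that are improbable under the Poisson background level.
mu = counts.mean()                          # background per bin (signal is sparse)
threshold = stats.poisson.ppf(1 - 1e-6 / n_bins, mu)
print("detected bin(s):", np.flatnonzero(counts > threshold), "expected:", true_bin)
```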
Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories
Directory of Open Access Journals (Sweden)
Jenny Farmer
2017-11-01
Full Text Available Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for quantitative comparison of a pair of MD trajectories are made based on fluctuations for on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins as evident from extraordinarily low p-values.
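As an illustration of the divergence-based measures, the sketch below computes a Jensen-Shannon distance and a two-sample Kolmogorov-Smirnov test for two synthetic fluctuation samples; a plain histogram stands in for the nonparametric maximum entropy density estimate used in the paper.

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(4)
native = rng.normal(0.0, 1.00, size=5000)   # fluctuations of one residue, "native"
mutant = rng.normal(0.1, 1.15, size=5000)   # same residue, "mutant"

# Densities on a common grid, then the Jensen-Shannon distance between them.
edges = np.histogram_bin_edges(np.concatenate([native, mutant]), bins=60)
p, _ = np.histogram(native, bins=edges, density=True)
q, _ = np.histogram(mutant, bins=edges, density=True)
js = jensenshannon(p, q, base=2)            # 0 = identical, 1 = disjoint

ks_stat, p_value = stats.ks_2samp(native, mutant)
print(f"JS distance {js:.3f}, KS statistic {ks_stat:.3f}, p-value {p_value:.2e}")
```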
The Heuristics of Statistical Argumentation: Scaffolding at the Postsecondary Level
Pardue, Teneal Messer
2017-01-01
Language plays a key role in statistics and, by extension, in statistics education. Enculturating students into the practice of statistics requires preparing them to communicate results of data analysis. Statistical argumentation is one way of providing structure to facilitate discourse in the statistics classroom. In this study, a teaching…
Wave optics simulation of statistically rough surface scatter
Lanari, Ann M.; Butler, Samuel D.; Marciniak, Michael; Spencer, Mark F.
2017-09-01
The bidirectional reflectance distribution function (BRDF) describes optical scatter from surfaces by relating the incident irradiance to the exiting radiance over the entire hemisphere. Laboratory verification of BRDF models and experimentally populated BRDF databases are hampered by the sparsity of monochromatic sources and the ability to statistically control the surface features. Numerical methods are able to control surface features, have wavelength agility, and, via Fourier methods of wave propagation, may be used to fill the knowledge gap. Monte Carlo techniques, adapted from turbulence simulations, generate Gaussian-distributed and correlated surfaces with an area of 1 cm², RMS surface height of 2.5 μm, and correlation length of 100 μm. The surface is centered inside a Kirchhoff absorbing boundary with an area of 16 cm² to prevent wrap-around aliasing in the far field. These surfaces are uniformly illuminated at normal incidence with a unit-amplitude plane wave varying in wavelength from 3 μm to 5 μm. The resultant scatter is propagated to a detector in the far field utilizing multi-step Fresnel convolution and observed at angles from -2 μrad to 2 μrad. The far field scatter is compared to both a physical wave optics BRDF model (Modified Beckmann Kirchhoff) and two microfacet BRDF models (Priest, and Cook-Torrance). Modified Beckmann Kirchhoff, which accounts for diffraction, is consistent with simulated scatter for multiple wavelengths for RMS surface heights greater than λ/2. The microfacet models, which assume geometric optics, are less consistent across wavelengths. Both model types over-predict the far field scatter width for RMS surface heights less than λ/2.
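The surface-generation step can be sketched as FFT filtering of white noise: convolve with a Gaussian kernel chosen so that the filtered field has the target Gaussian correlation, then rescale to the target RMS height. Only the RMS height and correlation length below come from the abstract; the grid and sampling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

n, dx = 1024, 10e-6                  # ~1 cm x 1 cm patch (assumed sampling)
sigma_h, corr_len = 2.5e-6, 100e-6   # RMS height 2.5 um, correlation length 100 um

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
# exp(-2 r^2 / l^2): its self-convolution yields the target exp(-r^2 / l^2) correlation.
kernel = np.exp(-2 * (X**2 + Y**2) / corr_len**2)

noise = rng.normal(size=(n, n))
surface = np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(np.fft.ifftshift(kernel))).real
surface *= sigma_h / surface.std()   # enforce the RMS height exactly

print(f"RMS height {surface.std() * 1e6:.2f} um over a {n * dx * 100:.2f} cm aperture")
```

A useful sanity check is to estimate the autocorrelation width of the realization and compare it with corr_len before feeding the height map into the Fresnel propagation.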
Simulation of decreasing reactor power level with BWR simulator
International Nuclear Information System (INIS)
Suwoto; Zuhair; Rivai, Abu Khalid
2002-01-01
The characteristics of a BWR were studied using a desktop PC-based simulator program. This simulator is cheaper and more efficient than a full-scope simulator for analyzing the characteristics and dynamic response of a BWR during power level reduction. The dynamic responses of the BWR were investigated during the power level reduction from 100% FP (Full Power), i.e. 3926 MWth, to 0% FP in 25% steps at a rate of 1% FP/sec. The overall results for core flow rate, reactor steam flow, feed-water flow and turbine-generator power show a tendency proportional to the reduction of reactor power. These results show that reactor power control in a BWR can be achieved by controlling the re-circulation flow, which alters the density of the water used as coolant and moderator. Decreasing the re-circulation flow rate will decrease the coolant density, which introduces negative reactivity, and also affects the position of the control rods
Statistical analysis of global surface air temperature and sea level using cointegration methods
DEFF Research Database (Denmark)
Schmith, Torben; Johansen, Søren; Thejll, Peter
Global sea levels are rising, which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Due to physically-based models being unable to simulate observed sea level trends, semi-empirical models have been applied as an alternative for projecting future sea levels. There are in this, however, potential pitfalls due to the trending nature of the time series. We apply a statistical method called cointegration analysis to observed global sea level and surface air temperature, capable of handling such peculiarities. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s is exceptional in the sense that sea level and warming deviates from the expected...
Statistical Indicators for Religious Studies: Indicators of Level and Structure
Herteliu, Claudiu; Isaic-Maniu, Alexandru
2009-01-01
Using statistical indicators as vectors of information on the operational status of a phenomenon, including a religious one, is unanimously accepted. By introducing a system of statistical indicators we can also analyze the interfacing areas of a phenomenon. In this context, we have elaborated a system of statistical indicators specific to the…
Energy Technology Data Exchange (ETDEWEB)
Huenicke, B. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung
2008-11-06
This study aims at the estimation of the impact of different atmospheric factors on the past sea level variations (up to 200 years) in the Baltic Sea by statistically analysing the relationship between Baltic Sea level records and observational and proxy-based reconstructed climatic data sets. The focus lies on the identification and possible quantification of the contribution of sea-level pressure (wind), air temperature and precipitation to the low-frequency (decadal and multi-decadal) variability of Baltic Sea level. It is known that the wind forcing is the main factor explaining average Baltic Sea level variability at inter-annual to decadal timescales, especially in wintertime. In this thesis it is statistically estimated to what extent other regional climate factors contribute to the spatially heterogeneous Baltic Sea level variations around the isostatic trend at multi-decadal timescales. Although the statistical analysis cannot be completely conclusive, as the potential climate drivers are all statistically interrelated to some degree, the results indicate that precipitation should be taken into account as an explanatory variable for sea-level variations. On the one hand it has been detected that the amplitude of the annual cycle of Baltic Sea level has increased throughout the 20th century and precipitation seems to be the only factor among those analysed (wind through the SLP field, barometric effect, temperature and precipitation) that can account for this evolution. On the other hand, precipitation increases the ability to hindcast inter-annual variations of sea level in some regions and seasons, especially in the Southern Baltic in summertime. The mechanism by which precipitation exerts its influence on Baltic Sea level is not ascertained in this statistical analysis due to the lack of long salinity time series. This result, however, represents a working hypothesis that can be confirmed or disproved by long simulations of the Baltic Sea system - ocean
Statistical analysis of global surface temperature and sea level using cointegration methods
DEFF Research Database (Denmark)
Schmidt, Torben; Johansen, Søren; Thejll, Peter
2012-01-01
Global sea levels are rising, which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Due to the lack of representation of ice-sheet dynamics, present-day physically-based climate models are unable to simulate observed sea level trends, and semi-empirical models have been applied as an alternative for projecting future sea levels. There are in this, however, potential pitfalls due to the trending nature of the time series. We apply a statistical method called cointegration analysis to observed global sea level and land-ocean surface air temperature, capable of handling such peculiarities. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s...
A spatial scan statistic for nonisotropic two-level risk cluster.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2012-01-30
Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic can model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economic, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which can be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision for two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, and compared with the two other methods. In this practical study, the nonisotropic two-level method is the only one to precisely detect a high-risk area within a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.
CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY
Directory of Open Access Journals (Sweden)
ILEANA BRUDIU
2009-05-01
Full Text Available Parameter estimation with confidence intervals and the testing of statistical hypotheses are both used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis tests only give a "yes" or "no" answer to a question of statistical estimation, confidence intervals provide more information than a test statistic: they show the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
Prediction of dimethyl disulfide levels from biosolids using statistical modeling.
Gabriel, Steven A; Vilalai, Sirapong; Arispe, Susanna; Kim, Hyunook; McConnell, Laura L; Torrents, Alba; Peot, Christopher; Ramirez, Mark
2005-01-01
Two statistical models were used to predict the concentration of dimethyl disulfide (DMDS) released from biosolids produced by an advanced wastewater treatment plant (WWTP) located in Washington, DC, USA. The plant concentrates sludge from primary sedimentation basins in gravity thickeners (GT) and sludge from secondary sedimentation basins in dissolved air flotation (DAF) thickeners. The thickened sludge is pumped into blending tanks and then fed into centrifuges for dewatering. The dewatered sludge is then conditioned with lime before trucking out from the plant. DMDS, along with other volatile sulfur- and nitrogen-containing chemicals, is known to contribute to biosolids odors. These models identified the oxidation/reduction potential (ORP) values of a GT and DAF, the amount of sludge dewatered by centrifuges, and the blend ratio between GT-thickened sludge and DAF-thickened sludge in blending tanks as control variables. The accuracy of the developed regression models was evaluated by checking the adjusted R² of the regression as well as the signs of the coefficients associated with each variable. In general, both models explained observed DMDS levels in sludge headspace samples. The adjusted R² values of regression models 1 and 2 were 0.79 and 0.77, respectively. Coefficients for each regression model also had the correct sign. Using the developed models, plant operators can adjust the controllable variables to proactively decrease this odorant. Therefore, these models are a useful tool in biosolids management at WWTPs.
Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames
Im, Hong G.
2016-07-15
Direct numerical simulations (DNS) of turbulent combustion have evolved tremendously in the past decades, thanks to the rapid advances in high performance computing technology. Today’s DNS is capable of incorporating detailed reaction mechanisms and transport properties of hydrocarbon fuels, with physical parameter ranges approaching laboratory scale flames, thereby allowing direct comparison and cross-validation against laser diagnostic measurements. While these developments have led to significantly improved understanding of fundamental turbulent flame characteristics, there are increasing demands to explore combustion regimes at higher levels of turbulent Reynolds (Re) and Karlovitz (Ka) numbers, with a practical interest in new combustion engines driving towards higher efficiencies and lower emissions. The article attempts to provide a brief overview of the state-of-the-art DNS of turbulent premixed flames at high Re/Ka conditions, with an emphasis on homogeneous and isotropic turbulent flow configurations. Some important qualitative findings from numerical studies are summarized, new analytical approaches to investigate intensely turbulent premixed flame dynamics are discussed, and topics for future research are suggested. © 2016 Taylor & Francis.
Directory of Open Access Journals (Sweden)
Knyazheva Yu. V.
2014-06-01
Full Text Available The market economy creates a need for economic analysis first of all at the micro level, that is, at the level of individual enterprises, since enterprises are the basis of a market economy. Improving the queuing system of a trading enterprise is therefore an important economic problem. Analytical solutions to queuing problems described in the theory do not correspond to the real operating conditions of queuing systems. In this article, the customer service process and the settlement and cash service system of a trading enterprise are therefore optimized by means of numerical statistical simulation of the enterprise's queuing system. The article describes an integrated statistical numerical simulation model of the queuing systems of a trading enterprise working in nonstationary conditions, with reference to different distribution laws of the customer input stream. The model accounts for various behaviors of the customer output stream and includes a checkout service model that accounts for the cashier's rate of work, as well as a staff motivation model and profit-earning and profit-optimization models that take into account possible revenue and costs. When realized in a suitable software environment, the created model allows the most important parameters of the system to be optimized, and, given a convenient user interface, it can become a component of a decision-support system for rationalizing the organizational structure and optimizing the management of a trading enterprise.
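A minimal discrete-event sketch of the queueing core of such a model, a single checkout with a fixed cashier service rate fed by a nonstationary Poisson arrival stream (generated by thinning), is given below; all rates and the daily demand profile are hypothetical, and the motivation and profit blocks are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

T = 12 * 60.0                                      # a 12-hour trading day, in minutes
lam = lambda t: 1.0 + 0.8 * np.sin(np.pi * t / T)  # arrivals/min, peaking mid-day
lam_max = 1.8                                      # upper bound on lam(t)
mu = 1.5                                           # cashier service rate, customers/min

# Thinning: generate a nonhomogeneous Poisson arrival stream.
arrivals, t = [], 0.0
while True:
    t += rng.exponential(1.0 / lam_max)
    if t > T:
        break
    if rng.random() < lam(t) / lam_max:
        arrivals.append(t)

# Single-server FIFO queue: accumulate waiting times (Lindley recursion).
free_at, waits = 0.0, []
for a in arrivals:
    start = max(a, free_at)
    waits.append(start - a)
    free_at = start + rng.exponential(1.0 / mu)

print(f"{len(arrivals)} customers, mean wait {np.mean(waits):.2f} min, "
      f"95th percentile {np.percentile(waits, 95):.2f} min")
```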
Direct numerical simulation and statistical analysis of turbulent convection in lead-bismuth
Energy Technology Data Exchange (ETDEWEB)
Otic, I.; Grotzbach, G. [Forschungszentrum Karlsruhe GmbH, Institut fuer Kern-und Energietechnik (Germany)
2003-07-01
Improved turbulent heat flux models are required to develop and analyze the reactor concept of a lead-bismuth cooled Accelerator-Driven System. Because of the specific properties of many liquid metals, we still have no sensors for accurate measurements of the high frequency velocity fluctuations. So, the development of the turbulent heat transfer models which are required in our CFD (computational fluid dynamics) tools also needs data from direct numerical simulations of turbulent flows. We use new simulation results for the model problem of Rayleigh-Benard convection to show some peculiarities of the turbulent natural convection in lead-bismuth (Pr = 0.025). Simulations of this flow at sufficiently large turbulence levels became feasible only recently, because this flow requires the resolution of very small velocity scales together with the recording of long-wave structures for the slow changes in the convective temperature field. The results are analyzed regarding the principal convection and heat transfer features. They are also used to perform statistical analysis showing that the currently available modeling is indeed not adequate for these fluids. Based on the knowledge of the details of the statistical features of turbulence in this convection type, and using the two-point correlation technique, a proposal for an improved statistical turbulence model is developed, which is expected to better account for the peculiarities of heat transfer in turbulent convection in low Prandtl number fluids. (authors)
The Cosmogrid simulation: Statistical properties of small dark matter halos
Ishiyama, T.; Rieder, S.; Makino, J.; Portegies Zwart, S.; Groen, D.; Nitadori, K.; de Laat, C.; McMillan, S.; Hiraki, K.; Harfst, S.
2013-01-01
We present the results of the "Cosmogrid" cosmological N-body simulation suites based on the concordance LCDM model. The Cosmogrid simulation was performed in a 30 Mpc box with 2048³ particles. The mass of each particle is 1.28 × 10⁵ M⊙, which is sufficient to resolve ultra-faint dwarfs. We found
Pimenta, Rui; Nascimento, Ana; Vieira, Margarida; Costa, Elísio
2010-01-01
In previous works, we evaluated the statistical reasoning ability acquired by health sciences students carrying out their final undergraduate project. We found that these students achieved a good level of statistical literacy and reasoning in descriptive statistics. However, concerning inferential statistics, the students did not reach a similar level. Statistics educators therefore call for more effective ways to learn statistics, such as project-based investigations. These can be simulat...
The Communicability of Graphical Alternatives to Tabular Displays of Statistical Simulation Studies
Cook, Alex R.; Teo, Shanice W. L.
2011-01-01
Simulation studies are often used to assess the frequency properties and optimality of statistical methods. They are typically reported in tables, which may contain hundreds of figures to be contrasted over multiple dimensions. To assess the degree to which these tables are fit for purpose, we performed a randomised cross-over experiment in which statisticians were asked to extract information from (i) such a table sourced from the literature and (ii) a graphical adaptation designed by the authors, and were timed and assessed for accuracy. We developed hierarchical models accounting for differences between individuals of different experience levels (under- and post-graduate), within experience levels, and between different table-graph pairs. In our experiment, information could be extracted quicker and, for less experienced participants, more accurately from graphical presentations than tabular displays. We also performed a literature review to assess the prevalence of hard-to-interpret design features in tables of simulation studies in three popular statistics journals, finding that many are presented innumerately. We recommend simulation studies be presented in graphical form. PMID:22132184
Multi-Accuracy-Level Burning Plasma Simulations
International Nuclear Information System (INIS)
Artaud, J. F.; Basiuk, V.; Garcia, J.; Giruzzi, G.; Huynh, P.; Huysmans, G.; Imbeaux, F.; Johner, J.; Scheider, M.
2007-01-01
The design of a reactor grade tokamak is based on a hierarchy of tools. We present here three codes that are presently used for the simulations of burning plasmas. At the first level there is a 0-dimensional code that allows one to choose a reasonable range of global parameters; in our case the HELIOS code was used for this task. For the second level we have developed a mixed 0-D/1-D code called METIS that allows one to study the main properties of a burning plasma, including profiles and all heat and current sources, but always under the constraint of energy and other empirical scaling laws. METIS is a fast code that permits a large number of runs (a run takes about one minute) to design the main features of a scenario, or to validate the results of the 0-D code on a full time evolution. At the top level, we used the full 1.5-D suite of codes CRONOS, which gives access to a detailed study of the evolution of the plasma profiles. CRONOS can use a variety of modules for source terms and transport coefficient computation with different levels of complexity and accuracy: from simple estimators to highly sophisticated physics calculations. Thus it is possible to vary the accuracy of burning plasma simulations as a trade-off with computation time. A wide range of scenario studies can thus be made with CRONOS and then validated with post-processing tools like MHD stability analysis. We will present in this paper results of this multi-level analysis applied to the ITER hybrid scenario. This specific example will illustrate the importance of having several tools for the study of burning plasma scenarios, especially in a domain that present devices cannot access experimentally. (Author)
Statistical interpretation of low energy nuclear level schemes
Energy Technology Data Exchange (ETDEWEB)
Egidy, T von; Schmidt, H H; Behkami, A N
1988-01-01
Nuclear level schemes and neutron resonance spacings yield information on level densities and level spacing distributions. A total of 75 nuclear level schemes with 1761 levels with known spins and parities was investigated. The A-dependence of level density parameters is discussed. The spacing distributions of levels near the ground state indicate a transitional character between regular and chaotic properties, while chaos dominates near the neutron binding energy.
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Likert scales, levels of measurement and the "laws" of statistics.
Norman, Geoff
2010-12-01
Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, and correlation is faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments, and show that many studies, dating back to the 1930s, consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
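The robustness claim is easy to probe by simulation: draw two samples from the same 5-point Likert distribution many times and check that the t-test holds its nominal type I error alongside a nonparametric counterpart. The response profile below is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
probs = [0.10, 0.20, 0.30, 0.25, 0.15]       # hypothetical Likert response profile
n, reps, alpha = 30, 5000, 0.05

reject_t = reject_u = 0
for _ in range(reps):
    a = rng.choice(5, size=n, p=probs) + 1   # two groups from the SAME distribution
    b = rng.choice(5, size=n, p=probs) + 1
    reject_t += stats.ttest_ind(a, b).pvalue < alpha
    reject_u += stats.mannwhitneyu(a, b).pvalue < alpha

print(f"type I error: t-test {reject_t / reps:.3f}, Mann-Whitney {reject_u / reps:.3f}")
```

Both rejection rates landing near the nominal 0.05 on ordinal data is exactly the pattern the paper's argument predicts.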
Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.
2018-01-01
In this paper, we assess our traditional elementary statistics education and introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy. This test is generally accepted as a measure of statistical literacy. We also introduce a new teaching method in the elementary statistics class. Unlike the traditional elementary statistics course, we introduce a simulation-based inference method to conduct hypothesis testing. The literature has shown that this new teaching method works very well in increasing students' understanding of statistics.
Testing a statistical method of global mean paleotemperature estimations in a long climate simulation
Energy Technology Data Exchange (ETDEWEB)
Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2001-07-01
Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods the global mean temperature of the last 600 years has been recently estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows one to estimate the errors of the estimations as a function of the number of proxy data and the time scale at which the estimations are probably reliable. (orig.)
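A toy pseudoproxy version of this test is easy to set up: within a long synthetic record the true temperature is known everywhere, so a regression calibrated on a short "instrumental" window can be scored on the remaining centuries. The AR(1) temperature and proxy noise levels below are illustrative assumptions, not the climate model data used in the study.

```python
import numpy as np

rng = np.random.default_rng(9)

years = 600
temp = np.zeros(years)                      # red-noise stand-in for "true" temperature
for i in range(1, years):
    temp[i] = 0.7 * temp[i - 1] + rng.normal(0, 0.1)
# Eight noisy proxies, each a scaled version of temperature plus noise.
proxies = temp[:, None] * rng.uniform(0.5, 1.5, 8) + rng.normal(0, 0.15, (years, 8))

cal = slice(years - 120, years)             # last 120 years = instrumental period
X = np.column_stack([np.ones(years), proxies])
beta, *_ = np.linalg.lstsq(X[cal], temp[cal], rcond=None)

recon = X @ beta                            # reconstruction over all 600 years
err = recon[:cal.start] - temp[:cal.start]  # verification on the pre-instrumental part
rmse = np.sqrt(np.mean(err**2))
print(f"verification RMSE {rmse:.3f} vs. signal std {temp.std():.3f}")
```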
Solar radiation data - statistical analysis and simulation models
Energy Technology Data Exchange (ETDEWEB)
Mustacchi, C; Cena, V; Rocchi, M; Haghigat, F
1984-01-01
The activities consisted of collecting meteorological data on magnetic tape for ten European locations (with latitudes ranging from 42° to 56° N), analysing the multi-year sequences, developing mathematical models to generate synthetic sequences having the same statistical properties as the original data sets, and producing one or more Short Reference Years (SRYs) for each location. The meteorological parameters examined were (for all the locations) global + diffuse radiation on a horizontal surface, dry bulb temperature, and sunshine duration. For some of the locations additional parameters were available, namely global, beam and diffuse radiation on surfaces other than horizontal, wet bulb temperature, wind velocity, cloud type, and cloud cover. The statistical properties investigated were the mean, variance, autocorrelation, cross-correlation with selected parameters, and probability density function. For all the meteorological parameters, various mathematical models were built: linear regression and stochastic models of the AR and DAR type. In each case, the model with the best statistical behaviour was selected for the production of an SRY for the relevant parameter/location.
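For the AR-type models, the essential requirement is that a synthetic sequence reproduce the mean, variance and lag-1 autocorrelation of the original record. A minimal AR(1) sketch on stand-in daily data (not the project's tapes):

```python
import numpy as np

rng = np.random.default_rng(10)

# Stand-in daily clearness-index data, given some persistence artificially.
kt = np.clip(0.55 + 0.12 * rng.normal(size=3650), 0.05, 0.95)
for i in range(1, kt.size):
    kt[i] = 0.3 * kt[i - 1] + 0.7 * kt[i]

m, s = kt.mean(), kt.std()
z = (kt - m) / s
phi = np.corrcoef(z[:-1], z[1:])[0, 1]      # lag-1 autocorrelation estimate

# Generate a synthetic standardized AR(1) sequence, then restore mean/variance.
synth = np.empty_like(z)
synth[0] = rng.normal()
for i in range(1, synth.size):
    synth[i] = phi * synth[i - 1] + rng.normal(0, np.sqrt(1 - phi**2))
synth = m + s * synth

print(f"data:      mean {m:.3f}, std {s:.3f}, r1 {phi:.2f}")
print(f"synthetic: mean {synth.mean():.3f}, std {synth.std():.3f}, "
      f"r1 {np.corrcoef(synth[:-1], synth[1:])[0, 1]:.2f}")
```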
Heterogeneous Rock Simulation Using DIP-Micromechanics-Statistical Methods
Directory of Open Access Journals (Sweden)
H. Molladavoodi
2018-01-01
Full Text Available Rock as a natural material is heterogeneous. Rock material consists of minerals, crystals, cement, grains, and microcracks. Each component of rock has a different mechanical behavior under an applied loading condition. Therefore, the distribution of rock components has an important effect on rock mechanical behavior, especially in the post-peak region. In this paper, a rock sample was studied by digital image processing (DIP), micromechanics, and statistical methods. Using image processing, the volume fractions of the minerals composing the rock sample were evaluated precisely. The mechanical properties of the rock matrix were determined based on upscaling micromechanics. In order to consider the effect of rock heterogeneities on mechanical behavior, a heterogeneity index was calculated in the framework of a statistical method. A Weibull distribution function was fitted to the Young's modulus distribution of the minerals. Finally, statistical and Mohr–Coulomb strain-softening models were used simultaneously as the constitutive model in a DEM code. The acoustic emission, strain energy release, and the effect of rock heterogeneities on the post-peak behavior were investigated. The numerical results are in good agreement with experimental data.
Statistical error in simulations of Poisson processes: Example of diffusion in solids
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. The analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
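The generic content of such an expression, that a quantity proportional to the number of observed Poisson events carries a relative standard error of 1/√N, can be checked numerically in a few lines (a pure Poisson toy model, with none of the material-specific detail of the Gd-doped ceria case):

```python
import numpy as np

rng = np.random.default_rng(11)

for n_expected in [10, 100, 1000, 10000]:
    counts = rng.poisson(n_expected, size=20_000)   # many independent "simulations"
    rel_err = counts.std() / counts.mean()          # relative error of the event count
    print(f"<N> = {n_expected:5d}: measured {rel_err:.4f}, "
          f"1/sqrt(N) = {n_expected ** -0.5:.4f}")
```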
A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios
Directory of Open Access Journals (Sweden)
Hiroshi Shiraishi
2012-01-01
Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulation sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of taking care of the time dependency of the value function.
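Two of the building blocks, the AR(1) residual bootstrap of return scenarios and the CRRA-utility optimization over them, can be sketched for one risky asset and a single period; the multiperiod recursion through time-dependent value functions is omitted here, and all return parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(12)

# Stand-in return history following r_t = c + phi * r_{t-1} + e_t.
T = 500
r = np.zeros(T)
for t in range(1, T):
    r[t] = 0.002 + 0.3 * r[t - 1] + rng.normal(0, 0.02)

# Fit AR(1) by least squares and keep the residuals for the bootstrap.
X = np.column_stack([np.ones(T - 1), r[:-1]])
(c, phi), *_ = np.linalg.lstsq(X, r[1:], rcond=None)
resid = r[1:] - X @ np.array([c, phi])

# AR bootstrap: B one-step-ahead return scenarios from the last observation.
B = 5000
r_next = c + phi * r[-1] + rng.choice(resid, size=B, replace=True)

# CRRA utility U(W) = W^(1-gamma) / (1-gamma); grid search over the risky weight.
gamma, rf = 5.0, 0.0005
def expected_utility(w):
    wealth = 1.0 + rf + w * (r_next - rf)
    return np.mean(wealth ** (1 - gamma)) / (1 - gamma)

grid = np.linspace(0.0, 1.0, 101)
w_star = grid[np.argmax([expected_utility(w) for w in grid])]
print(f"fitted phi {phi:.2f}, CRRA-optimal risky weight {w_star:.2f}")
```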
Statistical gravitational waveform models: What to simulate next?
Doctor, Zoheyr; Farr, Ben; Holz, Daniel E.; Pürrer, Michael
2017-12-01
Models of gravitational waveforms play a critical role in detecting and characterizing the gravitational waves (GWs) from compact binary coalescences. Waveforms from numerical relativity (NR), while highly accurate, are too computationally expensive to produce to be directly used with Bayesian parameter estimation tools like Markov chain Monte Carlo and nested sampling. We propose a Gaussian process regression (GPR) method to generate reduced-order-model waveforms based only on existing accurate (e.g. NR) simulations. Using a training set of simulated waveforms, our GPR approach produces interpolated waveforms along with uncertainties across the parameter space. As a proof of concept, we use a training set of IMRPhenomD waveforms to build a GPR model in the 2-d parameter space of mass ratio q and equal-and-aligned spin χ1 = χ2. Using a regular, equally-spaced grid of 120 IMRPhenomD training waveforms in q ∈ [1, 3] and χ1 ∈ [-0.5, 0.5], the GPR mean approximates IMRPhenomD in this space to mismatches below 4.3 × 10⁻⁵. Our approach could in principle use training waveforms directly from numerical relativity. Beyond interpolation of waveforms, we also present a greedy algorithm that utilizes the errors provided by our GPR model to optimize the placement of future simulations. In a fiducial test case we find that using the greedy algorithm to iteratively add simulations achieves GPR errors that are ~1 order of magnitude lower than the errors from using Latin-hypercube or square training grids.
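Both ideas, GPR interpolation over the 2-d (q, χ) space and greedy placement of the next simulation at the point of largest predicted uncertainty, can be demonstrated with scikit-learn on a cheap analytic stand-in for the waveform quantity being modeled (the real training set would be NR or IMRPhenomD waveforms).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(13)
f = lambda q, chi: np.sin(2 * q) + 0.5 * chi * q   # stand-in for a waveform feature

# Initial regular training grid in q in [1, 3], chi in [-0.5, 0.5].
qs, chis = np.meshgrid(np.linspace(1, 3, 5), np.linspace(-0.5, 0.5, 5))
X = np.column_stack([qs.ravel(), chis.ravel()])
y = f(X[:, 0], X[:, 1])

cand = np.column_stack([rng.uniform(1, 3, 2000), rng.uniform(-0.5, 0.5, 2000)])
for _ in range(10):                                # greedy refinement loop
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 0.3]),
                                   normalize_y=True, alpha=1e-8).fit(X, y)
    _, std = gpr.predict(cand, return_std=True)
    k = np.argmax(std)                             # most uncertain candidate point
    X = np.vstack([X, cand[k]])                    # "run" a new simulation there
    y = np.append(y, f(*cand[k]))
    cand = np.delete(cand, k, axis=0)

print(f"largest predicted uncertainty after refinement: {std.max():.2e}")
```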
Statistical Analysis of Detailed 3-D CFD LES Simulations with Regard to CCV Modeling
Directory of Open Access Journals (Sweden)
Vítek Oldřich
2016-06-01
Full Text Available The paper deals with statistical analysis of a large amount of detailed 3-D CFD data in terms of cycle-to-cycle variations (CCVs). These data were obtained by means of LES calculations of many consecutive cycles. Due to the non-linear nature of the Navier-Stokes equation set, there is relatively significant CCV. Hence, every cycle is slightly different; this leads to the requirement to perform a statistical analysis based on an ensemble-averaging procedure, which enables a better understanding of CCV in an ICE, including its quantification. The data obtained from the averaging procedure provide results at different levels of space resolution. The procedure is applied locally, i.e., in every cell of the mesh, hence there is detailed CCV information at the local level; such information can be compared with RANS simulations. Next, volume/mass averaging provides information at specific locations, e.g., the gap between the electrodes of a spark plug. Finally, volume/mass averaging over the whole combustion chamber leads to global information which can be compared with experimental data or with results of system simulation tools (which are based on a 0-D/1-D approach).
Exclusive observables from a statistical simulation of energetic nuclear collisions
International Nuclear Information System (INIS)
Fai, G.
1983-01-01
Exclusive observables are calculated in the framework of a statistical model for medium-energy nuclear collisions. The collision system is divided into a few (participant/spectator) sources, that are assumed to disassemble independently. Sufficiently excited sources explode into pions, nucleons, and composite, possibly particle unstable, nuclei. The different final states compete according to their microcanonical weight. Less excited sources, and the unstable explosion products, deexcite via light-particle evaporation. The model has been implemented as a Monte Carlo computer code that is sufficiently efficient to permit generation of large event samples. Some illustrative applications are discussed. (author)
Statistics of LES simulations of large wind farms
DEFF Research Database (Denmark)
Andersen, Søren Juhl; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming
2016-01-01
The statistical moments appear to collapse and hence the turbulence inside large wind farms can potentially be scaled accordingly. The thrust coefficient is estimated by two different reference velocities and the generic CT expression by Frandsen. A reference velocity derived from the power production is shown to give very good agreement and furthermore enables a very good estimation of the thrust force using only the steady CT-curve, even for very short time samples. Finally, the effective turbulence inside large wind farms and the equivalent loads are examined.
Directory of Open Access Journals (Sweden)
Fiser Ondrej
2011-01-01
Full Text Available Long-term monthly and annual statistics of the attenuation of electromagnetic waves that have been obtained from 6 years of measurements on a free space optical path, 853 meters long, with a wavelength of 850 nm and on a precisely parallel radio path with a frequency of 58 GHz are presented. All the attenuation events observed are systematically classified according to the hydrometeor type causing the particular event. Monthly and yearly propagation statistics on the free space optical path and radio path are obtained. The influence of individual hydrometeors on attenuation is analysed. The obtained propagation statistics are compared to the calculated statistics using ITU-R models. The calculated attenuation statistics both at 850 nm and 58 GHz underestimate the measured statistics for higher attenuation levels. The availability performance of a simulated hybrid FSO/RF system is analysed based on the measured data.
Employment of Lithuanian Statistical Data Into Tax-Benefit Micro-Simulation Models
Directory of Open Access Journals (Sweden)
Viginta Ivaškaitė-Tamošiūnė
2013-01-01
Full Text Available In this study, we aim to assess the “best fit” of the existing Lithuanian micro-datasets for constructing a national micro-simulation model. Specifically, we compare and evaluate the potential of two state-level representative micro-data surveys in terms of their potential to simulate Lithuanian direct taxes, social contributions and social benefits. Both selected datasets contain rich information on the socio-economic and demographic conditions of the country: the Household Budget Survey (HBS) for the years 2004 and 2005 and the European Community Statistics on Income and Living Conditions (EU-SILC) in Lithuania for the year 2005. The selected databases offer the most comprehensive range of income and other socio-demographic attributes needed for the simulation of tax and contribution payers/amounts and benefit recipients/amounts. The evaluation of the capacity of the datasets to simulate these measures is done by a comparative statistical analysis. Among the comparative categories are definitions (of households, incomes), survey collection modes, the level of aggregation of various variables, and the corresponding demographic and income variables and amounts. The comparative analysis of the HBS and EU-SILC datasets shows that, despite embedded differences and shortages regarding the simulation capacities of both surveys, these datasets contain valuable and sufficient information for the purpose of simulating Lithuanian tax-benefit policies. In general, a conclusion can be drawn that the HBS offers higher possibilities for simulating the Lithuanian tax-benefit system. This dataset contains more detailed national income categories (i.e. recipients of maternity/paternity insurance, diverse pensions, etc.), information on which is not available in the EU-SILC. The latter dataset does not contain components specific to the national policy system, but offers information on income aggregates, such as old-age pensions, social exclusion benefits, etc.
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
Equilibrium statistical mechanics of strongly coupled plasmas by numerical simulation
International Nuclear Information System (INIS)
DeWitt, H.E.
1977-01-01
Numerical experiments using the Monte Carlo method have led to systematic and accurate results for the thermodynamic properties of strongly coupled one-component plasmas and mixtures of two nuclear components. These talks are intended to summarize the results of Monte Carlo simulations from Paris and from Livermore. Simple analytic expressions for the equation of state and other thermodynamic functions have been obtained in which there is a clear distinction between a lattice-like static portion and a thermal portion. The thermal energy for the one-component plasma has a simple power dependence on temperature, (kT)^(3/4), that is identical to Monte Carlo results obtained for strongly coupled fluids governed by repulsive 1/r^n potentials. For two-component plasmas the ion-sphere model is shown to accurately represent the static portion of the energy. Electron screening is included in the Monte Carlo simulations using linear response theory and the Lindhard dielectric function. Free energy expressions have been constructed for one- and two-component plasmas that allow easy computation of all thermodynamic functions.
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Orietta Zaniolo
2006-03-01
Full Text Available Significant advances in the management of hypercholesterolemia have been made possible by the development of statins, 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors. More recently, statins have demonstrated benefit in primary and secondary prevention of cardiovascular disease also in patients without hypercholesterolemia. Statins therefore help to reduce the impact of cardiovascular disease on morbidity, mortality and social costs. Statins inhibit HMG-CoA reductase competitively, reduce LDL levels more than other cholesterol-lowering drugs, and lower triglyceride levels in hypertriglyceridemic patients. Prescribing statins as first-line therapy in the management of hypercholesterolemia, as part of a more comprehensive cardiovascular disease prevention program, is widely recommended by international guidelines (e.g. the National Cholesterol Education Program (NCEP) Adult Treatment Panel (ATP) III reports). Currently five statins are available in Italy: atorvastatin, fluvastatin, pravastatin, rosuvastatin and simvastatin; each of them presents some differences in physical and chemical characteristics (solubility), pharmacokinetics (absorption, protein binding, metabolism and excretion) and pharmacodynamics (pleiotropic effects). Compared to other statins, fluvastatin extended-release (RP 80 mg) provides equal efficacy in lowering total cholesterol and low-density lipoprotein cholesterol (LDL-C), with an important action on triglyceride (TG) levels and superior increases in HDL-C levels, reducing the incidence of major adverse cardiac events (MACE). The aim of this study is to outline an updated therapeutic and pharmacoeconomic profile of fluvastatin, particularly regarding the extended-release (RP 80 mg) formulation.
Statistical models to predict flows at monthly level in Salvajina
International Nuclear Information System (INIS)
Gonzalez, Harold O
1994-01-01
Linear regression models are proposed and evaluated at the monthly level that allow flows at Salvajina to be predicted, based on predictor variables such as the pressure difference between Darwin and Tahiti, precipitation in Piendamo (Cauca), temperature at Port Chicama (Peru), and pressure in Tahiti.
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Digital Repository Service at National Institute of Oceanography (India)
Srinivas, K.; Das, V.K.; DineshKumar, P.K.
This study investigates the suitability of statistical models for their predictive potential for the monthly mean sea level at different stations along the west and east coasts of the Indian subcontinent. Statistical modelling of the monthly mean...
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
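To make the estimation problem concrete, the sketch below generates data from a two-level random-intercept logistic model, the simplest member of the model family compared in the article. All names and effect sizes are invented for illustration; a real comparison would then fit these data under each estimation method with, e.g., SAS GLIMMIX or R lme4.

```python
import numpy as np

# Simulate a two-level random-intercept logistic regression dataset.
rng = np.random.default_rng(42)
n_groups, n_per = 100, 30
beta0, beta1 = -1.0, 0.8                 # fixed intercept and slope
sigma_u = 1.2                            # SD of group-level random intercepts

u = rng.normal(0.0, sigma_u, n_groups)   # one random intercept per group
group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)    # individual-level covariate
eta = beta0 + beta1 * x + u[group]       # linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))  # binary outcome

print(f"overall event rate: {y.mean():.3f}")
```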
Distribution-level electricity reliability: Temporal trends using statistical analysis
International Nuclear Information System (INIS)
Eto, Joseph H.; LaCommare, Kristina H.; Larsen, Peter; Todd, Annika; Fisher, Emily
2012-01-01
This paper helps to address the lack of comprehensive, national-scale information on the reliability of the U.S. electric power system by assessing trends in U.S. electricity reliability based on the information reported by electric utilities on power interruptions experienced by their customers. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. We find that reported annual average duration and annual average frequency of power interruptions have been increasing over time at a rate of approximately 2% annually. We find that, independent of this trend, installation or upgrade of an automated outage management system is correlated with an increase in the reported annual average duration of power interruptions. We also find that reliance on IEEE Standard 1366-2003 is correlated with higher reported reliability compared to reported reliability not using the IEEE standard. However, we caution that we cannot attribute reliance on the IEEE standard as having caused or led to higher reported reliability, because we could not separate the effect of reliance on the IEEE standard from other utility-specific factors that may be correlated with it. Highlights: we assess trends in electricity reliability based on the information reported by the electric utilities; we use rigorous statistical techniques to account for utility-specific differences; we find modest declines in reliability when analyzing interruption duration and frequency experienced by utility customers; installation or upgrade of an OMS is correlated with an increase in reported duration of power interruptions; and reliance on IEEE Standard 1366 is correlated with higher reported reliability.
Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation
Energy Technology Data Exchange (ETDEWEB)
Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)
2005-09-26
The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained expressions for the mean square displacement and effective exponent as functions of time and percolation probability by statistical analysis, and made a comparison with simulations. We also obtained a reduced time with which to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.
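A minimal sketch of such a simulation follows, assuming the usual kinetic growth rule: the walker starts at the lattice centre and repeatedly moves to a randomly chosen occupied, unvisited neighbour until it is trapped. Lattice size, occupation probability and step budget are illustrative.

```python
import numpy as np

# Growing self-avoiding walk (GSAW) on a 2-D site percolation lattice.
rng = np.random.default_rng(1)
size, p = 200, 0.8                         # lattice edge, occupation probability
lattice = rng.random((size, size)) < p
lattice[size // 2, size // 2] = True       # make sure the start site is occupied

def gsaw(max_steps=10_000):
    visited = np.zeros_like(lattice, dtype=bool)
    x = y = size // 2
    visited[x, y] = True
    path = [(x, y)]
    for _ in range(max_steps):
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < size and 0 <= y + dy < size
                and lattice[x + dx, y + dy] and not visited[x + dx, y + dy]]
        if not nbrs:                       # walker is trapped
            break
        x, y = nbrs[rng.integers(len(nbrs))]
        visited[x, y] = True
        path.append((x, y))
    return np.asarray(path)

path = gsaw()
r2 = ((path[-1] - path[0]) ** 2).sum()     # square end-to-end displacement
print(f"walk length {len(path)}, R^2 = {r2}")
```

Averaging R^2(t) over many walkers and many lattice realizations gives the mean square displacement and effective exponent discussed above.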
Random matrix theory of the energy-level statistics of disordered systems at the Anderson transition
Energy Technology Data Exchange (ETDEWEB)
Canali, C M
1995-09-01
We consider a family of random matrix ensembles (RME) invariant under similarity transformations and described by the probability density P(H) ∝ exp[−Tr V(H)]. Dyson's mean field theory (MFT) of the corresponding plasma model of eigenvalues is generalized to the case of a weak confining potential, V(ε) ≈ (A/2) ln²(ε). The eigenvalue statistics derived from MFT are shown to deviate substantially from the classical Wigner-Dyson statistics when A < 1. By performing systematic Monte Carlo simulations on the plasma model, we compute all the relevant statistical properties of the RME with weak confinement. For A_c ≈ 0.4 the distribution function of the level spacings (LSDF) coincides in a large energy window with the energy LSDF of the three-dimensional Anderson model at the metal-insulator transition. For the same A = A_c, the RME eigenvalue-number variance is linear and its slope is equal to 0.32 ± 0.02, which is consistent with the value found for the Anderson model at the critical point. (author). 51 refs, 10 figs.
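The Wigner-Dyson reference point of such comparisons is easy to reproduce numerically: the sketch below diagonalizes a GOE matrix and compares the bulk nearest-neighbour spacing histogram with the Wigner surmise. The crude mean-spacing unfolding of the bulk is a simplification assumed here for brevity.

```python
import numpy as np

# Empirical nearest-neighbour level-spacing statistics of a GOE matrix.
rng = np.random.default_rng(7)
n = 2000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)          # symmetrize; GOE-like normalization
ev = np.linalg.eigvalsh(H)

bulk = ev[n // 4: 3 * n // 4]           # keep the spectral bulk only
s = np.diff(bulk)
s /= s.mean()                           # crude unfolding to unit mean spacing

# Compare with the Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4).
hist, edges = np.histogram(s, bins=30, range=(0.0, 3.0), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
wigner = (np.pi / 2) * mid * np.exp(-np.pi * mid ** 2 / 4)
print(f"max |histogram - surmise| = {np.abs(hist - wigner).max():.3f}")
```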
Directory of Open Access Journals (Sweden)
Heather E. Driscoll
2017-08-01
Full Text Available Here we describe microarray expression data (raw and normalized, experimental metadata, and gene-level data with expression statistics from Saccharomyces cerevisiae exposed to simulated asbestos mine drainage from the Vermont Asbestos Group (VAG Mine on Belvidere Mountain in northern Vermont, USA. For nearly 100 years (between the late 1890s and 1993, chrysotile asbestos fibers were extracted from serpentinized ultramafic rock at the VAG Mine for use in construction and manufacturing industries. Studies have shown that water courses and streambeds nearby have become contaminated with asbestos mine tailings runoff, including elevated levels of magnesium, nickel, chromium, and arsenic, elevated pH, and chrysotile asbestos-laden mine tailings, due to leaching and gradual erosion of massive piles of mine waste covering approximately 9 km2. We exposed yeast to simulated VAG Mine tailings leachate to help gain insight on how eukaryotic cells exposed to VAG Mine drainage may respond in the mine environment. Affymetrix GeneChip® Yeast Genome 2.0 Arrays were utilized to assess gene expression after 24-h exposure to simulated VAG Mine tailings runoff. The chemistry of mine-tailings leachate, mine-tailings leachate plus yeast extract peptone dextrose media, and control yeast extract peptone dextrose media is also reported. To our knowledge this is the first dataset to assess global gene expression patterns in a eukaryotic model system simulating asbestos mine tailings runoff exposure. Raw and normalized gene expression data are accessible through the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO Database Series GSE89875 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE89875.
Hildreth, Laura A.; Robison-Cox, Jim; Schmidt, Jade
2018-01-01
This study examines the transferability of results from previous studies of simulation-based curriculum in introductory statistics using data from 3,500 students enrolled in an introductory statistics course at Montana State University from fall 2013 through spring 2016. During this time, four different curricula, a traditional curriculum and…
Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.
2009-01-01
We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data
Improving Statistics Education through Simulations: The Case of the Sampling Distribution.
Earley, Mark A.
This paper presents a summary of action research investigating statistics students' understandings of the sampling distribution of the mean. With four sections of an introductory Statistics in Education course (n=98 students), a computer simulation activity (R. delMas, J. Garfield, and B. Chance, 1999) was implemented and evaluated to show…
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes, precautionary stock fees and oil pollution fees
Statistical analysis of simulation calculation of sputtering for two interaction potentials
International Nuclear Information System (INIS)
Shao Qiyun
1992-01-01
The effects of two interaction potentials (the Moliere potential and the Universal potential) on computer simulation results for sputtering are presented, via Monte Carlo simulation based on the binary collision approximation. By means of the Wilcoxon paired-sample signed-rank test, a statistically significant difference between the above results is obtained.
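Such a paired nonparametric comparison can be run directly with SciPy. In the sketch below the two yield samples are synthetic placeholders standing in for the per-case sputtering yields computed with the Moliere and Universal potentials.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired Wilcoxon signed-rank test on sputtering yields from two potentials.
rng = np.random.default_rng(3)
yield_moliere = rng.gamma(shape=4.0, scale=0.5, size=30)           # synthetic
yield_universal = yield_moliere * rng.normal(1.05, 0.08, size=30)  # synthetic

stat, p = wilcoxon(yield_moliere, yield_universal)
print(f"W = {stat:.1f}, p = {p:.4f}")  # small p: the potentials differ significantly
```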
Statistical Methods for Assessments in Simulations and Serious Games. Research Report. ETS RR-14-12
Fu, Jianbin; Zapata, Diego; Mavronikolas, Elia
2014-01-01
Simulation or game-based assessments produce outcome data and process data. In this article, some statistical models that can potentially be used to analyze data from simulation or game-based assessments are introduced. Specifically, cognitive diagnostic models that can be used to estimate latent skills from outcome data so as to scale these…
Moraes, Alvaro
2015-01-01
Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found in one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference
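The forward-simulation building block of such work is Gillespie's stochastic simulation algorithm (SSA). Below is a minimal SSA for an SIR epidemic, one of the simplest stochastic reaction networks; the rates and population sizes are illustrative, and an MLMC estimator would average an observable over many such paths.

```python
import numpy as np

# Exact Gillespie simulation of a stochastic SIR epidemic.
rng = np.random.default_rng(0)
S, I, R = 990, 10, 0
beta, gamma = 0.3, 0.1            # illustrative infection/recovery rates
N = S + I + R
t, t_end = 0.0, 160.0

while t < t_end and I > 0:
    a_inf = beta * S * I / N      # propensity of S + I -> 2I
    a_rec = gamma * I             # propensity of I -> R
    a0 = a_inf + a_rec
    t += rng.exponential(1.0 / a0)        # waiting time to the next reaction
    if rng.random() < a_inf / a0:         # pick which reaction fires
        S -= 1; I += 1
    else:
        I -= 1; R += 1

print(f"stopped at t = {t:.1f} with S = {S}, I = {I}, R = {R}")
```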
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products
Why is the Groundwater Level Rising? A Case Study Using HARTT to Simulate Groundwater Level Dynamic.
Yihdego, Yohannes; Danis, Cara; Paffard, Andrew
2017-12-01
Groundwater from a shallow unconfined aquifer at a site in coastal New South Wales has been causing recent water logging issues. A trend of rising groundwater level has been anecdotally observed over the last 10 years. It was not clear whether the changes in groundwater levels were solely natural variations within the groundwater system or whether human interference was driving the level up. Time series topographic images revealed significant surrounding land use changes and human modification to the environment of the groundwater catchment. A statistical model utilising HARTT (multiple linear regression hydrograph analysis method) simulated the groundwater level dynamics at five key monitoring locations and successfully showed a trend of rising groundwater level. Utilising hydrogeological input from field investigations, the model successfully simulated the rise in the water table over time to the present day levels, whilst taking into consideration rainfall and land changes. The underlying geological/land conditions were found to be just as significant as the impact of climate variation. The correlation coefficients for the monitoring bores (MB), excluding MB4, show that the groundwater level fluctuation can be explained by the climate variable (rainfall), with the lag time between the atypical rainfall and groundwater level ranging from 4 to 7 months. The low R2 value for MB4 indicates that there are factors missing in the model which are primarily related to human interference. The elevated groundwater levels in the affected area are the result of long term cumulative land use changes, instigated by humans, which have directly resulted in detrimental changes to the groundwater aquifer properties.
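The regression at the heart of HARTT can be sketched in a few lines: groundwater level is regressed on the accumulated monthly residual rainfall (the cumulative departure from mean rainfall) at a fixed lag, plus a linear time trend. Everything below is synthetic and illustrative, not the study's data.

```python
import numpy as np

# HARTT-style regression on synthetic data: level ~ lagged AARR + time trend.
rng = np.random.default_rng(5)
months, lag = 120, 5                  # record length; 4-7 month lags were typical
rain = rng.gamma(2.0, 40.0, months)   # monthly rainfall (mm), synthetic
aarr = np.cumsum(rain - rain.mean())  # accumulated residual rainfall

t = np.arange(lag, months, dtype=float)
level = 0.002 * aarr[:-lag] + 0.01 * t + rng.normal(0, 0.1, months - lag)  # m

X = np.column_stack([np.ones_like(t), aarr[:-lag], t])
coef, *_ = np.linalg.lstsq(X, level, rcond=None)
resid = level - X @ coef
r2 = 1.0 - (resid ** 2).sum() / ((level - level.mean()) ** 2).sum()
print(f"intercept, AARR coef, trend = {coef}, R^2 = {r2:.3f}")
```

A low R^2, as at MB4, then flags bores whose dynamics the climate variables alone cannot explain.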
Statistical analysis of the acceleration of Baltic mean sea-level rise, 1900-2012
Directory of Open Access Journals (Sweden)
Birgit Hünicke
2016-07-01
Full Text Available We analyse annual mean sea-level records from tide-gauges located in the Baltic and parts of the North Sea with the aim of detecting an acceleration of sea-level rise over the 20th and 21st centuries. The acceleration is estimated as (1) a fit to a polynomial of order two in time, (2) a long-term linear increase in the rates computed over gliding overlapping decadal time segments, and (3) a long-term increase of the annual increments of sea level. The estimation methods (1) and (2) prove to be more powerful in detecting acceleration when tested with sea-level records produced in global climate model simulations. These methods applied to the Baltic-Sea tide-gauges are, however, not powerful enough to detect a significant acceleration in most individual records, although most estimated accelerations are positive. This lack of detection of statistically significant acceleration at the individual tide-gauge level can be due to the high level of local noise and not necessarily to the absence of acceleration. The estimated accelerations tend to be stronger in the north and east of the Baltic Sea. Two hypotheses to explain this spatial pattern have been explored. One is that this pattern reflects the slow-down of the Glacial Isostatic Adjustment. However, a simple estimation of this effect suggests that this slow-down cannot explain the estimated acceleration. The second hypothesis is related to the diminishing sea-ice cover over the 20th century. The melting of less saline and colder sea-ice can lead to changes in sea-level. Also, the melting of sea-ice can reduce the number of missing values in the tide-gauge records in winter, potentially influencing the estimated trends and acceleration of seasonal mean sea-level. This hypothesis cannot be ascertained either, since the spatial patterns of acceleration computed for winter and summer separately are very similar. The all-station-average record displays an
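Estimation method (1) amounts to an ordinary quadratic fit in time, with twice the quadratic coefficient read off as the acceleration. A minimal sketch on a synthetic record:

```python
import numpy as np

# Quadratic fit to an annual mean sea-level record; 2*c2 is the acceleration.
rng = np.random.default_rng(9)
years = np.arange(1900, 2013)
t = years - years.mean()                    # centre time to reduce collinearity
sea = 1.5 * t + 0.005 * t ** 2 + rng.normal(0, 15, t.size)  # mm, synthetic

c2, c1, c0 = np.polyfit(t, sea, deg=2)      # coefficients, highest degree first
print(f"rate = {c1:.2f} mm/yr, acceleration = {2 * c2:.4f} mm/yr^2")
```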
Hybrid statistics-simulations based method for atom-counting from ADF STEM images
Energy Technology Data Exchange (ETDEWEB)
De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2017-06-15
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Highlights: a hybrid method for atom-counting from ADF STEM images is introduced; image simulations are incorporated into a statistical framework in a reliable manner; limits of the existing methods for atom-counting are far exceeded; reliable counting results are obtained from an experimental low dose image; and progress is made towards reliable quantitative analysis of beam-sensitive materials.
Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming
2017-12-01
Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and
Introduction to probability and statistics for ecosystem managers simulation and resampling
Haas, Timothy C
2013-01-01
Explores computer-intensive probability and statistics for ecosystem management decision making Simulation is an accessible way to explain probability and stochastic model behavior to beginners. This book introduces probability and statistics to future and practicing ecosystem managers by providing a comprehensive treatment of these two areas. The author presents a self-contained introduction for individuals involved in monitoring, assessing, and managing ecosystems and features intuitive, simulation-based explanations of probabilistic and statistical concepts. Mathematical programming details are provided for estimating ecosystem model parameters with Minimum Distance, a robust and computer-intensive method. The majority of examples illustrate how probability and statistics can be applied to ecosystem management challenges. There are over 50 exercises - making this book suitable for a lecture course in a natural resource and/or wildlife management department, or as the main text in a program of self-stud...
Level tracking in detailed reactor simulations
Energy Technology Data Exchange (ETDEWEB)
Aktas, B.; Mahaffy, J.H. [Pennsylvania State Univ., University Park, PA (United States)
1995-09-01
We introduce a useful test problem for judging the performance of reactor safety codes in situations where moving two-phase mixture levels are present. The test problem tracks a two-phase liquid level as it rises and then falls back to its original position. Pure air exists above the level, and a low-void air-water mixture is below the level. Conditions are subcooled and isothermal to remove complications resulting from failures of interfacial heat transfer packages to properly account for the level. Comparisons are made between the performance of current versions of CATHARE, RELAP5, TRAC-BF1, and TRAC-PF1. These system codes are based on finite-difference methods with a fixed, Eulerian staggered grid in space. When a partially filled cell with a mixture level discontinuity becomes the donor cell, the sharp changes in fluid properties across the interface result in numerical oscillations of various terms. Furthermore, the cell-to-cell convection of mass, momentum and energy is inaccurately predicted near a mixture level. To adequately model moving mixture levels, an efficient discontinuity tracking method for the finite-difference Eulerian approximations is described. This model has been implemented in the TRAC-BWR code for two-phase mixture level tracking since the TRAC-BD1 version (released April 1984). The result of the test problem run by the current version of TRAC-BF1/MOD1 with the mixture level tracking model shows some peculiar behavior of variables such as velocities, pressures and interfacial terms. A systematic approach to improving the performance of the tracking method is described. Implementing this approach in TRAC-BF1/MOD1 has shown a major improvement in the results.
Research on cloud background infrared radiation simulation based on fractal and statistical data
Liu, Xingrun; Xu, Qingshan; Li, Xia; Wu, Kaifeng; Dong, Yanbing
2018-02-01
Clouds are an important natural phenomenon, and their radiation causes serious interference to infrared detectors. Based on fractals and statistical data, a method is proposed to realize cloud background simulation, in which the cloud infrared radiation field is assigned using satellite radiation data for clouds. A cloud infrared radiation simulation model is established in MATLAB, and it can generate cloud background infrared images for different cloud types (low cloud, middle cloud, and high cloud) in different months, bands and sensor zenith angles.
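The paper's model is built in MATLAB; the Python sketch below only illustrates the general fractal-plus-statistics idea: a cloud-like field is synthesized by power-law filtering of random phases and then rescaled to match assumed satellite-derived radiance statistics. The spectral slope and radiance numbers are placeholders, not values from the paper.

```python
import numpy as np

# Fractal cloud-like field by spectral synthesis, rescaled to target statistics.
rng = np.random.default_rng(2)
n, beta = 256, 3.0                        # grid size, power-law spectral slope
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx ** 2 + ky ** 2)
k[0, 0] = 1.0                             # avoid division by zero at DC
amp = k ** (-beta / 2.0)
phase = np.exp(2j * np.pi * rng.random((n, n)))
field = np.real(np.fft.ifft2(amp * phase))

# Rescale to a target radiance distribution (placeholder statistics).
target_mean, target_std = 80.0, 12.0
field = (field - field.mean()) / field.std()
radiance = target_mean + target_std * field
print(f"mean {radiance.mean():.1f}, std {radiance.std():.1f}")
```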
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables through a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up the simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS based on vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
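The core VQ idea can be illustrated with an off-the-shelf k-means quantizer: flattened patterns are clustered into a small codebook, and retrieval first matches the nearest code word, then searches only within that cluster. The sketch below uses random binary 3×3 templates as stand-in patterns; it is not the authors' tree-structured VQ.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Compress a pattern database with vector quantization to narrow the search.
rng = np.random.default_rng(4)
patterns = rng.integers(0, 2, size=(5000, 9)).astype(float)  # 3x3 templates

n_codes = 64
codebook, labels = kmeans2(patterns, n_codes, minit='++')

# At simulation time: match the conditioning event to the nearest code word,
# then scan only the patterns assigned to that code word.
query = patterns[0]
nearest = np.argmin(((codebook - query) ** 2).sum(axis=1))
candidates = np.where(labels == nearest)[0]
print(f"search narrowed from {len(patterns)} to {len(candidates)} patterns")
```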
A study of statistics anxiety levels of graduate dental hygiene students.
Welch, Paul S; Jacks, Mary E; Smiley, Lynn A; Walden, Carolyn E; Clark, William D; Nguyen, Carol A
2015-02-01
In light of the increased emphasis on evidence-based practice in the profession of dental hygiene, it is important that today's dental hygienist comprehend statistical measures to fully understand research articles, and thereby apply scientific evidence to practice. Therefore, the purpose of this study was to investigate statistics anxiety among graduate dental hygiene students in the U.S. A web-based, self-report, anonymous survey was emailed to directors of 17 MSDH programs in the U.S. with a request to distribute it to graduate students. The survey collected data on statistics anxiety, sociodemographic characteristics and evidence-based practice. Statistics anxiety was assessed using the Statistical Anxiety Rating Scale. The study significance level was α=0.05. Only 8 of the 17 invited programs participated in the study. Statistical Anxiety Rating Scale data revealed that graduate dental hygiene students experience low to moderate levels of statistics anxiety. Specifically, the level of anxiety on the Interpretation Anxiety factor indicated this population could struggle with making sense of scientific research. A decisive majority (92%) of students indicated statistics is essential for evidence-based practice and should be a required course for all dental hygienists. This study served to identify statistics anxiety in a previously unexplored population. The findings should be useful in both theory building and in practical applications. Furthermore, the results can be used to direct future research. Copyright © 2015 The American Dental Hygienists' Association.
Huang, Ching-Hsu
2014-01-01
A classroom quasi-experiment was conducted to determine whether using a computer simulation teaching strategy enhanced student understanding of statistics concepts for students enrolled in an introductory course. One hundred and ninety-three sophomores in the hospitality management department were invited as participants in this two-year longitudinal…
Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic
Emons, Wilco H.M.; Meijer, R.R.; Sijtsma, Klaas
2002-01-01
The accuracy with which the theoretical sampling distribution of van der Flier’s person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I
The Effect of Project Based Learning on the Statistical Literacy Levels of 8th Grade Students
Koparan, Timur; Güven, Bülent
2014-01-01
This study examines the effect of project-based learning on 8th grade students' statistical literacy levels. A performance test was developed for this aim. A quasi-experimental research model was used in this article. In this context, statistics was taught with the traditional method in the control group and taught using project-based…
Simulation approaches to probabilistic structural design at the component level
International Nuclear Information System (INIS)
Stancampiano, P.A.
1978-01-01
In this paper, structural failure of large nuclear components is viewed as a random process with a low probability of occurrence. Therefore, a statistical interpretation of probability does not apply and statistical inferences cannot be made due to the sparsity of actual structural failure data. In such cases, analytical estimates of the failure probabilities may be obtained from stress-strength interference theory. Since the majority of real design applications are complex, numerical methods are required to obtain solutions. Monte Carlo simulation appears to be the best general numerical approach. However, meaningful applications of simulation methods suggest research activities in three categories: methods development, failure mode models development, and statistical data models development. (Auth.)
Awi; Ahmar, A. S.; Rahman, A.; Minggi, I.; Mulbar, U.; Asdar; Ruslan; Upu, H.; Alimuddin; Hamda; Rosidah; Sutamrin; Tiro, M. A.; Rusli
2018-01-01
This research aims to reveal the level of creativity and the statistical problem-posing ability of students of the 2014 batch of Mathematics Education at the State University of Makassar, in terms of their cognitive style. The research uses an explorative qualitative method, providing meta-cognitive scaffolding during the research. The research hypothesis is that students with a field-independent (FI) cognitive style, when posing statistical problems from the provided information, are already able to pose statistical problems that can be solved and that introduce new data, and such problems qualify as high-quality statistical problems; students with a field-dependent (FD) cognitive style are generally still limited to posing statistical problems that can be solved but do not introduce new data, and such problems qualify as medium-quality statistical problems.
Simulation on a car interior aerodynamic noise control based on statistical energy analysis
Chen, Xin; Wang, Dengfeng; Ma, Zhengdong
2012-09-01
How to simulate interior aerodynamic noise accurately is an important question in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces is proved to be the key factor in car interior aerodynamic noise control at high frequencies and high speeds. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded on the model to obtain valid results for car interior noise analysis. The model is a solid foundation for further optimization of car interior noise control. After the subsystems most sensitive in their power contribution to car interior noise are identified by comprehensive SEA analysis, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Further vehicle testing results show that the interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with the unsteady aerodynamic pressure calculation on body surfaces and improvements to the sound/damping properties of materials. A reduction of more than 2 dB at the central frequencies of the spectrum above 800 Hz can be achieved. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and sensitivity analysis of acoustic contributions.
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.
International Nuclear Information System (INIS)
Chingangbam, Pravabati; Park, Changbom
2009-01-01
We simulate CMB maps including non-Gaussianity arising from cubic order perturbations of the primordial gravitational potential, characterized by the non-linearity parameter g_NL. The maps are used to study the characteristic nature of the resulting non-Gaussian temperature fluctuations. We measure the genus and investigate how it deviates from Gaussian shape as a function of g_NL and smoothing scale. We find that the deviation of the non-Gaussian genus curve from the Gaussian one has an antisymmetric, sine-function-like shape, implying more hot and more cold spots for g_NL > 0 and fewer of both for g_NL < 0. The deviation increases with g_NL and also exhibits a mild increase as the smoothing scale increases. We further study other statistics derived from the genus, namely, the number of hot spots, the number of cold spots, the combined number of hot and cold spots, and the slope of the genus curve at the mean temperature fluctuation. We find that these observables carry signatures of g_NL that are clearly distinct from the quadratic order perturbations, encoded in the parameter f_NL. Hence they can be very useful tools for distinguishing not only between non-Gaussian temperature fluctuations and Gaussian ones but also between g_NL and f_NL type non-Gaussianities.
Halo statistics analysis within medium volume cosmological N-body simulation
Directory of Open Access Journals (Sweden)
Martinović N.
2015-01-01
Full Text Available In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. Visualization of the simulation is presented as well. At redshifts z = 0−7, halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 − 10^14 M⊙/h.
Low-Level Radioactive Waste siting simulation information package
International Nuclear Information System (INIS)
1985-12-01
The Department of Energy's National Low-Level Radioactive Waste Management Program has developed a simulation exercise designed to facilitate the process of siting and licensing disposal facilities for low-level radioactive waste. The siting simulation can be conducted at a workshop or conference, can involve 14-70 participants (or more), and requires approximately eight hours to complete. The exercise is available for use by states, regional compacts, or other organizations for use as part of the planning process for low-level waste disposal facilities. This information package describes the development, content, and use of the Low-Level Radioactive Waste Siting Simulation. Information is provided on how to organize a workshop for conducting the simulation. 1 ref., 1 fig
Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects
Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Christian;
2015-01-01
Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex
A study of the feasibility of statistical analysis of airport performance simulation
Myers, R. H.
1982-01-01
The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they were designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte Carlo techniques.
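A Monte Carlo power computation of the kind described can be sketched as follows; the gamma capacity distribution (to mimic the non-Gaussian shape), the size of the capacity shift, and the replication counts are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

# Monte Carlo estimate of the power to detect a shift in mean capacity.
rng = np.random.default_rng(11)
n_reps, n_runs = 2000, 20        # MC replications; simulation runs per condition
shift, alpha = 5.0, 0.05         # capacity change to detect; test level

rejections = 0
for _ in range(n_reps):
    a = rng.gamma(shape=25.0, scale=4.0, size=n_runs)           # baseline
    b = rng.gamma(shape=25.0, scale=4.0, size=n_runs) + shift   # changed condition
    if ttest_ind(a, b).pvalue < alpha:
        rejections += 1
print(f"estimated power: {rejections / n_reps:.2f}")
```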
Algorithm for statistical noise reduction in three-dimensional ion implant simulations
International Nuclear Information System (INIS)
Hernandez-Mangas, J.M.; Arias, J.; Jaraiz, M.; Bailon, L.; Barbolla, J.
2001-01-01
As integrated circuit devices scale into the deep sub-micron regime, ion implantation will continue to be the primary means of introducing dopant atoms into silicon. Different types of impurity profiles, such as ultra-shallow profiles and retrograde profiles, are necessary for deep sub-micron devices in order to realize the desired device performance. A new algorithm to reduce the statistical noise in three-dimensional ion implant simulations, both in the lateral and shallow/deep regions of the profile, is presented. The computational effort in BCA Monte Carlo ion implant simulation is also reduced.
Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.
Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S
2018-02-21
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from
A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson
International Nuclear Information System (INIS)
Datta, Nilanjana; Kunz, Herve
2004-01-01
We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals.
Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method
International Nuclear Information System (INIS)
Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum
2011-01-01
In nuclear power plants, all safety-related equipment, including cables exposed to the harsh environment, must undergo equipment qualification (EQ) according to IEEE Std 323. There are three types of qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than the analysis or operating experience methods, a representative sample of the equipment, including interfaces, is subjected to a series of tests. Among these tests, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including specified high-energy line break (HELB), loss of coolant accident (LOCA) and main steam line break (MSLB) conditions, after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high-temperature steam must be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the test chamber rapidly increase beyond the target values. The temperature and pressure in the test chamber therefore fluctuate throughout the DBE simulation test as they are controlled toward the target temperature and pressure. The fairness and accuracy of the test results must be ensured by confirming the performance of the DBE environment simulation test facility. In this paper, a statistical method is used to verify the reliability of the DBE environment simulation test facility.
PhyloSim - Monte Carlo simulation of sequence evolution in the R statistical computing environment
Directory of Open Access Journals (Sweden)
Massingham Tim
2011-04-01
Full Text Available Abstract Background The Monte Carlo simulation of sequence evolution is routinely used to assess the performance of phylogenetic inference methods and sequence alignment algorithms. Progress in the field of molecular evolution fuels the need for more realistic and hence more complex simulations, adapted to particular situations, yet current software makes unreasonable assumptions such as homogeneous substitution dynamics or a uniform distribution of indels across the simulated sequences. This calls for an extensible simulation framework written in a high-level functional language, offering new functionality and making it easy to incorporate further complexity. Results PhyloSim is an extensible framework for the Monte Carlo simulation of sequence evolution, written in R, using the Gillespie algorithm to integrate the actions of many concurrent processes such as substitutions, insertions and deletions. Uniquely among sequence simulation tools, PhyloSim can simulate arbitrarily complex patterns of rate variation and multiple indel processes, and allows for the incorporation of selective constraints on indel events. User-defined complex patterns of mutation and selection can be easily integrated into simulations, allowing PhyloSim to be adapted to specific needs. Conclusions Close integration with R and the wide range of features implemented offer unmatched flexibility, making it possible to simulate sequence evolution under a wide range of realistic settings. We believe that PhyloSim will be useful to future studies involving simulated alignments.
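For readers unfamiliar with the Gillespie algorithm on which PhyloSim is built, below is a minimal Python sketch of substitution events along one sequence with a uniform per-site rate. PhyloSim itself is an R package handling far richer processes (indels, rate variation, selective constraints), so this is only a conceptual illustration with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
bases = np.array(list("ACGT"))
seq = rng.integers(0, 4, size=50)   # sequence stored as indices into `bases`
rate = 0.1                          # hypothetical per-site substitution rate
t, t_end, events = 0.0, 10.0, 0

while True:
    total_rate = rate * len(seq)            # all sites compete concurrently
    t += rng.exponential(1.0 / total_rate)  # waiting time to the next event
    if t > t_end:
        break
    site = rng.integers(0, len(seq))        # firing site, uniform here
    seq[site] = (seq[site] + rng.integers(1, 4)) % 4  # jump to another base
    events += 1

print(events, "substitutions:", "".join(bases[seq]))
```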
Statistical characterization of global Sea Surface Salinity for SMOS level 3 and 4 products
Gourrion, J.; Aretxabaleta, A. L.; Ballabrera, J.; Mourre, B.
2009-04-01
The Soil Moisture and Ocean Salinity (SMOS) mission of the European Space Agency will soon provide sea surface salinity (SSS) estimates to the scientific community. Because of the numerous geophysical contamination sources and the instrument complexity, the salinity products will have a low signal-to-noise ratio at level 2 (individual estimates) that is expected to increase up to mission requirements (0.1 psu) at level 3 (global maps with regular distribution) after spatio-temporal accumulation of the observations. Geostatistical methods such as Optimal Interpolation are being implemented at the level 3/4 production centers to perform this noise reduction step. The methodologies require auxiliary information about SSS statistics that, under a Gaussian assumption, consists of the mean field and the covariance of the departures from it. The present study is a contribution to the definition of the best estimates for the mean field and covariances to be used in the near-future SMOS level 3 and 4 products. We use complementary information from sparse in-situ observations and imperfect outputs from state-of-the-art model simulations. Various estimates of the mean field are compared. An alternative is the use of an SSS climatology such as the one provided by the World Ocean Atlas 2005. A historical SSS dataset from the World Ocean Database 2005 is reanalyzed and combined with the recent global observations obtained by the Array for Real-Time Geostrophic Oceanography (ARGO). Regional tendencies in the long-term temporal evolution of the near-surface ocean salinity are evident, suggesting that the use of an SSS climatology to describe the current mean field may introduce biases of magnitude similar to the precision goal. Consequently, a recent SSS dataset may be preferred to define the mean field needed for SMOS level 3 and 4 production. The in-situ observation network allows a global mapping of the low frequency component of the variability, i.e. decadal, interannual and seasonal
Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut
2015-01-01
The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-seventies. Originally, HC statistics were used in connection with goodness-of-fit (GOF) tests, but they recently gained some attention in the context of testing the global null hypothesis in high-dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence they are favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency between the asymptotic and finite-sample behavior of the HC statistic prompts us to provide a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
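A minimal implementation makes the "moderate tails" remark concrete: the HC maximization is restricted to the smallest fraction of the sorted p-values. The sketch below follows the standard Donoho-Jin form; the alpha0 = 0.5 cutoff and the synthetic p-values are illustrative assumptions.

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """HC statistic: maximal standardized exceedance of the empirical CDF
    over the uniform CDF, restricted to the smallest alpha0-fraction."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    k = max(1, int(alpha0 * n))          # moderate-tail restriction
    return hc[:k].max()

rng = np.random.default_rng(3)
null_p = rng.uniform(size=1000)          # global null: uniform p-values
sparse_p = null_p.copy()
sparse_p[:20] = rng.uniform(0, 1e-3, 20) # a few signals in a sparse mixture
print(higher_criticism(null_p), higher_criticism(sparse_p))
```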
Statistical Models to Assess the Health Effects and to Forecast Ground Level Ozone
Czech Academy of Sciences Publication Activity Database
Schlink, U.; Herbath, O.; Richter, M.; Dorling, S.; Nunnari, G.; Cawley, G.; Pelikán, Emil
2006-01-01
Roč. 21, č. 4 (2006), s. 547-558 ISSN 1364-8152 R&D Projects: GA AV ČR 1ET400300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : statistical models * ground level ozone * health effects * logistic model * forecasting * prediction performance * neural network * generalised additive model * integrated assessment Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.992, year: 2006
Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation
Directory of Open Access Journals (Sweden)
Yi Wu
2010-02-01
Full Text Available By comparing a hard real-time system and a soft real-time system, this article exposes the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when using classical mathematical methods. Thus, a simulation methodology to optimize the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.
Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas
2015-04-01
The wind is a result of complex interactions of numerous mechanisms taking place at small or large scales, so better knowledge of its behavior is essential in a variety of applications, especially in the field of power production coming from wind turbines. In the literature there is a considerable number of models, either physical or statistical ones, dealing with the problem of simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for the purpose of wind forecasting and, in the great majority of cases, outperform other conventional statistical models. In this study, a number of ANNs with different architectures, which have been created and applied to a dataset of wind time series, are compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds coming from a wind farm on a hilly Greek region and cover a period of one year (2013). The main goal is to evaluate the models' ability to simulate successfully the wind speed at a significant point (target). Goodness-of-fit statistics are performed for the comparison of the different methods. In general, the ANNs showed the best performance in the estimation of wind speed, prevailing over the ARIMA models.
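As a hedged illustration of the ARIMA side of such a comparison, the sketch below fits an ARIMA model to a synthetic hourly wind series with statsmodels and scores a 24-hour forecast. The order (2, 0, 1) and the synthetic diurnal signal are arbitrary choices, not the study's configuration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
t = np.arange(24 * 60)   # 60 days of hourly data
wind = 7.0 + 2.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.8, t.size)

train, test = wind[:-24], wind[-24:]        # hold out the final day
fit = ARIMA(train, order=(2, 0, 1)).fit()   # AR(2), no differencing, MA(1)
forecast = fit.forecast(steps=24)

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print("24 h forecast RMSE (m/s):", rmse)
```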
Energy Technology Data Exchange (ETDEWEB)
Paik, Joongcheol [University of Minnesota; Sotiropoulos, Fotis [University of Minnesota; Sale, Michael J [ORNL
2005-06-01
A numerical method is developed for carrying out unsteady Reynolds-averaged Navier-Stokes (URANS) simulations and detached-eddy simulations (DESs) in complex 3D geometries. The method is applied to simulate incompressible swirling flow in a typical hydroturbine draft tube, which consists of a strongly curved 90 degree elbow and two piers. The governing equations are solved with a second-order-accurate, finite-volume, dual-time-stepping artificial compressibility approach for a Reynolds number of 1.1 million on a mesh with 1.8 million nodes. The geometrical complexities of the draft tube are handled using domain decomposition with overset (chimera) grids. Numerical simulations show that unsteady statistical turbulence models can capture very complex 3D flow phenomena dominated by geometry-induced, large-scale instabilities and unsteady coherent structures such as the onset of vortex breakdown and the formation of the unsteady rope vortex downstream of the turbine runner. Both URANS and DES appear to yield the general shape and magnitude of mean velocity profiles in reasonable agreement with measurements. Significant discrepancies among the DES and URANS predictions of the turbulence statistics are also observed in the straight downstream diffuser.
On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics
Directory of Open Access Journals (Sweden)
Marco Aldinucci
2014-01-01
Full Text Available This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support the bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories turning into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage that immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming that provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
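The "immediate partial result" behaviour of the analysis stage can be mimicked with a streaming (Welford-style) statistic, sketched below in Python. The actual tool is built on FastFlow in C++, so this is only a conceptual analogue of the online statistics, fed with synthetic trajectory values.

```python
import numpy as np

class OnlineStats:
    """Welford's online mean/variance: each value is folded in as it
    streams out of the simulation stage, so a result is always ready."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

rng = np.random.default_rng(5)
stats = OnlineStats()
for _ in range(10_000):                      # stand-in for trajectories
    stats.update(rng.normal(100.0, 15.0))    # e.g. a molecule count at time T
print(stats.mean, stats.variance)
```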
Simulations and cosmological inference: A statistical model for power spectra means and covariances
International Nuclear Information System (INIS)
Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles
2008-01-01
We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.
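The emulation step can be sketched with scikit-learn's Gaussian process regressor, shown below for a toy one-dimensional parameter-to-band-power mapping; the kernel, noise level and training design are illustrative assumptions unrelated to the paper's actual calibration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A handful of "simulation runs" mapping one parameter to one band power.
rng = np.random.default_rng(6)
theta_train = np.linspace(0.1, 1.0, 12)[:, None]  # 1D stand-in for a hypercube
power_train = theta_train.ravel() ** -1.5 + rng.normal(0, 0.05, 12)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.05**2)
gp.fit(theta_train, power_train)

# Emulate the band power (with uncertainty) at an untried parameter value.
mean, std = gp.predict(np.array([[0.37]]), return_std=True)
print("emulated band power:", mean[0], "+/-", std[0])
```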
Directory of Open Access Journals (Sweden)
Vahid Reza Jalali
2017-10-01
Full Text Available Introduction Salinity, as an abiotic stress, can severely disturb seed germination and sustainable plant production. Salinity reduces the final yield through three different mechanisms: osmotic potential reduction, ionic toxicity and disturbance of the plant's nutritional balance. Planning for the optimal use of available water and saline water of poor quality in agricultural activities is of great importance. Wheat is one of the eight main food sources (together with rice, corn, sugar beet, cattle, sorghum, millet and cassava) which provide 70-90% of all calories and 66-90% of the protein consumed in developing countries. Durum wheat (Triticum turgidum L.) is an important crop grown in some arid and semi-arid areas of the world such as the Middle East and North Africa. In these regions, in addition to soil salinity, the sharp decline in rainfall and the sharp drop in groundwater levels in recent years have emphasized the efficient use of limited soil and water resources. Consequently, in order to use brackish water for agricultural production, it is required to analyze its quantitative response to salinity stress with simulation models in those regions. The objective of this study is to assess the capability of statistical and macro-simulation models of yield under saline conditions. Materials and methods In this study, two general simulation approaches, process-physical models and statistical-experimental models, were investigated. For this purpose, in order to quantify the salinity effect on the relative seed yield of durum wheat (Behrang variety) at different levels of soil salinity, the process-physical models of Maas & Hoffman, van Genuchten & Hoffman, Dirksen et al. and Homaee et al. were used. Also, the statistical-experimental models of the Modified Gompertz Function, Bi-Exponential Function and Modified Weibull Function were used. In order to get closer to the real growth conditions in saline soils, a natural saline
Energy Technology Data Exchange (ETDEWEB)
Reichert, B.K.; Bengtsson, L. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany); Aakesson, O. [Sveriges Meteorologiska och Hydrologiska Inst., Norrkoeping (Sweden)
1998-08-01
Recent proxy data obtained from ice core measurements, dendrochronology and valley glaciers provide important information on the evolution of the regional or local climate. General circulation models integrated over a long period of time could help to understand the (external and internal) forcing mechanisms of natural climate variability. For a systematic interpretation of in situ paleo proxy records, a combined method of dynamical and statistical modeling is proposed. Local 'paleo records' can be simulated from GCM output by first undertaking a model-consistent statistical downscaling and then using a process-based forward modeling approach to obtain the behavior of valley glaciers and the growth of trees under specific conditions. The simulated records can be compared to actual proxy records in order to investigate whether, e.g., the response of glaciers to climatic change can be reproduced by models and to what extent climate variability obtained from proxy records (with the main focus on the last millennium) can be represented. For statistical downscaling to local weather conditions, a multiple linear forward regression model is used. Daily sets of observed weather station data and various large-scale predictors at 7 pressure levels obtained from ECMWF reanalyses are used for development of the model. Daily data give the closest and most robust relationships due to the strong dependence on individual synoptic-scale patterns. For some local variables, the performance of the model can be further increased by developing season-specific statistical relationships. The model is validated using both independent and restricted predictor data sets. The model is applied to a long integration of a mixed-layer GCM experiment simulating pre-industrial climate variability. The dynamical-statistical local GCM output within a region around Nigardsbreen glacier, Norway, is compared to nearby observed station data for the period 1868-1993. Patterns of observed
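A minimal version of such a multiple linear forward regression is sketched below: a local station variable is regressed on synthetic large-scale predictors, and the fitted transfer function is then applied to a hypothetical GCM day. The predictor choices and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days = 3000
z500 = rng.normal(5500, 80, n_days)   # geopotential height at 500 hPa (m)
t850 = rng.normal(2.0, 5.0, n_days)   # temperature at 850 hPa (deg C)
slp = rng.normal(1013, 8, n_days)     # sea-level pressure (hPa)
local_t = (0.7 * t850 + 0.01 * (z500 - 5500)
           - 0.05 * (slp - 1013) + rng.normal(0, 1.0, n_days))  # "station" data

# Fit the transfer function on daily data by ordinary least squares.
X = np.column_stack([np.ones(n_days), z500, t850, slp])
coef, *_ = np.linalg.lstsq(X, local_t, rcond=None)

# Apply it to GCM output to obtain one day of a local 'paleo record'.
gcm_day = np.array([1.0, 5450.0, -3.0, 1020.0])
print("downscaled local temperature:", gcm_day @ coef)
```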
Simulating European wind power generation applying statistical downscaling to reanalysis data
International Nuclear Information System (INIS)
González-Aparicio, I.; Monforti, F.; Volker, P.; Zucker, A.; Careri, F.; Huld, T.; Badger, J.
2017-01-01
Highlights: •Wind speed spatial resolution highly influences calculated wind power peaks and ramps. •Reduction of wind power generation uncertainties using statistical downscaling. •Publicly available dataset of wind power generation hourly time series at NUTS2. -- Abstract: The growing share of electricity production from solar and mainly wind resources constantly increases the stochastic nature of the power system. Modelling the high share of renewable energy sources, and in particular wind power, crucially depends on the adequate representation of the intermittency and characteristics of the wind resource, which is related to the accuracy of the approach in converting wind speed data into power values. One of the main factors contributing to the uncertainty in these conversion methods is the selection of the spatial resolution. Although numerical weather prediction models can simulate wind speeds at higher spatial resolution (up to 1 × 1 km) than a reanalysis (generally ranging from about 25 km to 70 km), they require high computational resources and massive storage systems; therefore, the most common alternative is to use reanalysis data. However, local wind features may not be captured by a reanalysis and may be translated into misinterpretations of wind power peaks, ramping capacities, the behaviour of power prices, as well as bidding strategies for the electricity market. This study contributes to the understanding of what is captured by wind speed datasets of different spatial resolutions, the importance of using high resolution data for the conversion into power, and the implications for power system analyses. A methodology is proposed to increase the spatial resolution of a reanalysis. This study presents an open access renewable generation time series dataset for the EU-28 and neighbouring countries at hourly intervals and at different geographical aggregation levels (country, bidding zone and administrative
Siting simulation for low-level waste disposal facilities
International Nuclear Information System (INIS)
Roop, R.D.; Rope, R.C.
1985-01-01
The Mock Site Licensing Demonstration Project has developed the Low-Level Radioactive Waste Siting Simulation, a role-playing exercise designed to facilitate the process of siting and licensing disposal facilities for low-level waste (LLW). This paper describes the development, content, and usefulness of the siting simulation. The simulation can be conducted at a workshop or conference, involves 14 or more participants, and requires about eight hours to complete. The simulation consists of two sessions; in the first, participants negotiate the selection of siting criteria, and in the second, a preferred disposal site is chosen from three candidate sites. The project has sponsored two workshops (in Boston, Massachusetts and Richmond, Virginia) in which the simulation has been conducted for persons concerned with LLW management issues. It is concluded that the simulation can be valuable as a tool for disseminating information about LLW management; a vehicle that can foster communication; and a step toward consensus building and conflict resolution. The DOE National Low-Level Waste Management Program is now making the siting simulation available for use by states, regional compacts, and other organizations involved in development of LLW disposal facilities
International Nuclear Information System (INIS)
Silva-Rodríguez, Jesús; Domínguez-Prado, Inés; Pardo-Montero, Juan; Ruibal, Álvaro
2017-01-01
Purpose: The aim of this work is to study the effect of physiological muscular uptake variations and statistical noise on tumor quantification in FDG-PET studies. Methods: We designed a realistic framework based on simulated FDG-PET acquisitions from an anthropomorphic phantom that included different muscular uptake levels and three spherical lung lesions with diameters of 31, 21 and 9 mm. A distribution of muscular uptake levels was obtained from 136 patients remitted to our center for whole-body FDG-PET. Simulated FDG-PET acquisitions were obtained using the Simulation System for Emission Tomography (SimSET) Monte Carlo package. Simulated data were reconstructed using an iterative Ordered Subset Expectation Maximization (OSEM) algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Tumor quantification was carried out using estimations of SUVmax, SUV50 and SUVmean from different noise realizations, lung lesions and multiple muscular uptakes. Results: Our analysis provided quantification variability values of 17–22% (SUVmax), 11–19% (SUV50) and 8–10% (SUVmean) when muscular uptake variations and statistical noise were included. Meanwhile, quantification variability due only to statistical noise was 7–8% (SUVmax), 3–7% (SUV50) and 1–2% (SUVmean) for large tumors (>20 mm) and 13% (SUVmax), 16% (SUV50) and 8% (SUVmean) for small tumors (<10 mm), thus showing that the variability in tumor quantification is mainly affected by muscular uptake variations when large enough tumors are considered. In addition, our results showed that quantification variability is strongly dominated by statistical noise when the injected dose decreases below 222 MBq. Conclusions: Our study revealed that muscular uptake variations between patients who are totally relaxed should be considered as an uncertainty source of tumor quantification values. - Highlights: • Distribution of muscular uptake from 136 PET
Statistical selection of tide gauges for Arctic sea-level reconstruction
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2015-01-01
In this paper, we seek an appropriate selection of tide gauges for Arctic Ocean sea-level reconstruction based on a combination of empirical criteria and statistical properties (leverages). Tide gauges provide the only in situ observations of sea level prior to the altimetry era. However, tide...... the "influence" of each Arctic tide gauge on the EOF-based reconstruction through the use of statistical leverage and use this as an indication in selecting appropriate tide gauges, in order to procedurally identify poor-quality data while still including as much data as possible. To accommodate sparse...
A path-level exact parallelization strategy for sequential simulation
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
Labelle, J.; Noonan, K.
2006-12-01
Despite their remote location, radio receivers at South Pole Station regularly detect AM broadcast band signals propagating from transmitters thousands of kilometers away. Statistical analysis of received radiowave power at South Pole during 2004 and 2005, integrated over the frequency range of AM broadcast stations, reveals a distinctive time-of-day (UT) dependence: a broad maximum in received power centered at 1500 UT corresponds to magnetic daytime; signal levels are lower during magnetic nighttime except for a weak peak near magnetic midnight. Monte Carlo simulations of the received power were calculated based on two contributions: daytime D-region absorption and auroral absorption. The latter varies with day of year and magnetic local time in a complex fashion due to the asymmetric shape and varying size of the auroral oval and the offset of South Pole from the geomagnetic pole. The Monte Carlo simulations confirm that the enhanced absorption of AM broadcast signals during magnetic nighttime results from auroral absorption. Furthermore, the simulations predict that a weak (<0.5 dB) peak near magnetic midnight, similar to that observed in the data, arises from including in the statistical data base intervals when the auroral oval is contracted. These results suggest that ground-based radio observations at a sufficiently remote high-latitude site such as South Pole may effectively monitor auroral oval characteristics, at least on a statistical basis.
Neumann, David L.; Neumann, Michelle M.; Hood, Michelle
2011-01-01
The discipline of statistics seems well suited to the integration of technology in a lecture as a means to enhance student learning and engagement. Technology can be used to simulate statistical concepts, create interactive learning exercises, and illustrate real world applications of statistics. The present study aimed to better understand the…
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
International Nuclear Information System (INIS)
Do-Kun Yoon; Joo-Young Jung; Tae Suk Suh; Seong-Min Han
2015-01-01
The purpose of this research is a statistical analysis for the discrimination of prompt gamma ray peaks induced by 14.1 MeV neutrons in spectra obtained using Monte Carlo simulation. For the simulation, information on 18 detector materials was used to simulate spectra of the neutron capture reaction. Nine prompt gamma ray peaks were discriminated in the simulation for each detector material. We present several comparison indexes of energy resolution performance, depending on the detector material, using the simulation and statistics for prompt gamma activation analysis. (author)
Koparan, Timur; Güven, Bülent
2015-01-01
The point of this study is to define the effect of project-based learning approach on 8th Grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test which consists of 12 open-ended questions in accordance with the views of experts was developed. Seventy 8th grade secondary-school students, 35…
Shukla, Pragya
2004-01-01
We find that the statistics of levels undergoing metal-insulator transition in systems with multi-parametric Gaussian disorders and non-interacting electrons behaves in a way similar to that of the single parametric Brownian ensembles [dy]. The latter appear during a Poisson → Wigner-Dyson transition, driven by a random perturbation. The analogy provides analytical evidence for the single parameter scaling of the level correlations in disordered systems as well as a tool to obtain...
Alternative interpretations of statistics on health effects of low-level radiation
International Nuclear Information System (INIS)
Hamilton, L.D.
1983-01-01
Four examples of the interpretation of statistics of data on low-level radiation are reviewed: (a) genetic effects of the atomic bombs at Hiroshima and Nagasaki, (b) cancer at Rocky Flats, (c) childhood leukemia and fallout in Utah, and (d) cancer among workers at the Portsmouth Naval Shipyard. Aggregation of data, adjustment for age, and other problems related to the determination of health effects of low-level radiation are discussed. Troublesome issues related to post hoc analysis are considered
Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study
Directory of Open Access Journals (Sweden)
In Sung Cho
2017-08-01
Full Text Available Abstract Background Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach, such as guarantee-time bias, resulting in an overestimation of the drug effect. To overcome such limitations, alternative approaches, such as the time-dependent Cox model and landmark methods, have been proposed. This study aimed to compare the performance of three methods: Cox regression, the time-dependent Cox model and the landmark method with different landmark times, in order to address the problem of guarantee-time bias. Methods Through statistical modeling and simulation studies, the performance of the above three methods was assessed in terms of type I error, bias, power, and mean squared error (MSE). In addition, the three statistical approaches were applied to a real data example from the Korean National Health Insurance Database. The effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error, but the type I error rates were similar. The results from the real-data example showed the same patterns as the simulation findings. Conclusions While both the time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
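The mechanism of guarantee-time (immortal-time) bias is easy to demonstrate by simulation: even when the drug has no effect at all, subjects classified as "ever treated" appear to survive longer, because they had to survive long enough to begin treatment. The exponential rates below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000
death = rng.exponential(5.0, n)      # survival time (years); no drug effect
rx_start = rng.exponential(3.0, n)   # time at which treatment would begin
treated = rx_start < death           # only survivors get labelled "treated"

print("mean survival, ever-treated:", death[treated].mean())
print("mean survival, never-treated:", death[~treated].mean())
# A time-dependent analysis removes the bias: person-time before rx_start is
# counted as unexposed even for subjects who are treated later.
```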
CFD simulation and statistical analysis of moisture transfer into an electronic enclosure
DEFF Research Database (Denmark)
Shojaee Nasirabadi, Parizad; Jabbaribehnam, Mirmasoud; Hattel, Jesper Henri
2017-01-01
CFD model for the isothermal case. The model is then combined with a two level factorial design to identify the significant factors as well as the potential interactions using the numerical simulation results. In the second part of this study, a non-isothermal case is studied, in which the enclosure
Energy Technology Data Exchange (ETDEWEB)
A.G. Crook Company
1993-04-01
This report was prepared by the A.G. Crook Company, under contract to Bonneville Power Administration, and provides statistics of seasonal volumes and streamflow for 28 selected sites in the Columbia River Basin.
Statistics of Deep Convection in the Congo Basin Derived From High-Resolution Simulations.
White, B.; Stier, P.; Kipling, Z.; Gryspeerdt, E.; Taylor, S.
2016-12-01
Convection transports moisture, momentum, heat and aerosols through the troposphere, and so the temporal variability of convection is a major driver of global weather and climate. The Congo basin is home to some of the most intense convective activity on the planet and is under strong seasonal influence of biomass burning aerosol. However, deep convection in the Congo basin remains understudied compared to other regions of tropical storm systems, especially when compared to the neighbouring, relatively well-understood West African climate system. We use the WRF model to perform a high-resolution, cloud-system-resolving simulation to investigate convective storm systems in the Congo. Our setup pushes the boundaries of current computational resources, using a 1 km grid length over a domain covering millions of square kilometres and for a time period of one month. This allows us to draw statistical conclusions on the nature of the simulated storm systems. Comparing data from satellite observations and the model enables us to quantify the diurnal variability of deep convection in the Congo basin. This approach allows us to evaluate our simulations despite the lack of in-situ observational data, providing a more comprehensive analysis of the diurnal cycle than has previously been shown. Further, we show that high-resolution convection-permitting simulations performed over near-seasonal timescales can be used in conjunction with satellite observations as an effective tool to evaluate new convection parameterisations.
Discrete event simulation of the ATLAS second level trigger
International Nuclear Information System (INIS)
Vermeulen, J.C.; Dankers, R.J.; Hunt, S.; Harris, F.; Hortnagl, C.; Erasov, A.; Bogaerts, A.
1998-01-01
Discrete event simulation is applied for determining the computing and networking resources needed for the ATLAS second level trigger. This paper discusses the techniques used and some of the results obtained so far for well-defined laboratory configurations and for the full system.
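A discrete event simulation of a trigger farm can be sketched with the simpy library (assumed available), as below; the arrival rate, processing times, farm size and accept fraction are invented placeholders, not ATLAS parameters.

```python
import random
import simpy

random.seed(9)
accepted = 0

def event_source(env, farm):
    while True:
        yield env.timeout(random.expovariate(100.0))   # ~100 Hz event input
        env.process(process_event(env, farm))

def process_event(env, farm):
    global accepted
    with farm.request() as slot:                       # wait for a processor
        yield slot
        yield env.timeout(random.uniform(0.02, 0.05))  # feature extraction (s)
        if random.random() < 0.05:                     # trigger decision
            accepted += 1

env = simpy.Environment()
farm = simpy.Resource(env, capacity=8)                 # processor farm
env.process(event_source(env, farm))
env.run(until=60.0)                                    # one simulated minute
print("events accepted:", accepted)
```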
A Note on Comparing the Power of Test Statistics at Low Significance Levels.
Morris, Nathan; Elston, Robert
2011-01-01
It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
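The point can be checked directly with scipy: compute the power of chi-square tests with 1 and 2 degrees of freedom at a traditional and at a genome-wide alpha level, for the same noncentrality. The noncentrality below is arbitrary; varying it shows how the df = 1 versus df = 2 comparison shifts with alpha.

```python
from scipy import stats

def power(df, nc, alpha):
    """Power of a chi-square test with `df` degrees of freedom and
    noncentrality `nc` at significance level `alpha`."""
    crit = stats.chi2.ppf(1 - alpha, df)   # critical value under H0
    return stats.ncx2.sf(crit, df, nc)     # rejection probability under H1

for alpha in (0.05, 5e-8):
    print(f"alpha={alpha:g}: "
          f"df=1 power={power(1, 30.0, alpha):.4f}, "
          f"df=2 power={power(2, 30.0, alpha):.4f}")
```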
Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches
Anagnostopoulou, Christina
2017-07-01
Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days mainly characterize the duration and the intensity of Greek drought. An increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. In addition, the most recent regional circulation models (RCMs) present discrepancies compared to observed precipitation, while they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) for different time steps (3, 6, and 12 months). A statistical downscaling technique based on artificial neural networks is employed for the estimation of SPI over Greece, while this drought index is also estimated using the RCM precipitation for the time period 1961-1990. Overall, it was found that the drought characteristics (intensity, duration, and spatial extent) were well reproduced by the regional climate models for long-term drought indices (SPI12), while the ANN simulations were better for the short-term drought indices (SPI3).
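For reference, the SPI computation itself is compact: fit a gamma distribution to the accumulated precipitation series and map its CDF through the standard normal quantile function. The sketch below omits the zero-precipitation correction used in practice and runs on synthetic data.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index of an accumulated precip series."""
    precip = np.asarray(precip, dtype=float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)  # location fixed at 0
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                           # SPI values

rng = np.random.default_rng(10)
precip3 = rng.gamma(2.0, 30.0, 360)   # e.g. 3-month accumulated totals (mm)
index = spi(precip3)
print("fraction of months with SPI < -1 (drought):", np.mean(index < -1))
```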
Large-eddy simulation in a mixing tee junction: High-order turbulent statistics analysis
International Nuclear Information System (INIS)
Howard, Richard J.A.; Serre, Eric
2015-01-01
Highlights: • Mixing and thermal fluctuations in a junction are studied using large eddy simulation. • Adiabatic and conducting steel wall boundaries are tested. • Wall thermal fluctuations are not the same between the flow and the solid. • Solid thermal fluctuations cannot be predicted from the fluid thermal fluctuations. • High-order turbulent statistics show that the turbulent transport term is important. - Abstract: This study analyses the mixing and thermal fluctuations induced in a mixing tee junction with circular cross-sections when cold water flowing in a pipe is joined by hot water from a branch pipe. This configuration is representative of industrial piping systems in which temperature fluctuations in the fluid may cause thermal fatigue damage on the walls. Implicit large-eddy simulations (LES) are performed for equal inflow rates corresponding to a bulk Reynolds number Re = 39,080. Two different thermal boundary conditions are studied for the pipe walls: an insulating adiabatic boundary and a conducting steel wall boundary. The predicted flow structures show a satisfactory agreement with the literature. The velocity and thermal fields (including high-order statistics) are not affected by the heat transfer with the steel walls. However, predicted thermal fluctuations at the boundary are not the same between the flow and the solid, showing that solid thermal fluctuations cannot be predicted by knowledge of the fluid thermal fluctuations alone. The analysis of high-order turbulent statistics provides a better understanding of the turbulence features. In particular, the budgets of the turbulent kinetic energy and temperature variance allow a comparative analysis of dissipation, production and transport terms. It is found that the turbulent transport term is an important term that acts to balance the production. We therefore use a priori tests to evaluate three different models for the triple correlation
Learning Object Names at Different Hierarchical Levels Using Cross-Situational Statistics.
Chen, Chi-Hsin; Zhang, Yayun; Yu, Chen
2018-05-01
Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input. Copyright © 2017 Cognitive Science Society, Inc.
Detecting rater bias using a person-fit statistic: a Monte Carlo simulation study.
Aubin, André-Sébastien; St-Onge, Christina; Renaud, Jean-Sébastien
2018-04-01
With the Standards voicing concern for the appropriateness of response processes, we need to explore strategies that would allow us to identify inappropriate rater response processes. Although certain statistics can be used to help detect rater bias, their use is complicated by either a lack of data about their actual power to detect rater bias or the difficulty related to their application in the context of health professions education. This exploratory study aimed to establish the worthiness of pursuing the use of lz to detect rater bias. We conducted a Monte Carlo simulation study to investigate the power of a specific detection statistic, that is, the standardized log-likelihood lz person-fit statistic (PFS). Our primary outcome was the detection rate of biased raters, namely raters whom we manipulated into being either stringent (giving lower scores) or lenient (giving higher scores), using the lz statistic while controlling for the number of biased raters in a sample (6 levels) and the rate of bias per rater (6 levels). Overall, stringent raters (M = 0.84, SD = 0.23) were easier to detect than lenient raters (M = 0.31, SD = 0.28). More biased raters were easier to detect than less biased raters (60% bias: M = 0.62, SD = 0.37; 10% bias: M = 0.43, SD = 0.36). The PFS lz seems to offer an interesting potential to identify biased raters. We observed detection rates as high as 90% for stringent raters, for whom we manipulated more than half their checklist. Although we observed very interesting results, we cannot generalize these results to the use of PFS with estimated item/station parameters or real data. Such studies should be conducted to assess the feasibility of using PFS to identify rater bias.
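For dichotomous responses, the lz statistic is straightforward to compute, as sketched below with the standard standardization of the log-likelihood; the model-implied probabilities are synthetic, and in the rater-bias setting of the study they would come from the fitted checklist model.

```python
import numpy as np

def lz(responses, p):
    """Standardized log-likelihood person-fit statistic: large negative
    values flag response patterns that misfit the model."""
    u = np.asarray(responses, dtype=float)
    p = np.asarray(p, dtype=float)
    logit = np.log(p / (1 - p))
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))  # observed log-lik
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))   # its expectation
    v = np.sum(p * (1 - p) * logit ** 2)                  # its variance
    return (l0 - e) / np.sqrt(v)

rng = np.random.default_rng(11)
p = rng.uniform(0.2, 0.9, 40)                    # model-implied probabilities
fitting = (rng.uniform(size=40) < p).astype(int) # model-consistent pattern
aberrant = 1 - fitting                           # deliberately reversed
print("lz fitting:", lz(fitting, p), " lz aberrant:", lz(aberrant, p))
```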
Full counting statistics of level renormalization in electron transport through double quantum dots
International Nuclear Information System (INIS)
Luo Junyan; Shen Yu; Cen Gang; He Xiaoling; Wang Changrong; Jiao Hujun
2011-01-01
We examine the full counting statistics of electron transport through double quantum dots coupled in series, with particular attention being paid to the unique features originating from level renormalization. It is clearly illustrated that the energy renormalization gives rise to a dynamic charge blockade mechanism, which eventually results in super-Poissonian noise. Coupling of the double dots to an external heat bath leads to dephasing and relaxation mechanisms, which are demonstrated to suppress the noise in a unique way.
DEFF Research Database (Denmark)
Pomogaev, Vladimir; Pomogaeva, Anna; Avramov, Pavel
2011-01-01
Three polycyclic organic molecules in various solvents were theoretically investigated, with a focus on thermodynamical aspects, using the recently developed statistical quantum mechanical/classical molecular dynamics method for simulating electronic-vibrational spectra. The absorption bands of estradiol...
Using statistical sensitivities for adaptation of a best-estimate thermo-hydraulic simulation model
International Nuclear Information System (INIS)
Liu, X.J.; Kerner, A.; Schaefer, A.
2010-01-01
On-line adaptation of best-estimate simulations of NPP behaviour to time-dependent measurement data can be used to ensure that simulations performed in parallel to plant operation develop synchronously with the real plant behaviour, even over extended periods of time. This opens a range of applications, including operator support in non-standard situations, improving diagnostics and validation of measurements in real plants or experimental facilities. A number of adaptation methods have been proposed and successfully applied to control problems. However, these methods are difficult to apply to best-estimate thermal-hydraulic codes, such as TRACE and ATHLET, with their large nonlinear differential equation systems and sophisticated time integration techniques. This paper presents techniques that use statistical sensitivity measures to overcome those problems by reducing the number of parameters subject to adaptation. It describes how to identify the most significant parameters for adaptation and how this information can be used by combining: decomposition techniques, splitting the system into a small set of component parts with clearly defined interfaces where boundary conditions can be derived from the measurement data; filtering techniques, to ensure that the time frame for adaptation is meaningful; and numerical sensitivities, to find minimal error conditions. The suitability of combining those techniques is shown by application to an adaptive simulation of the PKL experiment.
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
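The additivity underlying meta-score-statistics can be sketched in a few lines: per-study scores and Fisher informations simply sum, giving a 1-df test and a one-step effect estimate. The numbers are hypothetical, and the paper's improved statistics add corrections (for unbalanced case-control ratios and population structure) that this sketch omits.

```python
from scipy import stats

# Hypothetical per-study score statistics U_k and informations V_k for one
# variant; in a real meta-analysis these are shared instead of raw data.
studies = [(12.3, 40.0), (5.1, 22.5), (9.8, 31.0)]

U = sum(u for u, _ in studies)   # scores add across studies
V = sum(v for _, v in studies)   # informations add across studies

chi2 = U ** 2 / V                # 1-df score test of no association
print("meta score chi2:", chi2, " p =", stats.chi2.sf(chi2, df=1))
print("one-step effect estimate U/V:", U / V)
```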
Koparan, Timur; Güven, Bülent
2015-07-01
The point of this study is to define the effect of a project-based learning approach on 8th grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before the application and once after the application. All raw scores were converted into linear measures using the Winsteps 3.72 modelling program, which performs Rasch analysis; t-tests and an ANCOVA were then carried out on the linear measures. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the application were shown through the obtained person-item maps.
Digital Repository Service at National Institute of Oceanography (India)
Sindhu, B.; Unnikrishnan, A.S.
The simulated total sea level and the surge component were obtained for each event. The simulated peak levels showed good agreement with the observations available at a few stations. The annual maxima of sea levels, extracted from the simulations, were fitted...
Digital Repository Service at National Institute of Oceanography (India)
Pankajakshan, T.; Shikauchi, A; Sugimori, Y.; Kubota, M.
-Ta and precipitable water. The rms errors of the SSMI-Ta in this case are found to be reduced to 1.0°C. (Source: "A Statistical Method to Get Surface Level Air-Temperature from Satellite Observations of Precipitable Water", Vol. 49, pp. 551-558, 1993.)
Using the Δ3 statistic to test for missed levels in mixed sequence neutron resonance data
International Nuclear Information System (INIS)
Mulhall, Declan
2009-01-01
The Δ3(L) statistic is studied as a tool to detect missing levels in neutron resonance data where two sequences are present. These systems are problematic because there is no level repulsion, and the resonances can be too close to resolve. Δ3(L) is a measure of the fluctuations in the number of levels in an interval of length L on the energy axis. The method is tested on ensembles of mixed Gaussian orthogonal ensemble spectra, with a known fraction of levels (x%) randomly depleted, and can accurately return x. The accuracy of the method as a function of spectrum size is established. The method is used on neutron resonance data for 11 isotopes with either s-wave neutrons on odd-A isotopes, or p-wave neutrons on even-A isotopes. The method compares favorably with a maximum likelihood method applied to the level spacing distribution. Nuclear data ensembles were made from 20 isotopes in total, and their Δ3(L) statistics are discussed in the context of random matrix theory.
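For readers unfamiliar with the statistic, a minimal numerical sketch of the Dyson-Mehta Δ3(L) for an unfolded spectrum (unit mean spacing) follows; the spectrum, grid sizes, and interval sampling below are illustrative choices, not the paper's procedure.

```python
# Hedged sketch: Delta_3(L) computed numerically as the least-squares deviation
# of the staircase function N(E) from a straight line, averaged over intervals.
import numpy as np

def delta3(levels, L, n_intervals=200, n_grid=400):
    """Average Delta_3 over intervals [x, x+L] of an unfolded spectrum."""
    rng = np.random.default_rng(0)
    starts = rng.uniform(levels[0], levels[-1] - L, size=n_intervals)
    vals = []
    for x in starts:
        e = np.linspace(x, x + L, n_grid)
        n = np.searchsorted(levels, e)      # staircase function N(E)
        a, b = np.polyfit(e, n, 1)          # best-fit line a*E + b
        vals.append(np.mean((n - (a * e + b)) ** 2))
    return np.mean(vals)

# Poisson (uncorrelated) spectrum: Delta_3(L) ~ L/15
poisson = np.cumsum(np.random.default_rng(1).exponential(1.0, 5000))
L = 20.0
print(f"Delta_3({L}) Poisson: {delta3(poisson, L):.2f}  (L/15 = {L/15:.2f})")
# GOE spectra instead follow the much slower logarithmic growth
print(f"GOE prediction at L={L}: {(np.log(L) - 0.0687) / np.pi**2:.3f}")
```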
Simulation and Validation of the ATLAS Level-1 Topological Trigger
Bakker, Pepijn Johannes; The ATLAS collaboration
2017-01-01
The ATLAS experiment has recently commissioned a new component of its first-level trigger: the L1 topological trigger. This system, using state-of-the-art FPGA processors, makes it possible to reject events by applying topological requirements, such as kinematic criteria involving clusters, jets, muons, and total transverse energy. The data recorded using the L1 topological trigger demonstrate that this innovative trigger strategy allows for an improved rejection rate without efficiency loss. This improvement has been shown for several relevant physics processes leading to low-$p_T$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, an accurate simulation of the L1 topological trigger is used to validate and optimize the performance of this trigger. To reach such accuracy, the simulation must take into account the fact that the firmware algorithms are executed on an FPGA architecture, while the simulation runs on a floating-point architecture.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. This local objective function is then integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Image segmentation and bias field estimation are therefore simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?
Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle
2018-03-21
The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual mCPR estimates.
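The article's exact error decomposition is not reproduced here, but Theil's classic MSE decomposition offers one concrete way to separate level bias, slope (trend) bias, and variance; the sketch below applies it to invented tracking data.

```python
# Hedged sketch: decomposing mean square tracking error into level-bias,
# slope-bias, and variance components via Theil's MSE decomposition
# (an assumption; the article's formulas may differ). Data are made up.
import numpy as np

gold = np.array([18.0, 19.1, 20.3, 21.2, 22.5, 23.1])  # "gold standard" mCPR, %
est  = np.array([20.5, 21.8, 22.6, 24.0, 25.4, 26.3])  # service-statistics estimate

err = est - gold
mse = np.mean(err ** 2)
r = np.corrcoef(est, gold)[0, 1]
s_e, s_g = est.std(), gold.std()                 # population SDs (ddof=0)

level_bias = (est.mean() - gold.mean()) ** 2     # systematic offset
slope_bias = (s_e - r * s_g) ** 2                # trend mismatch
variance   = (1 - r ** 2) * s_g ** 2             # unsystematic scatter

assert np.isclose(mse, level_bias + slope_bias + variance)
for name, part in [("level bias", level_bias),
                   ("slope bias", slope_bias),
                   ("variance", variance)]:
    print(f"{name:>10}: {part / mse:6.1%} of MSE")
```

With these invented numbers the level bias dominates the MSE, mirroring the article's finding that level bias was the largest contributor to tracking error.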
Vlahos, Loukas; Archontis, Vasilis; Isliker, Heinz
We consider 3D nonlinear MHD simulations of an emerging flux tube, from the convection zone into the corona, focusing on the coronal part of the simulations. We first analyze the statistical nature and spatial structure of the electric field, calculating histograms and making use of iso-contour visualizations. Test-particle simulations are then performed for electrons, in order to study heating and acceleration phenomena, as well as to determine HXR emission. This study is done by comparatively exploring quiet, turbulent explosive, and mildly explosive phases of the MHD simulations. Also, the importance of collisional and relativistic effects is assessed, and the role of the integration time is investigated. A particular aim of this project is to verify the quasi-linear assumptions made in standard transport models, and to identify possible transport effects that cannot be captured by them. In order to relate our results to Fermi acceleration and Fokker-Planck modeling, we determine the standard transport coefficients. We find that the electric field of the MHD simulations must be downscaled in order to prevent an unphysically high degree of acceleration, and the value chosen for the scale factor strongly affects the results. In different MHD time instances we find heating to take place, and acceleration that depends on the level of MHD turbulence. Acceleration appears to be a transient phenomenon, there is a kind of saturation effect, and the parallel dynamics clearly dominate the energetics. The HXR spectra are not yet really compatible with observations; we have, though, to further explore the scaling of the electric field and the integration times used.
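As one concrete example of what "determining the standard transport coefficients" can look like in practice, the sketch below estimates drift and diffusion coefficients in energy from an ensemble of test-particle histories; the histories here are synthetic random walks, not output of the MHD-driven fields of the actual study.

```python
# Hedged sketch: Fokker-Planck drift and diffusion coefficients estimated from
# an ensemble of particle energy histories (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 5000, 200, 0.1
A_true, D_true = 0.05, 0.02                   # drift and diffusion in energy

# synthetic energy random walks: dE = A*dt + sqrt(2*D*dt)*xi
E = np.zeros((n_particles, n_steps + 1))
E[:, 0] = 1.0
for k in range(n_steps):
    E[:, k + 1] = E[:, k] + A_true * dt \
                + np.sqrt(2 * D_true * dt) * rng.standard_normal(n_particles)

# estimate coefficients from increments over a lag tau = m*dt
m = 10
dE = E[:, m:] - E[:, :-m]
tau = m * dt
A_est = dE.mean() / tau                       # drift:     <dE>/tau
D_est = dE.var() / (2 * tau)                  # diffusion: Var(dE)/(2*tau)
print(f"drift:     true {A_true}, estimated {A_est:.3f}")
print(f"diffusion: true {D_true}, estimated {D_est:.4f}")
```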
Modeling and simulation of the agricultural sprayer boom leveling system
Sun, Jian
2011-01-01
According to agricultural precision requirements, the distance from the sprayer nozzles to the crops should be kept between 50 cm and 70 cm, and the sprayer boom needs to be kept parallel to the field during the application process; this guarantees the quality of the chemical droplet distribution on the crops. In this paper we design a sprayer boom leveling system for agricultural sprayer vehicles that combines a four-rod linkage self-leveling suspension with an electro-hydraulic auto-leveling system. The dynamic analysis shows that the suspension can achieve excellent self-leveling within a comparatively small inclination range. In addition we build a compensation controller for the electro-hydraulic system based on the mathematical model. With simulations we can optimize the performance of this controller to ensure a fast leveling response when the sprayer boom is inclined.
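A toy closed-loop simulation in the spirit of the described electro-hydraulic auto-leveling system is sketched below; the first-order actuator model and the PID gains are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: boom inclination driven back to zero by a PID compensation
# controller acting through a first-order hydraulic lag. All values invented.
import numpy as np

dt, t_end = 0.01, 5.0
tau_h = 0.4                        # hydraulic actuator time constant, s
kp, ki, kd = 6.0, 2.0, 0.5         # PID gains (illustrative)

theta = np.deg2rad(5.0)            # initial boom inclination: 5 degrees
rate = 0.0                         # actuated leveling rate, rad/s
integral, prev_err = 0.0, -theta

for step in range(int(t_end / dt)):
    err = 0.0 - theta                          # want the boom level
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv  # commanded leveling rate
    prev_err = err
    rate += dt * (u - rate) / tau_h            # first-order hydraulic lag
    theta += rate * dt
    if step % 100 == 0:
        print(f"t = {step*dt:4.1f} s   inclination = {np.degrees(theta):6.3f} deg")
```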
Large scale statistics for computational verification of grain growth simulations with experiments
International Nuclear Information System (INIS)
Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.
2002-01-01
It is known that by controlling microstructural development, desirable material properties can be achieved. The main objective of our research is to understand and control interface-dominated material properties, and finally, to verify experimental results with computer simulations. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations based on Electron Backscattered Diffraction (EBSD) measurements. Using the same technique, we obtained 5170-grain data from an aluminum film (120 μm thick) with a columnar grain structure. The experimentally obtained starting microstructure and grain boundary properties are the input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550°C. Characterization of the structures and properties of grain boundary networks (GBN) to produce desirable microstructures is one of the fundamental problems in interface science. There is ongoing research into the development of new experimental and analytical techniques for obtaining and synthesizing information related to GBN. The grain boundary energy and mobility data were characterized by the Electron Backscattered Diffraction (EBSD) technique and Atomic Force Microscopy (AFM) observations (i.e., for ceramic MgO and for the metal Al). Grain boundary energies are extracted from triple junction (TJ) geometry considering the local equilibrium condition at TJs. Relative boundary mobilities were also extracted from TJs through a statistical/multiscale analysis. Additionally, there are recent theoretical developments of grain boundary evolution in microstructures. In this paper, a new technique for three-dimensional grain growth simulations is used to simulate interface migration.
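The extraction of grain boundary energies from triple junction geometry can be illustrated with a minimal sketch: neglecting torque terms, local equilibrium at a TJ reduces to Young's relation, from which relative energies follow directly from measured dihedral angles (the angles below are invented).

```python
# Hedged sketch: relative grain-boundary energies from TJ dihedral angles via
# Young's relation gamma_i / sin(chi_i) = const, where chi_i is the dihedral
# angle opposite boundary i (torque terms neglected).
import numpy as np

def relative_energies(dihedral_deg):
    """Return boundary energies normalized so the first equals 1."""
    chi = np.deg2rad(np.asarray(dihedral_deg))
    assert np.isclose(chi.sum(), 2 * np.pi), "TJ angles must sum to 360 deg"
    gamma = np.sin(chi)            # gamma_i proportional to sin(chi_i)
    return gamma / gamma[0]

# a TJ where boundary 1 faces a 140-degree dihedral angle, etc.
print(relative_energies([140.0, 120.0, 100.0]))
# equal energies recover the classic 120-120-120 junction
print(relative_energies([120.0, 120.0, 120.0]))
```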
Kennedy, R R; Merry, A F
2011-09-01
Anaesthesia involves processing large amounts of information over time. One task of the anaesthetist is to detect substantive changes in physiological variables promptly and reliably. It has previously been demonstrated that a graphical trend display of historical data leads to more rapid detection of such changes. We examined the effect of a graphical indication of the magnitude of Trigg's Tracking Variable, a simple statistically based trend detection algorithm, on the accuracy and latency of the detection of changes in a micro-simulation. Ten anaesthetists each viewed 20 simulations with four variables displayed as the current value with a simple graphical trend display. Values for these variables were generated by a computer model and updated every second; after a period of stability a change occurred to a new random value at least 10 units from baseline. In 50% of the simulations an indication of the rate of change was given by a five-level graphical representation of the value of Trigg's Tracking Variable. Participants were asked to indicate when they thought a change was occurring. Changes were detected 10.9% faster with the trend indicator present (mean 13.1 [SD 3.1] cycles vs 14.6 [SD 3.4] cycles; 95% confidence interval 0.4 to 2.5 cycles; P = 0.013). There was no difference in accuracy of detection (median with trend detection 97% [interquartile range 95 to 100%], without trend detection 100% [98 to 100%]; P = 0.8). We conclude that simple statistical trend detection may speed detection of changes during routine anaesthesia, even when a graphical trend display is present.
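For readers unfamiliar with the algorithm, a minimal sketch of Trigg's Tracking Variable on a synthetic vital-sign stream follows; the smoothing constant and the signal are illustrative, not the study's settings.

```python
# Hedged sketch: Trigg's Tracking Variable, the ratio of the exponentially
# smoothed error to the smoothed absolute error of a simple exponential-
# smoothing forecast. |T| near 1 signals a sustained change.
import numpy as np

def triggs_tracking(x, alpha=0.2):
    forecast = x[0]
    sm_err, sm_abs = 0.0, None
    out = []
    for value in x[1:]:
        err = value - forecast
        sm_err = alpha * err + (1 - alpha) * sm_err
        sm_abs = abs(err) if sm_abs is None \
            else alpha * abs(err) + (1 - alpha) * sm_abs
        out.append(sm_err / max(sm_abs, 1e-12))
        forecast += alpha * err              # exponential-smoothing update
    return np.array(out)

rng = np.random.default_rng(0)
# stable heart rate around 70, stepping to 85 halfway through
signal = np.concatenate([70 + rng.normal(0, 1.5, 60),
                         85 + rng.normal(0, 1.5, 60)])
T = triggs_tracking(signal)
delay = int(np.argmax(np.abs(T[59:]) > 0.7))
print("samples after the step until |T| > 0.7:", delay)
```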
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into it as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
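A stripped-down sketch of simulated-annealing parameter discovery follows; it scores candidates by repeated stochastic simulation of a toy model and omits the paper's sequential hypothesis testing and statistical model checking machinery.

```python
# Hedged sketch: simulated annealing over one parameter of a toy stochastic
# model, with a Monte Carlo estimate of the fit to an observed statistic.
import math
import random

random.seed(0)
OBSERVED_MEAN = 4.2                       # observed summary statistic

def simulate(theta, n=200):
    """Toy stochastic model: noisy observations around theta."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

def cost(theta):
    return abs(simulate(theta) - OBSERVED_MEAN)

theta, c = 0.0, cost(0.0)
temperature = 1.0
for step in range(2000):
    cand = theta + random.gauss(0.0, 0.5)
    cc = cost(cand)
    # accept better moves always, worse moves with Boltzmann probability
    if cc < c or random.random() < math.exp(-(cc - c) / temperature):
        theta, c = cand, cc
    temperature *= 0.998                  # geometric cooling schedule
print(f"estimated parameter: {theta:.2f} (true generating value 4.2)")
```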
Geant4 electromagnetic physics for high statistic simulation of LHC experiments
Allison, J; Bagulya, A; Champion, C; Elles, S; Garay, F; Grichine, V; Howard, A; Incerti, S; Ivanchenko, V; Jacquemier, J; Maire, M; Mantero, A; Nieminen, P; Pandola, L; Santin, G; Sawkey, D; Schalicke, A; Urban, L
2012-01-01
An overview of the current status of electromagnetic (EM) physics of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large-scale production for the LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve accuracy without compromising CPU speed for EM particle transport. New biasing options have been introduced, which are applicable to any EM process. These include algorithms to enhance and suppress processes, force interactions, or split secondary particles. It is shown that the performance of the EM sub-package is improved. We also report extensions of the testing suite allowing high-statistics validation of EM physics, including validation of multiple scattering, bremsstrahlung, and other models. Cross checks between standard and low-energy EM models have been performed using evaluated data libraries and reference benchmark results.
Korenchenko, Anna E.; Vorontsov, Alexander G.; Gelchinski, Boris R.; Sannikov, Grigorii P.
2018-04-01
We discuss the problem of dimer formation during the homogeneous nucleation of atomic metal vapor in an inert gas environment. We simulated nucleation with molecular dynamics and carried out a statistical analysis of double- and triple-atomic collisions as the two routes to long-lived diatomic complex formation. A close pair of atoms with a lifetime greater than the mean time interval between atom-atom collisions is called a long-lived diatomic complex. We found that double- and triple-atomic collisions gave approximately the same probabilities of long-lived diatomic complex formation, but the internal energy of the resulting state was substantially lower in the second case. Some diatomic complexes formed in three-particle collisions are stable enough to serve as a critical nucleus.
A new equation of state based on nuclear statistical equilibrium for core-collapse simulations
Furusawa, Shun; Yamada, Shoichi; Sumiyoshi, Kohsuke; Suzuki, Hideyuki
2012-09-01
We calculate a new equation of state for baryons at sub-nuclear densities for use in core-collapse simulations of massive stars. The formulation is based on the nuclear statistical equilibrium description and the liquid drop approximation of nuclei. The model free energy to minimize is calculated by relativistic mean field theory for nucleons and by the mass formula for nuclei with mass number up to ~1000. We have also taken into account the pasta phase. We find that the free energy and other thermodynamical quantities are not very different from those given in the standard EOSs that adopt the single-nucleus approximation. On the other hand, the average mass is systematically different, which may have an important effect on the rates of electron captures and coherent neutrino scatterings on nuclei in supernova cores.
Simulation of statistical systems with not necessarily real and positive probabilities
International Nuclear Information System (INIS)
Kalkreuter, T.
1991-01-01
A new method to determine expectation values of observables in statistical systems with not necessarily real and positive probabilities is proposed. It is tested in a numerical study of the two-dimensional O(3)-symmetric nonlinear σ-model with Symanzik's one-loop improved lattice action. This model is simulated as a polymer system with field-dependent activities, which can be made positive definite or indefinite by adjusting additive constants of the action. For a system with indefinite activities the new proposal is found to work. It is also verified that local observables are not affected by far-away polymers with indefinite activities when the system has no long-range order. (orig.)
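The general reweighting idea behind such methods can be sketched compactly: sample configurations with the magnitude of the weight and fold the sign into the estimator. The toy weights below are invented, and the sketch is not the paper's polymer algorithm.

```python
# Hedged sketch: expectation values with indefinite weights w_i,
#   <O>_w = <O * sign>_{|w|} / <sign>_{|w|}.
import numpy as np

rng = np.random.default_rng(0)

# toy configurations x with indefinite weight w(x) = (1 - 0.8*x**2) * gauss(x)
x = rng.normal(0.0, 1.0, 200_000)          # sampled from the Gaussian factor
w_extra = 1.0 - 0.8 * x**2                 # extra factor, can be negative
sign, mag = np.sign(w_extra), np.abs(w_extra)

# importance-sample with magnitude |w_extra| via resampling
p = mag / mag.sum()
idx = rng.choice(x.size, size=50_000, p=p)
xs, ss = x[idx], sign[idx]

O = xs**2                                  # observable O(x) = x^2
estimate = np.mean(O * ss) / np.mean(ss)
print(f"<x^2> with sign reweighting: {estimate:.2f}  (exact for this toy: -7)")
print(f"average sign: {np.mean(ss):.3f}  (small => severe sign problem)")
```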
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Energy Technology Data Exchange (ETDEWEB)
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining the internal structure of the 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing the electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower-dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie on a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and to get insight into the nature of bonding. (2) Projecting to a lower-dimensional subspace (≈4-5 dimensions) using PCA or kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to existing descriptors of the electronic structure of molecules. Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to
Leaching behavior of simulated high-level waste glass
International Nuclear Information System (INIS)
Kamizono, Hiroshi
1987-03-01
The author's work on the leaching behavior of simulated high-level waste (HLW) glass is summarized. The subjects described are (1) leach rates at high temperatures, (2) effects of cracks on leach rates, (3) effects of flow rate on leach rates, and (4) an in-situ burial test in natural groundwater. In the final section, the leach rates obtained by the various experiments are summarized and discussed. (author)
Oreopoulos, Lazaros
2004-01-01
The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived by aggregation and subsampling at 5 km of the 1-km-resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5-km subsampling on the mean, standard deviation, and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second-order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect, and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
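The subsampling-error methodology lends itself to a compact numerical illustration: the sketch below compares regional statistics of a synthetic 1-km field with those of its 5-km subsample, and shows the random component of the mean error shrinking once many regions are averaged. The synthetic field is an invented stand-in for real granule data.

```python
# Hedged sketch: effect of taking every 5th pixel on regional mean and
# standard-deviation statistics of a synthetic correlated field.
import numpy as np

rng = np.random.default_rng(0)
n_regions, size = 500, 100                 # 100x100 km "grid boxes" at 1 km

mean_err, std_err = [], []
for _ in range(n_regions):
    # synthetic correlated field: crudely smoothed lognormal noise
    f = np.exp(rng.normal(0, 1, (size, size)))
    f = (f + np.roll(f, 1, 0) + np.roll(f, 1, 1)) / 3.0
    sub = f[::5, ::5]                      # 5-km subsampling of 1-km pixels
    mean_err.append(sub.mean() - f.mean())
    std_err.append(sub.std() - f.std())

mean_err, std_err = np.array(mean_err), np.array(std_err)
print(f"per-region mean error:   {mean_err.mean():+.4f} +/- {mean_err.std():.4f}")
print(f"per-region stddev error: {std_err.mean():+.4f} +/- {std_err.std():.4f}")
print("error of the all-region average mean:",
      f"{abs(mean_err.mean()):.5f} vs typical single region "
      f"{np.abs(mean_err).mean():.4f}")
```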
Statistical properties of the linear σ model used in dynamical simulations of DCC formation
International Nuclear Information System (INIS)
Randrup, J.
1997-01-01
The present work develops a simple approximate framework for initializing and interpreting dynamical simulations with the linear σ model exploring the formation of disoriented chiral condensates in high-energy collisions. By enclosing the system in a rectangular box with periodic boundary conditions, it is possible to decompose the chiral field uniquely into its spatial average (the order parameter) and its fluctuations (the quasiparticles), which can be treated in the Hartree approximation. The quasiparticle modes are then described approximately by Klein-Gordon dispersion relations containing an effective mass depending on both the temperature and the magnitude of the order parameter; their fluctuations are instrumental in shaping the effective potential governing the order parameter, and the emerging statistical description is thermodynamically consistent. The temperature dependence of the statistical distribution of the order parameter is discussed, as is the behavior of the associated effective masses; as the system is cooled, the field fluctuations subside, causing a smooth change from the high-temperature phase, in which chiral symmetry is approximately restored, towards the normal phase. Of practical interest is the fact that the equilibrium field configurations can be sampled in a simple manner, thus providing a convenient means for specifying the initial conditions in dynamical simulations of the nonequilibrium relaxation of the chiral field; in particular, the correlation function is much more realistic than those emerging in previous initialization methods. It is illustrated how such samples remain approximately invariant under propagation by the unapproximated equation of motion over times that are long on the scale of interest, thereby suggesting that the treatment is sufficiently accurate to be of practical utility.
International Nuclear Information System (INIS)
Belianinov, Alex; Ganesh, Panchapakesan; Lin, Wenzhi; Jesse, Stephen; Pan, Minghu; Kalinin, Sergei V.; Sales, Brian C.; Sefat, Athena S.
2014-01-01
Atomic level spatial variability of electronic structure in the Fe-based superconductor FeTe0.55Se0.45 (Tc = 15 K) is explored using current-imaging tunneling spectroscopy. Multivariate statistical analysis of the data differentiates regions of dissimilar electronic behavior that can be identified with the segregation of chalcogen atoms, as well as boundaries between terminations and near-neighbor interactions. Subsequent clustering analysis allows identification of the spatial localization of these dissimilar regions. Similar statistical analysis of modeled calculated densities of states of chemically inhomogeneous FeTe1−xSex structures further confirms that the two types of chalcogens, i.e., Te and Se, can be identified by their electronic signature and differentiated by their local chemical environment. This approach allows detailed chemical discrimination of the scanning tunneling microscopy data, including separation of atomic identities, proximity, and local configuration effects, and can be universally applicable to chemically and electronically inhomogeneous surfaces
Misuse of statistics in the interpretation of data on low-level radiation
International Nuclear Information System (INIS)
Hamilton, L.D.
1982-01-01
Four misuses of statistics in the interpretation of data on low-level radiation are reviewed: (1) post-hoc analysis and aggregation of data leading to faulty conclusions in the reanalysis of genetic effects of the atomic bomb, and premature conclusions on the Portsmouth Naval Shipyard data; (2) inappropriate adjustment for age and ignoring differences between urban and rural areas leading to a potentially spurious increase in the incidence of cancer at Rocky Flats; (3) the hazard of summary statistics based on ill-conditioned individual rates leading to a spurious association between childhood leukemia and fallout in Utah; and (4) the danger of prematurely published preliminary work with inadequate consideration of epidemiological problems - censored data - leading to inappropriate conclusions, needless alarm at the Portsmouth Naval Shipyard, and diversion of scarce research funds
Process simulation and statistical approaches for validating waste form qualification models
International Nuclear Information System (INIS)
Kuhn, W.L.; Toland, M.R.; Pulsipher, B.A.
1989-05-01
This report describes recent progress toward one of the principal objectives of the Nuclear Waste Treatment Program (NWTP) at the Pacific Northwest Laboratory (PNL): to establish relationships between vitrification process control and glass product quality. During testing of a vitrification system, it is important to show that departures affecting product quality can be detected with sufficient sensitivity through process measurements to prevent an unacceptable canister from being produced. Meeting this goal is a practical definition of a successful sampling, data analysis, and process control strategy. A simulation model has been developed and preliminarily tested by applying it to approximate operation of the West Valley Demonstration Project (WVDP) vitrification system at West Valley, New York. Multivariate statistical techniques have been identified and described that can be applied to analyze large sets of process measurements. Information on components, tanks, and time is then combined into a single statistic through which all of the information can be used at once to determine whether the process has shifted away from a normal condition
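One standard way to combine many process measurements into "a single statistic", in the spirit described above (though not necessarily the report's exact construction), is Hotelling's T², sketched here with invented in-control data.

```python
# Hedged sketch: Hotelling's T^2 as a single multivariate control statistic.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p, n_ref = 4, 200                          # 4 measured glass components

# reference (in-control) operation defines the mean and covariance
scales = np.diag([1.0, 0.5, 0.8, 1.2])
ref = rng.normal(0, 1, (n_ref, p)) @ scales
mu, S = ref.mean(axis=0), np.cov(ref, rowvar=False)
S_inv = np.linalg.inv(S)

def t2(x):
    d = x - mu
    return float(d @ S_inv @ d)

in_control = rng.normal(0, 1, p) @ scales
shifted = in_control + np.array([0.0, 2.5, 0.0, 0.0])  # one component drifts

# approximate control limit: chi-square with p dof (large reference sample)
limit = chi2.ppf(0.99, df=p)
for name, x in [("in-control", in_control), ("shifted", shifted)]:
    print(f"{name:>10}: T^2 = {t2(x):6.2f}  limit = {limit:.2f}")
```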
Statistics for long irregular wave run-up on a plane beach from direct numerical simulations
Didenkulova, Ira; Senichev, Dmitry; Dutykh, Denys
2017-04-01
Very often for global and transoceanic events, due to the initial wave transformation, refraction, diffraction and multiple reflections from coastal topography and underwater bathymetry, the tsunami approaches the beach as a very long wave train, which can be considered as an irregular wave field. The prediction of possible flooding and of the properties of the water flow on the coast should in this case be done statistically, taking into account the formation of extreme (rogue) tsunami waves on the beach. When it comes to tsunami run-up on a beach, the most used mathematical model is the nonlinear shallow water model. For a beach of constant slope, the nonlinear shallow water equations have a rigorous analytical solution, which substantially simplifies the mathematical formulation. In Didenkulova et al. (2011) we used this solution to study statistical characteristics of the vertical displacement of the moving shoreline and its horizontal velocity. The influence of wave nonlinearity was approached by considering modifications of the probability distribution of the moving shoreline and its horizontal velocity for waves of different amplitudes. It was shown that wave nonlinearity did not affect the probability distribution of the velocity of the moving shoreline, while the vertical displacement of the moving shoreline was affected substantially, demonstrating the longer duration of coastal floods with an increase in wave nonlinearity. However, this analysis did not take into account the actual transformation of the irregular wave field offshore into oscillations of the moving shoreline on a sloping beach. In this study we cover this gap by means of extensive numerical simulations. The modeling is performed in the framework of the nonlinear shallow water equations, which are solved using a modern shock-capturing finite volume method. Although the shallow water model does not pursue wave breaking and bore formation in a general sense (including the water surface
Molecular-Level Simulations of the Turbulent Taylor-Green Flow
Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; Plimpton, S. J.; Torczynski, J. R.; Papadakis, G.
2017-11-01
The Direct Simulation Monte Carlo (DSMC) method, a statistical, molecular-level technique that provides accurate solutions to the Boltzmann equation, is applied to the turbulent Taylor-Green vortex flow. The goal of this work is to investigate whether DSMC can accurately simulate energy decay in a turbulent flow. If so, then simulating turbulent flows at the molecular level can provide new insights because the energy decay can be examined in detail from molecular to macroscopic length scales, thereby directly linking molecular relaxation processes to macroscopic transport processes. The DSMC simulations are performed on half a million cores of Sequoia, the 17-Pflop platform at Lawrence Livermore National Laboratory, and the kinetic-energy dissipation rate and the energy spectrum are computed directly from the molecular velocities. The DSMC simulations are found to reproduce the Kolmogorov -5/3 law and to agree with corresponding Navier-Stokes simulations obtained using a spectral method.
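The energy-spectrum diagnostic mentioned above can be illustrated with a short sketch that shell-averages the spectral kinetic energy of a triply periodic velocity field; random data stand in for molecular or spectral-method output, so no -5/3 range should be expected here.

```python
# Hedged sketch: shell-averaged kinetic-energy spectrum E(k) of a periodic
# velocity field, the diagnostic used to check for a Kolmogorov -5/3 range.
import numpy as np

def energy_spectrum(u, v, w):
    """E(k) for a triply periodic box of N^3 points (shells k < N/2)."""
    n = u.shape[0]
    e = np.zeros(n // 2)
    # kinetic energy density in Fourier space, 0.5*|u_hat|^2 over components
    ek = sum(0.5 * np.abs(np.fft.fftn(c) / c.size) ** 2 for c in (u, v, w))
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    for shell in range(n // 2):
        e[shell] = ek[kmag == shell].sum()        # bin energy into shells
    return e

rng = np.random.default_rng(0)
n = 64
u, v, w = (rng.normal(size=(n, n, n)) for _ in range(3))
E = energy_spectrum(u, v, w)
print("total KE (physical) :", 0.5 * np.mean(u**2 + v**2 + w**2))
# spectral sum is slightly lower: corner modes with k >= n/2 are not binned
print("total KE (spectral) :", E.sum())
```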
Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona
2012-01-01
Statistical process control is the application of statistical methods to the measurement and analysis of variation in a process. Various regulatory authorities, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study risk assessment, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distributions, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and the stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical
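A minimal sketch of the sigma-capability calculation for a tablet critical quality attribute follows, assuming an approximately normal, statistically stable process; the specification limits and data are illustrative.

```python
# Hedged sketch: process capability indices Cp and Cpk, and the implied
# sigma level, for a tablet critical quality attribute.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(250.0, 2.0, 500)        # tablet weight, mg
lsl, usl = 242.5, 257.5                      # +/- 3% specification limits

mu, sigma = weights.mean(), weights.std(ddof=1)
cp  = (usl - lsl) / (6 * sigma)              # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability (off-center)

print(f"Cp  = {cp:.2f}")
print(f"Cpk = {cpk:.2f}  (~{3 * cpk:.1f}-sigma process)")
```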
Li, Zuqun
2011-01-01
Modeling and simulation play a very important role in mission design. They not only reduce design cost, but also prepare astronauts for their mission tasks. The SISO Smackdown is a simulation event that promotes modeling and simulation in academia. The scenario of this year's Smackdown was to simulate a lunar base supply mission. The mission objective was to transfer Earth supply cargo to a lunar base supply depot and retrieve He-3 to take back to Earth. Federates for this scenario included the environment federate, Earth-Moon transfer vehicle, lunar shuttle, lunar rover, supply depot, mobile ISRU plant, exploratory hopper, and communication satellite. These federates were built by teams from around the world, including teams from MIT, JSC, the University of Alabama in Huntsville, the University of Bordeaux in France, and the University of Genoa in Italy. This paper focuses on the lunar shuttle federate, which was programmed by the USRP intern team at NASA JSC. The shuttle was responsible for providing transportation between lunar orbit and the lunar surface. The lunar shuttle federate was built using the NASA standard simulation package called Trick, and it was extended with HLA functions using TrickHLA. HLA functions of the lunar shuttle federate include sending and receiving interactions, publishing and subscribing attributes, and packing and unpacking fixed record data. The dynamics of the lunar shuttle were modeled with three degrees of freedom, and the state propagation obeyed two-body dynamics. The descending trajectory of the lunar shuttle was designed by first defining a unique descent orbit in 2D space, then defining a unique orbit in 3D space under the assumption of a non-rotating moon. Finally this assumption was removed to define the initial position of the lunar shuttle so that it starts descending one second after it joins the execution. VPN software from SonicWall was used to connect federates with the RTI during testing.
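The two-body propagation described for the shuttle's translational state can be sketched compactly; the RK4 integrator and circular-orbit test below are illustrative and not the Trick implementation.

```python
# Hedged sketch: two-body state propagation with RK4. Units: km, s.
import numpy as np

MU_MOON = 4902.8                                 # km^3/s^2

def deriv(state):
    r, v = state[:3], state[3:]
    a = -MU_MOON * r / np.linalg.norm(r) ** 3    # inverse-square gravity
    return np.concatenate([v, a])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 100-km circular lunar orbit
r0 = 1737.4 + 100.0
v_circ = np.sqrt(MU_MOON / r0)
state = np.array([r0, 0, 0, 0, v_circ, 0], dtype=float)

dt = 1.0
period = 2 * np.pi * np.sqrt(r0**3 / MU_MOON)
for _ in range(int(period / dt)):
    state = rk4_step(state, dt)
print(f"radius after one orbit: {np.linalg.norm(state[:3]):.2f} km "
      f"(start {r0:.2f})")
```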
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
Directory of Open Access Journals (Sweden)
Andreas Schneck
2017-11-01
Full Text Available Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods Four tests on publication bias, Egger’s test (FAT, p-uniform, the test of excess significance (TES, as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100% were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5, effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500, and the number of observations for the publication bias tests (K = 100, 1,000 were varied. Results All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking condition. The false positive rates were, with the exception of the 15%- and 20%-caliper test, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The CTs were, however, inferior to the other tests under effect homogeneity and had a decent statistical power only in conditions with 1,000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected as well as under p-hacking the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
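A compact sketch of the FAT (Egger's regression) under one-sided file-drawer selection follows; the selection rule, study sizes, and publication probability are invented for illustration and are simpler than the simulation design described above.

```python
# Hedged sketch: Egger's regression (FAT) on simulated meta-analytic data with
# a one-sided file drawer. True effect is zero; positive "significant" results
# (z > 1.64) are always published, the rest survive with probability 0.1.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
studies = []
while len(studies) < 300:
    n = rng.integers(30, 500)                 # study size
    se = 1.0 / np.sqrt(n)                     # standard error of the estimate
    est = rng.normal(0.0, se)                 # true effect is zero
    if est / se > 1.64 or rng.random() < 0.1:
        studies.append((est, se))

est, se = np.array(studies).T
z, precision = est / se, 1.0 / se
# FAT: regress z-scores on precision; a nonzero intercept flags asymmetry
fit = sm.OLS(z, sm.add_constant(precision)).fit()
print(f"FAT intercept: {fit.params[0]:.2f}  (p = {fit.pvalues[0]:.3g})")
```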
Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani
2010-01-01
This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
Non-statistically populated autoionizing levels of Li-like carbon: Hidden-crossings
International Nuclear Information System (INIS)
Deveney, E.F.; Krause, H.F.; Jones, N.L.
1995-01-01
The intensities of the Auger-electron lines from autoionizing (AI) states of Li-like (1s2s2l) configurations excited in ion-atom collisions vary as functions of collision parameters such as the collision velocity. A statistical population of the three-electron levels is at best incomplete and underscores the intricate dynamical development of the electronic states. The authors compare several experimental studies to calculations using 'hidden-crossing' techniques to explore some of the details of these Auger-electron intensity variations. The investigations show promising results, suggesting that Auger-electron intensity variations can be used to probe collision dynamics
The statistical interpretations of counting data from measurements of low-level radioactivity
International Nuclear Information System (INIS)
Donn, J.J.; Wolke, R.L.
1977-01-01
The statistical model appropriate to measurements of low-level or background-dominated radioactivity is examined, and the derived relationships are applied to two practical problems involving hypothesis testing: 'Does the sample exhibit a net activity above background?' and 'Is the activity of the sample below some preselected limit?'. In each of these cases, the appropriate decision rule is formulated, procedures are developed for estimating the preset count necessary to achieve a desired probability of detection, and a specific sequence of operations is provided for the worker in the field. (author)
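The first decision problem has a classic solution in Currie's critical level; the sketch below applies it for paired sample/background counts of equal duration (the counts are invented, and the paper's exact formulation may differ).

```python
# Hedged sketch: Currie's decision rule for net activity above background,
# for a paired blank with equal counting times.
import numpy as np

def critical_level(b_counts, k=1.645):
    """Currie critical level L_C = k * sqrt(2B) for a paired blank."""
    return k * np.sqrt(2.0 * b_counts)

def detection_limit(b_counts, k=1.645):
    """A-priori detection limit L_D = k^2 + 2*L_C."""
    return k**2 + 2.0 * critical_level(b_counts, k)

background = 400                      # background counts in counting time T
gross = 440                           # gross counts from the sample in T
net = gross - background

lc = critical_level(background)
print(f"net counts {net}, critical level L_C = {lc:.1f}")
print("decision:", "activity detected" if net > lc else "not detected")
print(f"a-priori detection limit L_D = {detection_limit(background):.1f} counts")
```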
STATCONT: A statistical continuum level determination method for line-rich sources
Sánchez-Monge, Á.; Schilke, P.; Ginsburg, A.; Cesaroni, R.; Schmiedeke, A.
2018-01-01
STATCONT is a python-based tool designed to determine the continuum emission level in spectral data, in particular for sources with line-rich spectra. The tool inspects the intensity distribution of a given spectrum and automatically determines the continuum level by using different statistical approaches. The different methods included in STATCONT are tested against synthetic data. We conclude that the sigma-clipping algorithm provides the most accurate continuum level determination, together with information on the uncertainty in its determination. This uncertainty can be used to correct the final continuum emission level, resulting in the so-called 'corrected sigma-clipping method' (c-SCM). The c-SCM has been tested against more than 750 different synthetic spectra reproducing typical conditions found towards astronomical sources. The continuum level is determined with a discrepancy of less than 1% in 50% of the cases, and less than 5% in 90% of the cases, provided at least 10% of the channels are line-free. The main products of STATCONT are the continuum emission level, together with a conservative value of its uncertainty, and datacubes containing only spectral line emission, i.e., continuum-subtracted datacubes. STATCONT also includes the option to estimate the spectral index when different files covering different frequency ranges are provided.
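The core sigma-clipping idea can be sketched in a few lines; this simplified stand-in is not STATCONT's exact c-SCM implementation, and the synthetic spectrum below is purely illustrative.

```python
# Hedged sketch: iterative sigma-clipping to estimate the continuum level of a
# line-rich spectrum by rejecting channels that deviate from the median.
import numpy as np

def sigma_clip_continuum(spectrum, kappa=2.5, max_iter=20):
    """Return (continuum level, uncertainty) from a 1-D spectrum."""
    data = np.asarray(spectrum, dtype=float)
    mask = np.ones(data.size, dtype=bool)
    for _ in range(max_iter):
        med, std = np.median(data[mask]), data[mask].std()
        new_mask = np.abs(data - med) < kappa * std
        if new_mask.sum() == mask.sum():
            break                           # converged: no channel rejected
        mask = new_mask
    return np.median(data[mask]), data[mask].std()

rng = np.random.default_rng(0)
chan = np.arange(512)
spectrum = 1.0 + rng.normal(0, 0.05, 512)            # continuum at 1.0
for center in (100, 240, 300, 410):                  # add emission lines
    spectrum += 2.0 * np.exp(-0.5 * ((chan - center) / 3.0) ** 2)

level, err = sigma_clip_continuum(spectrum)
print(f"continuum = {level:.3f} +/- {err:.3f} (true 1.000)")
```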
A Statistical Study of Serum Cholesterol Level by Gender and Race.
Tharu, Bhikhari Prasad; Tsokos, Chris P
2017-07-25
Cholesterol level (CL) is a growing health concern, since it is considered one of the causes of heart disease. A study of cholesterol levels can provide insight into their nature and characteristics. This was a cross-sectional study. The National Health and Nutrition Examination Survey (NHANES) II was conducted on a probability sample of approximately 28,000 persons in the USA, with cholesterol levels obtained from laboratory results. Samples were selected so as to include certain population groups thought to be at high risk of malnutrition. The study included 11,864 persons for CL cases, with 9,602 males and 2,262 females, and with races recorded as white, black, and other. Non-parametric statistical tests and goodness-of-fit tests were used to identify probability distributions. The study concludes that cholesterol level exhibits significant racial and gender differences in terms of probability distributions. The study has concluded that white people are at relatively higher risk than black people of having borderline-risk and high-risk cholesterol. The study indicates that black males normally have higher cholesterol. Females have lower variation in cholesterol than males. Gender and racial discrepancies exist in cholesterol, which has been identified as following lognormal and gamma probability distributions. White individuals seem to be at a higher risk of having high-risk cholesterol levels than blacks. Females tend to have higher variation in cholesterol level than males.
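The distribution-identification step can be illustrated with a short sketch fitting lognormal, gamma, and normal candidates to synthetic cholesterol-like data and ranking them by a Kolmogorov-Smirnov statistic; note that KS p-values are only approximate when the parameters are fitted from the same data, and the sample below is invented, not NHANES.

```python
# Hedged sketch: identifying a probability distribution by fitting candidates
# and comparing goodness of fit (KS statistic; p-values approximate here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
chol = rng.lognormal(mean=np.log(200), sigma=0.2, size=2000)  # mg/dL surrogate

for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma),
                   ("normal", stats.norm)]:
    params = dist.fit(chol)
    ks = stats.kstest(chol, dist.cdf, args=params)
    print(f"{name:>9}: KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")
```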
Powder technological vitrification of simulated high-level waste
International Nuclear Information System (INIS)
Gahlert, S.
1988-03-01
High-level waste simulant from the reprocessing of light water reactor and fast breeder fuel was vitrified by powder technology. After denitration with formaldehyde, the simulated HLW is mixed with glass frit and simultaneously dried in an oil-heated mixer. After 'in-can calcination' for at least 24 hours at 850 or 950 K (depending on the type of waste and glass), the mixture is hot-pressed in-can for several hours at 920 or 1020 K, respectively, at pressures between 0.4 and 1.0 MPa. The technology has been demonstrated inactively up to diameters of 30 cm. Compared with common borosilicate glasses, leach resistance is significantly enhanced by utilizing glasses with higher silicon and aluminium content and lower sodium content. (orig.)
Naik, Ganesh R; Kumar, Dinesh K
2011-01-01
The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and the motor unit action potential (MUAP) vary with the position of the electrode and with changes in contraction level. This research deals with evaluating the non-Gaussianity of the surface electromyogram (sEMG) signal using higher-order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDF) of sEMG signals were non-Gaussian. For lower MVCs (below 30% of MVC), the PDF tends towards a Gaussian process. These measures were verified by computing kurtosis values for different MVCs.
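A minimal sketch of the kurtosis check follows; the Gaussian and Laplacian surrogates merely mimic the reported low- and high-contraction behaviour and are not recorded sEMG.

```python
# Hedged sketch: excess kurtosis as a higher-order-statistics check of
# non-Gaussianity (0 for a Gaussian process, positive for heavier tails).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
fs, seconds = 1024, 5
n = fs * seconds

# low-MVC surrogate: near-Gaussian noise; high-MVC surrogate: Laplacian-like
low_mvc = rng.normal(0, 1, n)
high_mvc = rng.laplace(0, 1, n)

for name, sig in [("low %MVC", low_mvc), ("high %MVC", high_mvc)]:
    k = kurtosis(sig, fisher=True)        # excess kurtosis
    print(f"{name}: excess kurtosis = {k:+.2f}")
```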
Usage of link-level performance indicators for HSDPA network-level simulations in E-UMTS
Brouwer, Frank; de Bruin, I.C.C.; Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Correia, Américo
2004-01-01
The paper describes the integration of HSDPA (high-speed downlink packet access) link-level simulation results into network-level simulations for enhanced UMTS. The link-level simulations model all physical layer features specified in the 3GPP standards. These include: generation of transport blocks;
Mukhadiyev, Nurzhan
2017-05-01
Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and uncertain field for researchers. Direct numerical simulation (DNS) of a turbulent flame is a superior tool for unraveling detailed information that is not accessible to even the most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and the chemical complexity of the problems continue to increase. As a result, there is a strong demand for methods that reduce computational cost, as well as for acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was the development of the KAUST Adaptive Reacting Flow Solver (KARFS). KARFS is a high-order compressible reacting flow solver using detailed chemical kinetics mechanisms, capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPUs and GPUs. The second part of this work concerned numerical tools for direct numerical simulations of planar premixed flames, namely linear turbulence forcing and dynamic inlet control. Previous DNS of premixed turbulent flames injected velocity fluctuations at an inlet. Turbulence injected at the inlet decayed significantly before reaching the flame, which created a necessity to inject stronger fluctuations than needed. A solution to this issue was to maintain turbulence strength on the way to the flame using turbulence forcing. Therefore, linear turbulence forcing was implemented in KARFS to enhance turbulence intensity. Linear turbulence forcing developed previously by other groups was corrected with a net-added-momentum removal mechanism to prevent mean
ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION
Directory of Open Access Journals (Sweden)
C. Li
2012-07-01
Full Text Available POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have systematic error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute INS for MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
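The Monte Carlo evaluation of the third vanishing point can be sketched under an assumed intrinsic calibration: sample VX and VY with pixel noise, form the corresponding 3-D directions, and take their cross product as the third, orthogonal direction. The intrinsics, geometry, and noise level below are invented, and the paper's triangle-based formulation is replaced here by this closely related orthogonality construction.

```python
# Hedged sketch: Monte Carlo propagation of vanishing-point uncertainty to the
# third vanishing point VZ, assuming known camera intrinsics K.
import numpy as np

rng = np.random.default_rng(0)
f, cx, cy = 1500.0, 960.0, 540.0                 # assumed intrinsics (pixels)
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
K_inv = np.linalg.inv(K)

vx_mean = np.array([2960.0, 665.0])              # detected VX (pixels)
vy_mean = np.array([-172.0, 634.0])              # detected VY (pixels)
sigma = 15.0                                     # pixel noise on VX and VY

vz_samples = []
for _ in range(5000):
    vx = np.append(vx_mean + rng.normal(0, sigma, 2), 1.0)
    vy = np.append(vy_mean + rng.normal(0, sigma, 2), 1.0)
    dx = K_inv @ vx                              # 3-D direction of X lines
    dy = K_inv @ vy                              # 3-D direction of Y lines
    dz = np.cross(dx, dy)                        # orthogonal third direction
    vz_h = K @ dz                                # back to homogeneous pixels
    vz_samples.append(vz_h[:2] / vz_h[2])

vz = np.array(vz_samples)
print("VZ mean (pixels):", vz.mean(axis=0).round(1))
print("VZ covariance (pixels^2):\n", np.cov(vz, rowvar=False).round(1))
```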
Directory of Open Access Journals (Sweden)
A.-S. Høyer
2017-12-01
Full Text Available Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and the optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments), which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels of size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications of the training image create significant changes in the simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical
Bayesian Statistical Analysis of Historical and Late Holocene Rates of Sea-Level Change
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
A fundamental concern associated with climate change is the rate at which sea levels are rising. Studies of past sea level (particularly beyond the instrumental data range) allow modern sea-level rise to be placed in a more complete context. Considering this, we perform a Bayesian statistical analysis on historical and late Holocene rates of sea-level change. The data that form the input to the statistical model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. The aims are to estimate rates of sea-level rise, to determine when modern rates of sea-level rise began and to observe how these rates have been changing over time. Many of the current methods for doing this use simple linear regression to estimate rates. This is often inappropriate as it is too rigid and it can ignore uncertainties that arise as part of the data collection exercise. This can lead to over confidence in the sea-level trends being characterized. The proposed Bayesian model places a Gaussian process prior on the rate process (i.e. the process that determines how rates of sea-level are changing over time). The likelihood of the observed data is the integral of this process. When dealing with proxy reconstructions, this is set in an errors-in-variables framework so as to take account of age uncertainty. It is also necessary, in this case, for the model to account for glacio-isostatic adjustment, which introduces a covariance between individual age and sea-level observations. This method provides a flexible fit and it allows for the direct estimation of the rate process with full consideration of all sources of uncertainty. Analysis of tide-gauge datasets and proxy reconstructions in this way means that changing rates of sea level can be estimated more comprehensively and accurately than previously possible. The model captures the continuous and dynamic evolution of sea-level change and results show that not only are modern sea levels rising but that the rates
Crane, R. K.
1975-01-01
An experiment was conducted to study the relations between the empirical distribution functions of reflectivity at specified locations above the surface and the corresponding functions at the surface. A bistatic radar system was used to measure continuously the scattering cross section per unit volume at heights of 3 and 6 km. A frequency of 3.7 GHz was used in the tests. It was found that the distribution functions for reflectivity may change significantly with height at heights below the level of the melting layer.
International Nuclear Information System (INIS)
Sadovich, S.; Burnos, V.; Kiyavitskaya, H.; Fokov, Y.; Talamo, A.
2013-01-01
In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulation of these methods is a very time-consuming procedure, and several improvements to neutronic calculations in Monte Carlo programs have been made to address this. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. A special post-processing code then simulates the PNS and statistical methods. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above have been tested on the subcritical assembly Yalina-Thermal, located at the Joint Institute for Power and Nuclear Research SOSNY in Minsk (Belarus). Good agreement between experiment and simulation was shown. (authors)
Directory of Open Access Journals (Sweden)
Land Walker H
2011-01-01
Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling, representing a parametric approach. The SL technique comprised a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
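The epidemiologic interpretation referred to above rests on the fact that exponentiated logistic regression coefficients are odds ratios. Below is a minimal sketch of that baseline LR analysis on simulated case-control data (it does not implement the kernel/perceptron method itself); the effect sizes are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Simulated case-control data: two binary exposures plus their interaction.
rng = np.random.default_rng(1)
n = 5000
x1 = rng.binomial(1, 0.4, n)
x2 = rng.binomial(1, 0.3, n)
true_logit = -1.0 + 0.8 * x1 + 0.5 * x2 + 0.7 * x1 * x2  # assumed true log-odds
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
fit = sm.Logit(y, X).fit(disp=0)

# exp(beta) gives the odds ratios for main effects and the interaction.
for name, b in zip(["const", "x1", "x2", "x1:x2"], fit.params):
    print(f"OR({name}) = {np.exp(b):.2f}")
```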
West Valley high-level nuclear waste glass development: a statistically designed mixture study
Energy Technology Data Exchange (ETDEWEB)
Chick, L.A.; Bowen, W.M.; Lokken, R.O.; Wald, J.W.; Bunnell, L.R.; Strachan, D.M.
1984-10-01
The first full-scale conversion of high-level commercial nuclear wastes to glass in the United States will be conducted at West Valley, New York, by West Valley Nuclear Services Company, Inc. (WVNS), for the US Department of Energy. Pacific Northwest Laboratory (PNL) is supporting WVNS in the design of the glass-making process and the chemical formulation of the glass. This report describes the statistically designed study performed by PNL to develop the glass composition recommended for use at West Valley. The recommended glass contains 28 wt% waste, as limited by process requirements. The waste loading and the silica content (45 wt%) are similar to those in previously developed waste glasses; however, the new formulation contains more calcium and less boron. A series of tests verified that the increased calcium results in improved chemical durability and does not adversely affect the other modeled properties. The optimization study assessed the effects of seven oxide components on glass properties. Over 100 melts combining the seven components into a wide variety of statistically chosen compositions were tested. Viscosity, electrical conductivity, thermal expansion, crystallinity, and chemical durability were measured and empirically modeled as a function of the glass composition. The mathematical models were then used to predict the optimum formulation. This glass was tested and adjusted to arrive at the final composition recommended for use at West Valley. 56 references, 49 figures, 18 tables.
Random matrix theory of the energy-level statistics of disordered systems at the Anderson transition
International Nuclear Information System (INIS)
Canali, C.M.
1995-09-01
We consider a family of random matrix ensembles (RME) invariant under similarity transformations and described by the probability density P(H) ∝ exp[-Tr V(H)]. Dyson's mean field theory (MFT) of the corresponding plasma model of eigenvalues is generalized to the case of a weak confining potential, V(ε) ∼ (A/2) ln²(ε). The eigenvalue statistics derived from MFT are shown to deviate substantially from the classical Wigner-Dyson statistics when the confinement is sufficiently weak. For A = A_c ≈ 0.4 the distribution function of the level spacings (LSDF) coincides in a large energy window with the LSDF of the three-dimensional Anderson model at the metal-insulator transition. For the same A = A_c, the RME eigenvalue-number variance is linear and its slope is equal to 0.32 ± 0.02, which is consistent with the value found for the Anderson model at the critical point. (author). 51 refs, 10 figs
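For readers who want the classical baseline that the abstract compares against, the following sketch estimates the nearest-neighbour level-spacing distribution by direct sampling of Gaussian Orthogonal Ensemble matrices and compares it with the Wigner surmise P(s) = (π/2) s exp(-π s²/4). It does not implement the weakly confined ensemble itself; matrix sizes and binning are arbitrary.

```python
import numpy as np

def goe_spacings(dim=400, samples=50, seed=0):
    """Nearest-neighbour spacings from the Gaussian Orthogonal Ensemble."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(samples):
        a = rng.normal(size=(dim, dim))
        ev = np.linalg.eigvalsh((a + a.T) / 2.0)   # real symmetric (GOE) matrix
        bulk = ev[dim // 4: 3 * dim // 4]          # stay in the spectrum bulk
        s = np.diff(bulk)
        out.append(s / s.mean())                   # crude local unfolding
    return np.concatenate(out)

s = goe_spacings()
edges = np.linspace(0.0, 4.0, 41)
hist, _ = np.histogram(s, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
wigner = (np.pi / 2.0) * mid * np.exp(-np.pi * mid**2 / 4.0)
print(np.abs(hist - wigner).max())   # rough agreement with the surmise
```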
Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software
Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph
1995-06-01
The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in research, teaching, and the development of instructional software. The project is supported by the National Science Foundation (PHY-9014548), and it has received additional support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations under development cover: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical Physics, and Waves and Optics.
The Vienna LTE-advanced simulators up and downlink, link and system level simulation
Rupp, Markus; Taranetz, Martin
2016-01-01
This book introduces the Vienna Simulator Suite for 3rd-Generation Partnership Project (3GPP)-compatible Long Term Evolution-Advanced (LTE-A) simulators and presents applications to demonstrate their uses for describing, designing, and optimizing wireless cellular LTE-A networks. Part One addresses LTE and LTE-A link level techniques. As there has been high demand for the downlink (DL) simulator, it constitutes the central focus of the majority of the chapters. This part of the book reports on relevant highlights, including single-user (SU), multi-user (MU) and single-input-single-output (SISO) as well as multiple-input-multiple-output (MIMO) transmissions. Furthermore, it summarizes the optimal pilot pattern for high-speed communications as well as different synchronization issues. One chapter is devoted to experiments that show how the link level simulator can provide input to a testbed. This section also uses measurements to present and validate fundamental results on orthogonal frequency division multiple...
Protein logic: a statistical mechanical study of signal integration at the single-molecule level.
de Ronde, Wiet; Rein ten Wolde, Pieter; Mugler, Andrew
2012-09-05
Information processing and decision-making are based upon logic operations, which in cellular networks have been well characterized at the level of transcription. In recent years, however, both experimentalists and theorists have begun to appreciate that cellular decision-making can also be performed at the level of a single protein, giving rise to the notion of protein logic. Here we systematically explore protein logic using a well-known statistical mechanical model. As an example system, we focus on receptors that bind either one or two ligands, and their associated dimers. Notably, we find that a single heterodimer can realize any of the 16 possible logic gates, including the XOR gate, by variation of biochemical parameters. We then introduce what is, to our knowledge, a novel idea: that a set of receptors with fixed parameters can encode functionally unique logic gates simply by forming different dimeric combinations. An exhaustive search reveals that the simplest set of receptors (two single-ligand receptors and one double-ligand receptor) can realize several different groups of three unique gates, a result for which the parametric analysis of single receptors and dimers provides a clear interpretation. Both results underscore the surprising functional freedom readily available to cells at the single-protein level. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
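The gate-from-binding idea can be made concrete with a minimal equilibrium statistical mechanics sketch (our simplification, not the paper's exact model). A receptor with two ligand-binding sites has four occupancy states with Boltzmann weights 1, c1·K1, c2·K2 and c1·K1·c2·K2; assigning an activity to each state and thresholding the mean activity yields a truth table. With state activities (0, 0, 0, 1) the receptor computes AND; with (0, 1, 1, 0) it computes XOR.

```python
import numpy as np
from itertools import product

def receptor_activity(c1, c2, K1, K2, act):
    """Mean activity of a two-ligand receptor in thermodynamic equilibrium.

    States: empty, ligand-1 bound, ligand-2 bound, both bound, with
    equilibrium weights 1, c1*K1, c2*K2 and c1*K1*c2*K2 respectively.
    act assigns an activity in [0, 1] to each of the four states.
    """
    w = np.array([1.0, c1 * K1, c2 * K2, c1 * K1 * c2 * K2])
    return float(np.dot(act, w) / w.sum())

LO, HI = 0.01, 100.0   # "low"/"high" ligand concentrations as logic inputs

def truth_table(act, K1=1.0, K2=1.0, threshold=0.5):
    conc = {0: LO, 1: HI}
    return tuple(int(receptor_activity(conc[b1], conc[b2], K1, K2, act) > threshold)
                 for b1, b2 in product((0, 1), repeat=2))

print(truth_table(act=[0, 0, 0, 1]))   # (0, 0, 0, 1): AND
print(truth_table(act=[0, 1, 1, 0]))   # (0, 1, 1, 0): XOR
```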
Energy Technology Data Exchange (ETDEWEB)
Kowal, G [Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Rua do Matao 1226, 05508-900, Sao Paulo (Brazil); Falceta-Goncalves, D A; Lazarian, A, E-mail: kowal@astro.iag.usp.br [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States)
2011-05-15
In recent years, we have experienced increasing interest in the understanding of the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium (ICM)) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations, solving the double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low laws closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We find that subsonic and supersonic CGL-MHD turbulence shows only small differences from the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of magnetic lines. The results indicate that in some cases the instabilities significantly increase the anisotropy of these fluctuations.
Bansal, Ravi; Peterson, Bradley S
2018-06-01
Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings across these multiple hypothesis tests. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construct, rejected the large clusters as false positives at the nominal rate.
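The core numerical experiment, thresholding smooth Gaussian random fields and tabulating suprathreshold cluster sizes, is easy to reproduce in outline. The sketch below is our own minimal version, not the authors' pipeline: white noise is smoothed to a target FWHM, re-standardized, thresholded, and the cluster sizes collected; the heavy upper tail of the resulting size distribution contains the occasional very large clusters the study describes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def cluster_sizes(shape=(128, 128), fwhm_vox=6.0, z_thresh=2.3, seed=0):
    """Threshold one smooth 2D Gaussian random field; return its cluster sizes."""
    rng = np.random.default_rng(seed)
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    field = gaussian_filter(rng.normal(size=shape), sigma)
    field /= field.std()                        # re-standardize to unit variance
    labels, n_clusters = label(field > z_thresh)
    return np.bincount(labels.ravel())[1:]      # sizes of suprathreshold clusters

sizes = np.concatenate([cluster_sizes(seed=i) for i in range(200)])
print(sizes.mean(), sizes.max())   # the largest clusters dwarf the average size
```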
Effects of intra-fraction motion on IMRT dose delivery: statistical analysis and simulation
International Nuclear Information System (INIS)
Bortfeld, Thomas; Jokivarsi, Kimmo; Goitein, Michael; Kung, Jong; Jiang, Steve B.
2002-01-01
There has been some concern that organ motion, especially intra-fraction organ motion due to breathing, can negate the potential merit of intensity-modulated radiotherapy (IMRT). We wanted to find out whether this concern is justified. Specifically, we wanted to investigate whether IMRT delivery techniques with moving parts, e.g., with a multileaf collimator (MLC), are particularly sensitive to organ motion due to the interplay between organ motion and leaf motion. We also wanted to know if, and by how much, fractionation of the treatment can reduce the effects. We performed a statistical analysis and calculated the expected dose values and dose variances for volume elements of organs that move during the delivery of the IMRT. We looked at the overall influence of organ motion during the course of a fractionated treatment. A linear-quadratic model was used to consider fractionation effects. Furthermore, we developed software to simulate motion effects for IMRT delivery with an MLC, with compensators, and with a scanning beam. For the simulation we assumed a sinusoidal motion in an isocentric plane. We found that the expected dose value is independent of the treatment technique. It is just a weighted average over the path of motion of the dose distribution without motion. If the treatment is delivered in several fractions, the distribution of the dose around the expected value is close to a Gaussian. For a typical treatment with 30 fractions, the standard deviation is generally within 1% of the expected value for MLC delivery if one assumes a typical motion amplitude of 5 mm (1 cm peak to peak). The standard deviation is generally even smaller for the compensator but bigger for scanning beam delivery. For the latter it can be reduced through multiple deliveries ('paintings') of the same field. In conclusion, the main effect of organ motion in IMRT is an averaging of the dose distribution without motion over the path of the motion. This is the same as for treatments
Gradient augmented level set method for phase change simulations
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
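As background for readers unfamiliar with level set transport, here is a minimal 1-D sketch of the standard (non-gradient-augmented) method: the interface is the zero crossing of a signed distance function φ advected by φ_t + u φ_x = 0, discretized with first-order upwinding. The GALS strategy of the paper additionally transports the gradient of φ to gain subgrid accuracy; that refinement is not shown here.

```python
import numpy as np

def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of phi_t + u*phi_x = 0 (constant u > 0)."""
    phi = phi.copy()
    for _ in range(steps):
        dphi = np.empty_like(phi)
        dphi[1:] = phi[1:] - phi[:-1]   # backward (upwind) difference
        dphi[0] = 0.0                   # zero-gradient inflow boundary
        phi -= u * dt / dx * dphi
    return phi

# Interface = zero crossing of a signed distance function, initially at x = 0.3.
x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
phi = advect_level_set(x - 0.3, u=1.0, dx=dx, dt=0.4 * dx, steps=500)
print("interface near x =", x[np.argmin(np.abs(phi))])   # expect ~0.8
```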
Simulating snow maps for Norway: description and statistical evaluation of the seNorge snow model
Directory of Open Access Journals (Sweden)
T. M. Saloranta
2012-11-01
Full Text Available Daily maps of snow conditions have been produced in Norway with the seNorge snow model since 2004. The seNorge snow model operates at 1 × 1 km resolution, uses gridded observations of daily temperature and precipitation as its input forcing, and simulates, among other variables, snow water equivalent (SWE), snow depth (SD), and snow bulk density (ρ). In this paper the set of equations contained in the seNorge model code is described and a thorough spatiotemporal statistical evaluation of the model performance from 1957-2011 is made using the two major sets of extensive in situ snow measurements that exist for Norway. The evaluation results show that the seNorge model generally overestimates both SWE and ρ, and that the overestimation of SWE increases with elevation throughout the snow season. However, the R² values for model fit are 0.60 for (log-transformed) SWE and 0.45 for ρ, indicating that after removal of the detected systematic model biases (e.g. by recalibrating the model or expressing snow conditions in relative units) the model performs rather well. The seNorge model provides a relatively simple, not very data-demanding, yet nonetheless process-based method to construct snow maps of high spatiotemporal resolution. It is an especially well-suited alternative for operational snow mapping in regions with rugged topography and large spatiotemporal variability in snow conditions, as is the case in mountainous Norway.
Zimmerman, Whitney Alicia; Johnson, Glenn
2017-01-01
Data were collected from 353 online undergraduate introductory statistics students at the beginning of a semester using the Goals and Outcomes Associated with Learning Statistics (GOALS) instrument and an abbreviated form of the Statistics Anxiety Rating Scale (STARS). Data included a survey of expected grade, expected time commitment, and the…
Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound
Galperin, Michael
2003-05-01
A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0 to 5 based on a qualitative lexicon describing the ultrasound appearance of breast lesions. The purpose of the research is to assess and select one of the automated quantitative LOS scoring methods developed during preliminary studies on reducing benign biopsies. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies on masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers (which will require biopsy anyway). On complex cysts and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods serves CAIS accuracy best. This paper presents current results of applying statistical analysis to automated LOS scoring for breast masses with known biopsy results. The First Order Ranking method was found to yield the most accurate results. The CAIS system (Image Companion, Data Companion software) is developed by Almen Laboratories and was used to achieve these results.
Statistical assessment of the 137Cs levels of the Chernihiv oblast's milk
International Nuclear Information System (INIS)
Lev, T.D.; Zakhutska, O.M.
2004-01-01
The article deals with research directed at overcoming the consequences of the Chornobyl accident on the territory of Ukraine. The use of the log-normal distribution law to evaluate 137Cs milk-contamination results is considered. Critical farms of Chernihiv oblast, where agreement criteria for assessing the primary data on milk contamination were applied, were the object of the study. An algorithm was applied to calculate factual and forecast repetitions of gradations according to the stages of statistical processing of milk samples contaminated with 137Cs. Results of the milk contamination analysis at a later stage (1991-2001) are described by the log-normal distribution law, which can be used to produce forecasts for subsequent years. The maximum repeatability of the gradations of contaminated milk (from 10 to 40 Bq/l) is determined for factual and forecast frequencies of its contamination levels. The results of the study are proposed for use in taking measures directed at diminishing the levels of contamination of agricultural products with 137Cs
A statistical approach to evaluate flood risk at the regional level: an application to Italy
Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea
2016-04-01
Floods are frequent and widespread in Italy, causing multiple fatalities and extensive damage to public and private structures every year. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from an estimation of the underlying flood hazard. However, comprehensive and coherent information on flood-prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basin Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood-prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach to regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the relative impact. Model performance is evaluated by comparing the predicted flood-prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level which can in turn be more readily applied to assess flood economic impacts. Furthermore, in the assumption of an appropriate
Multidirectional testing of one- and two-level ProDisc-L versus simulated fusions.
Panjabi, Manohar; Henderson, Gweneth; Abjornson, Celeste; Yue, James
2007-05-20
An in vitro human cadaveric biomechanical study. To evaluate intervertebral rotation changes due to lumbar ProDisc-L compared with simulated fusion, using follower load and multidirectional testing. Artificial discs, as opposed to the fusions, are thought to decrease the long-term accelerated degeneration at adjacent levels. A biomechanical assessment can be helpful, as the long-term clinical evaluation is impractical. Six fresh human cadaveric lumbar specimens (T12-S1) underwent multidirectional testing in flexion-extension, bilateral lateral bending, and bilateral torsion using the Hybrid test method. First, intact specimen total range of rotation (T12-S1) was determined. Second, using pure moments again, this range of rotation was achieved in each of the 5 constructs: A) ProDisc-L at L5-S1; B) fusion at L5-S1; C) ProDisc-L at L4-L5 and fusion at L5-S1; D) ProDisc-L at L4-L5 and L5-S1; and E) 2-level fusion at L4-L5 to L5-S1. Significant changes in the intervertebral rotations due to each construct were determined at the operated and nonoperated levels using repeated measures single factor ANOVA and Bonferroni statistical tests (P < 0.05). Adjacent-level effects (ALEs) were defined as the percentage changes in intervertebral rotations at the nonoperated levels due to the constructs. One- and 2-level ProDisc-L constructs showed only small ALE in any of the 3 rotations. In contrast, 1- and 2-level fusions showed increased ALE in all 3 directions (average, 7.8% and 35.3%, respectively, for 1 and 2 levels). In the disc plus fusion combination (construct C), the ALEs were similar to the 1-level fusion alone. In general, ProDisc-L preserved physiologic motions at all spinal levels, while the fusion simulations resulted in significant ALE.
Macro Level Simulation Model Of Space Shuttle Processing
2000-01-01
The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.
2014-01-01
Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
Handsheet formation and mechanical testing via fiber-level simulations
Leonard H. Switzer; Daniel J. Klingenberg; C. Tim Scott
2004-01-01
A fiber model and simulation method are employed to investigate the mechanical response of planar fiber networks subjected to elongational deformation. The simulated responses agree qualitatively with numerous experimental observations, suggesting that such simulation methods may be useful for probing the relationships between fiber properties and interactions and the...
The Canopy Graph and Level Statistics for Random Operators on Trees
International Nuclear Information System (INIS)
Aizenman, Michael; Warzel, Simone
2006-01-01
For operators with homogeneous disorder, it is generally expected that there is a relation between the spectral characteristics of a random operator in the infinite setup and the distribution of the energy gaps in its finite-volume versions, in corresponding energy ranges. Whereas pure point spectrum of the infinite operator goes along with Poisson level statistics, it is expected that purely absolutely continuous spectrum would be associated with gap distributions resembling the corresponding random matrix ensemble. We prove that on regular rooted trees, which exhibit both spectral types, the eigenstate point process always has a Poissonian limit. However, we also find that this does not contradict the picture described above if that picture is carefully interpreted, as the relevant limit of finite trees is not the infinite homogeneous tree graph but rather a single-ended 'canopy graph.' For this tree graph, the random Schroedinger operator is proven here to have only pure-point spectrum at any strength of the disorder. For more general single-ended trees it is shown that the spectrum is always singular: pure point, possibly with a singular continuous component, which is proven to occur in some cases
Han, S. M.; Hahm, I.
2015-12-01
We evaluated the background noise level of seismic stations in order to collect high-quality observation data and produce accurate seismic information. In this study, the background noise level was determined using the PSD (Power Spectral Density) method of McNamara and Buland (2004). This method, which uses long-term data, is influenced not only by the innate electronic noise of the sensor and pulse waves arising during stabilization, but also by missing data, and it is controlled at specific frequencies by irregular signals unrelated to site characteristics. It is difficult and inefficient to implement a process that filters out such abnormal signals within an automated system. To solve these problems, we devised a method for extracting the data that are normally distributed within 90 to 99% confidence intervals at each period. The availability of the method was verified using 62 seismic stations with broadband and short-period sensors operated by the KMA (Korea Meteorological Administration). The evaluation standards were the NHNM (New High Noise Model) and NLNM (New Low Noise Model) published by the USGS (United States Geological Survey), which were designed based on the western United States. However, the Korean Peninsula, surrounded by the ocean on three sides, has a complicated geological structure and a high population density. We therefore designed a model appropriate for the Korean Peninsula from the statistically combined results. An important feature is that the secondary-microseism peak appears at a higher frequency band. Acknowledgements: This research was carried out as a part of "Research for the Meteorological and Earthquake Observation Technology and Its Application" supported by the 2015 National Institute of Meteorological Research (NIMR) in the Korea Meteorological Administration.
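The percentile-screening step can be sketched as follows; this is a minimal reconstruction under our own assumptions, not the authors' code. A Welch PSD is computed for each record segment, converted to dB, and at each frequency only the central 90-99% of values are kept before the median is taken as the station's background level.

```python
import numpy as np
from scipy.signal import welch

def station_noise_model(segments, fs, keep=0.95):
    """Percentile-trimmed background noise level from many waveform segments.

    segments : iterable of equal-length 1-D arrays (e.g. hourly records)
    fs       : sampling rate in Hz
    keep     : central fraction of PSD values retained at each frequency
    """
    psds = []
    for x in segments:
        f, p = welch(x - np.mean(x), fs=fs, nperseg=4096)
        psds.append(10.0 * np.log10(p + 1e-30))          # power to dB
    psds = np.array(psds)
    q = 50.0 * (1.0 - keep)
    lo, hi = np.percentile(psds, [q, 100.0 - q], axis=0)
    trimmed = np.where((psds >= lo) & (psds <= hi), psds, np.nan)
    return f, np.nanmedian(trimmed, axis=0)

# Synthetic demonstration: hour-long records at 20 Hz, a few with transients.
rng = np.random.default_rng(0)
segs = [rng.normal(size=20 * 3600) * (10.0 if i % 20 == 0 else 1.0)
        for i in range(50)]
f, model_db = station_noise_model(segs, fs=20.0)
```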
Directory of Open Access Journals (Sweden)
Isabel M. Joao
2017-09-01
Full Text Available Projects thematically focused on simulation and statistical techniques for designing and optimizing chemical processes can be helpful in chemical engineering education in order to meet the needs of engineers. We argue for the relevance of such projects for improving a student-centred approach and boosting higher-order thinking skills. This paper addresses the use of Aspen HYSYS by Portuguese chemical engineering master students to model distillation systems, together with statistical experimental design techniques, in order to optimize the systems, highlighting the value of applying problem-specific knowledge, simulation tools and sound statistical techniques. The paper summarizes the work developed by the students in order to model steady-state processes, model dynamic processes and optimize the distillation systems, emphasizing the benefits of the simulation tools and statistical techniques in helping the students learn how to learn. Students strengthened their domain-specific knowledge and became motivated to rethink and improve chemical processes in their future chemical engineering profession. We discuss the main advantages of the methodology from the students' and teachers' perspectives.
Directory of Open Access Journals (Sweden)
Daniela ŞTEFĂNESCU
2011-08-01
Full Text Available The issues referring to official statistics quality and reliability became the main topics of debate as far as statistical governance in Europe is concerned. The Council welcomed the Commission Communication to the European Parliament and to the Council « Towards robust quality management for European Statistics » (COM 211), appreciating that the approach and the objective of the strategy would confer on the European Statistical System (ESS) the quality management framework for the coordination of consolidated economic policies. The Council pointed out that European Statistical System management was improved in recent years and that progress was noticed in relation to high-quality statistics production and dissemination within the European Union, but it also noticed that, in the context of the recent financial crisis, certain weaknesses were identified, particularly related to the general quality management framework. The „Greece Case” proved that progress was not enough to guarantee the complete independence of national statistical institutes and entailed the need to further consolidate ESS governance. Several undertakings are now in the preparatory stage, in accordance with the Commission Communication; these actions are welcomed, but the question arises: are they sufficient to definitively solve the problem? The paper aims to go further in the attempt to identify a different, innovative (courageous!) way, in the long run, towards an advanced institutional structure of the ESS: setting up a European System of Statistical Institutes, similar to the European System of Central Banks, which would require a change in the Treaty.
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
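To make the data-generating options concrete, here is a stripped-down sketch of one configuration (randomized blocks, zero-inflated negative binomial counts, no repeated measures). It follows the paper's description but is not the Supplementary Material code, and every parameter value is invented.

```python
import numpy as np

def simulate_counts(n_blocks=4, reps=2, mu_ref=12.0, mu_gm=10.0,
                    zero_prob=0.2, nb_k=2.0, block_sd=0.3, seed=0):
    """Zero-inflated negative binomial counts for a randomized block trial.

    Returns one row per plot: (block, variety, count), where variety 0 is
    the conventional comparator and 1 the GM line; nb_k is the negative
    binomial dispersion parameter (NB simulated as a Gamma-Poisson mixture).
    """
    rng = np.random.default_rng(seed)
    rows = []
    for b in range(n_blocks):
        block_effect = np.exp(rng.normal(0.0, block_sd))   # random block effect
        for variety, mu in enumerate([mu_ref, mu_gm]):
            for _ in range(reps):
                lam = rng.gamma(nb_k, mu * block_effect / nb_k)
                y = 0 if rng.random() < zero_prob else rng.poisson(lam)
                rows.append((b, variety, y))
    return np.array(rows)

data = simulate_counts()
print(data[:6])   # columns: block, variety, count
```

Simulating many such datasets and applying the planned difference or equivalence test to each gives the prospective power directly as the rejection frequency.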
Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.
1984-01-01
An evaluation of Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics is addressed in this research. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.
Jandrisevits, Carmen; Marschallinger, Robert
2014-05-01
Quaternary sediments in overdeepened alpine valleys and basins in the Eastern Alps bear substantial groundwater resources. The associated aquifer systems are generally geometrically complex, with highly variable hydraulic properties. 3D geological models provide predictions of both the geometry and the properties of the subsurface required for subsequent modelling of groundwater flow and transport. In hydrology, geostatistical Kriging and Kriging-based conditional simulations are widely used to predict the spatial distribution of hydrofacies. In the course of investigating the shallow aquifer structures in the Zell basin in the Upper Salzach valley (Salzburg, Austria), a benchmark of available geostatistical modelling and simulation methods was performed: traditional variogram-based geostatistical methods, i.e. Indicator Kriging, Sequential Indicator Simulation and Sequential Indicator Co-Simulation, were used as well as Multiple Point Statistics. The ~6 km² investigation area is sampled by 56 drillings with depths of 5 to 50 m; in addition, there are 2 geophysical sections with lengths of 2 km and depths of 50 m. Due to clustered drilling sites, Indicator Kriging models failed to consistently model the spatial variability of hydrofacies. Using classical variogram-based geostatistical simulation (SIS), equally probable realizations were generated, with differences among the realizations providing an uncertainty measure. The yielded models are unstructured from a geological point of view: they do not portray the shapes and lateral extensions of the associated sedimentary units. Since variograms consider only two-point spatial correlations, they are unable to capture the spatial variability of complex geological structures. The Multiple Point Statistics approach overcomes these limitations of two-point statistics as it uses a training image instead of variograms. The 3D training image can be seen as a reference facies model where geological knowledge about depositional
Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects
Makowski, D.; Asseng, S.; Ewert, F.; Bassu, S.; Durand, J.L.; Martre, P.; Adam, M.; Aggarwal, P.K.; Angulo, C.; Baron, C.; Basso, B.; Bertuzzi, P.; Biernath, C.; Boogaard, H.; Boote, K.J.; Brisson, N.; Cammarano, D.; Challinor, A.J.; Conijn, J.G.; Corbeels, M.; Deryng, D.; Sanctis, De G.; Doltra, J.; Gayler, S.; Goldberg, R.; Grassini, P.; Hatfield, J.L.; Heng, L.; Hoek, S.B.; Hooker, J.; Hunt, L.A.; Ingwersen, J.; Izaurralde, C.; Jongschaap, R.E.E.; Jones, J.W.; Kemanian, R.A.; Kersebaum, K.C.; Kim, S.H.; Lizaso, J.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.J.; Olesen, J.E.; Osborne, T.M.; Palosuo, T.; Pravia, M.V.; Priesack, E.; Ripoche, D.; Rosenzweig, C.; Ruane, A.C.; Sau, F.; Semenov, M.A.; Shcherbak, I.; Steduto, P.; Stöckle, C.O.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Teixeira, E.; Thorburn, P.; Timlin, D.; Travasso, M.; Roetter, R.P.; Waha, K.; Wallach, D.; White, J.W.; Williams, J.R.; Wolf, J.
2015-01-01
Many simulation studies have been carried out to predict the effect of climate change on crop yield. Typically, in such studies, one or several crop models are used to simulate series of crop yield values for different climate scenarios corresponding to different hypotheses of temperature, CO2
Ibrahim, Ichsan; Malasan, Hakim L.; Kunjaya, Chatief; Timur Jaelani, Anton; Puannandra Putri, Gerhana; Djamal, Mitra
2018-04-01
In astronomy, the brightness of a source is typically expressed in terms of magnitude. Conventionally, the magnitude is defined by the logarithm of the received flux. This relationship is known as the Pogson formula. For received flux with a small signal-to-noise ratio (S/N), however, the formula gives a large magnitude error. We investigate whether the use of the inverse hyperbolic sine function (hereafter referred to as the Asinh magnitude) in modified formulae could allow for an alternative calculation of magnitudes for small S/N fluxes, and whether the new approach is better for representing the brightness of such sources. We study the possibility of increasing the detection level of gravitational microlensing using 40 selected microlensing light curves from the 2013 and 2014 seasons and the Asinh magnitude. Photometric data for the selected events were obtained from the Optical Gravitational Lensing Experiment (OGLE). We found that utilization of the Asinh magnitude makes the events brighter compared to the logarithmic magnitude, by an average of about 3.42 × 10⁻² magnitude, with an average difference in error between the logarithmic and Asinh magnitudes of about 2.21 × 10⁻² magnitude. The microlensing events OB140847 and OB140885 are found to have the largest difference values among the selected events. Using a Gaussian fit to find the peak for OB140847 and OB140885, we conclude statistically that the Asinh magnitude gives better mean squared values of the regression and narrower residual histograms than the Pogson magnitude. Based on these results, we also attempt to propose a limiting magnitude for which use of the Asinh magnitude with small S/N data is optimal.
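The two magnitude definitions and their first-order error propagation can be written down compactly. The sketch below follows the asinh form of Lupton, Gunn & Szalay (1999); the softening parameter b, zero point and flux values are illustrative, not OGLE calibration data. The point it demonstrates is the one made above: the Pogson error diverges as the flux goes to zero, while the asinh error stays bounded.

```python
import numpy as np

POGSON = 2.5 / np.log(10.0)

def pogson_mag(flux, zp=0.0):
    """Classical Pogson magnitude; ill behaved for flux <= 0."""
    return zp - 2.5 * np.log10(flux)

def asinh_mag(flux, b, zp=0.0):
    """Asinh magnitude (Lupton et al. 1999); well behaved at low S/N."""
    return zp - POGSON * (np.arcsinh(flux / (2.0 * b)) + np.log(b))

def mag_errors(flux, flux_err, b):
    """First-order propagated magnitude errors for both definitions."""
    err_pogson = POGSON * flux_err / flux
    err_asinh = POGSON * flux_err / np.hypot(2.0 * b, flux)  # sqrt(4b^2 + f^2)
    return err_pogson, err_asinh

flux = np.array([100.0, 10.0, 1.0, 0.1])     # arbitrary flux units
print(pogson_mag(flux), asinh_mag(flux, b=1.0))
e_pog, e_asinh = mag_errors(flux, flux_err=1.0, b=1.0)
print(e_pog)      # grows without bound as flux -> 0
print(e_asinh)    # saturates near POGSON / (2b)
```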
Prudencio, Ernesto E.; Schulz, Karl W.
2012-01-01
QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently
Rakesh, V.; Kantharao, B.
2017-03-01
Data assimilation is considered one of the most effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing the data assimilation methodology. One of the critical components that determines the amount of observation information propagated into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over a southern state in India, Karnataka. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gauge observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one that used global BES. Critical thermodynamic variables conducive to heavy rainfall, such as the convective available potential energy, are simulated more realistically using regional BES than global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.
Moving Up the CMMI Capability and Maturity Levels Using Simulation
National Research Council Canada - National Science Library
Raffo, David Mitchell; Wakeland, Wayne
2008-01-01
Process Simulation Modeling (PSIM) technology can be used to evaluate issues related to process strategy, process improvement, technology and tool adoption, project management and control, and process design...
Photon statistics in an N-level (N-1)-mode system
International Nuclear Information System (INIS)
Kozierowski, M.; Shumovskij, A.S.
1987-01-01
The characteristic and photon number distribution functions, the statistical moments of photon numbers and the correlations of modes are studied. The normally ordered variances of the photon numbers and the cross-correlation functions are calculated
The Use of Social Media for Communication In Official Statistics at European Level
Directory of Open Access Journals (Sweden)
Ionela-Roxana GLĂVAN
2016-12-01
Full Text Available Social media tools are widespread in web communication and are gaining popularity in the communication process between public institutions and citizens. This study analyzes how social media is used by official statistical institutes to interact with citizens and disseminate information. A linear regression technique is performed to examine which social media platform (Twitter or Facebook) is the more effective tool in the communication process in the official statistics area. Our study suggests that Twitter is a more powerful tool than Facebook in enhancing the relationship between official statistics and citizens, consistent with several other studies. Next, we performed an analysis of the characteristics of the Twitter network discussing “official statistics” using NodeXL, which revealed the unexploited potential of this network for official statistical agencies.
Calculation of the transport processes in an ambipolar trap by direct statistic simulation
International Nuclear Information System (INIS)
Lysyanskij, P.B.; Tiunov, M.A.; Fomel', B.M.
1982-01-01
Plasma in an open magnetic trap is simulated with a set of test particles. The transverse drift motion of particles in axially asymmetric magnetic fields is described with the method of finite transformations. Collision effects are simulated by random changes of the velocity vectors of the test particles, corresponding to their scattering on the ''background'' plasma. The model takes account of longitudinal and transverse losses as well as atomic beam injection. The simulation yielded the magnitudes and characteristics of the longitudinal and transverse loss flows, the ion temperature and the radial profile of plasma density in the central part of the ''AMBAL'' ambipolar trap.
Low-level tank waste simulant data base
International Nuclear Information System (INIS)
Lokken, R.O.
1996-04-01
The majority of defense wastes generated from reprocessing spent N-Reactor fuel at Hanford are stored in underground double-shell tanks (DST) and in older single-shell tanks (SST) in the form of liquids, slurries, sludges, and salt cakes. The Tank Waste Remediation System (TWRS) Program has the responsibility of safely managing and immobilizing these tank wastes for disposal. This report discusses three principal topics: the need for and basis for selecting target or reference LLW simulants; tank waste analyses and simulants that have been defined, developed, and used for the GDP; and activities in support of preparing and characterizing simulants for the current LLW vitrification project. The procedures and the data that were generated to characterize the LLW vitrification simulants are presented in this report. The final section of this report addresses the applicability of the data to the current program and presents recommendations for additional data needs, including characterization and simulant compositional variability studies.
Sugiyama, K.; Nakajima, K.; Odaka, M.; Kuramoto, K.; Hayashi, Y.-Y.
2014-02-01
A series of long-term numerical simulations of moist convection in Jupiter’s atmosphere is performed in order to investigate the idealized characteristics of the vertical structure of multi-composition clouds and the convective motions associated with them, varying the deep abundances of condensable gases and the autoconversion time scale, the latter being one of the most questionable parameters in cloud microphysical parameterization. The simulations are conducted using a two-dimensional cloud resolving model that explicitly represents the convective motion and microphysics of the three cloud components, H2O, NH3, and NH4SH imposing a body cooling that substitutes the net radiative cooling. The results are qualitatively similar to those reported in Sugiyama et al. (Sugiyama, K. et al. [2011]. Intermittent cumulonimbus activity breaking the three-layer cloud structure of Jupiter. Geophys. Res. Lett. 38, L13201. doi:10.1029/2011GL047878): stable layers associated with condensation and chemical reaction act as effective dynamical and compositional boundaries, intense cumulonimbus clouds develop with distinct temporal intermittency, and the active transport associated with these clouds results in the establishment of mean vertical profiles of condensates and condensable gases that are distinctly different from the hitherto accepted three-layered structure (e.g., Atreya, S.K., Romani, P.N. [1985]. Photochemistry and clouds of Jupiter, Saturn and Uranus. In: Recent Advances in Planetary Meteorology. Cambridge Univ. Press, London, pp. 17-68). Our results also demonstrate that the period of intermittent cloud activity is roughly proportional to the deep abundance of H2O gas. The autoconversion time scale does not strongly affect the results, except for the vertical profiles of the condensates. Changing the autoconversion time scale by a factor of 100 changes the intermittency period by a factor of less than two, although it causes a dramatic increase in the amount of
Humeniuk, Stephan; Büchler, Hans Peter
2017-12-08
We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.
Fraser, Cynthia
2016-01-01
The revised Fourth Edition of this popular textbook is redesigned with Excel 2016 to encourage business students to develop competitive advantages for use in their future careers as decision makers. Students learn to build models using logic and experience, produce statistics using Excel 2016 with shortcuts, and translate results into implications for decision makers. The textbook features new examples and assignments on global markets, including cases featuring Chipotle and Costco. Exceptional managers know that they can create competitive advantages by basing decisions on performance response under alternative scenarios, and managers need to understand how to use statistics to create such advantages. Statistics, from basic to sophisticated models, are illustrated with examples using real data such as students will encounter in their roles as managers. A number of examples focus on business in emerging global markets with particular emphasis on emerging markets in Latin America, China, and India. Results are...
Directory of Open Access Journals (Sweden)
M. Palmroth
2005-09-01
Full Text Available We investigate the Northern Hemisphere Joule heating from several observational and computational sources with the purpose of calibrating a previously identified functional dependence between solar wind parameters and ionospheric total energy consumption computed from a global magnetohydrodynamic (MHD simulation (Grand Unified Magnetosphere Ionosphere Coupling Simulation, GUMICS-4. In this paper, the calibration focuses on determining the amount and temporal characteristics of Northern Hemisphere Joule heating. Joule heating during a substorm is estimated from global observations, including electric fields provided by Super Dual Auroral Network (SuperDARN and Pedersen conductances given by the ultraviolet (UV and X-ray imagers on board the Polar satellite. Furthermore, Joule heating is assessed from several activity index proxies, large statistical surveys, assimilative data methods (AMIE, and the global MHD simulation GUMICS-4. We show that the temporal and spatial variation of the Joule heating computed from the GUMICS-4 simulation is consistent with observational and statistical methods. However, the different observational methods do not give a consistent estimate for the magnitude of the global Joule heating. We suggest that multiplying the GUMICS-4 total Joule heating by a factor of 10 approximates the observed Joule heating reasonably well. The lesser amount of Joule heating in GUMICS-4 is essentially caused by weaker Region 2 currents and polar cap potentials. We also show by theoretical arguments that multiplying independent measurements of averaged electric fields and Pedersen conductances yields an overestimation of Joule heating.
Keywords. Ionosphere (Auroral ionosphere; Modeling and forecasting; Electric fields and currents
Nowok, B.
2010-01-01
In today's globalized world, there is increasing demand for reliable and comparable statistics on international migration. This book contributes to a more profound understanding of the effect of definitional variations on the figures that are reported. The framework developed here for the
Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions
Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.
2013-01-01
Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of
Computer simulation of HTGR fuel microspheres using a Monte-Carlo statistical approach
International Nuclear Information System (INIS)
Hedrick, C.E.
1976-01-01
The concept and computational aspects of a Monte-Carlo statistical approach in relating the structure of HTGR fuel microspheres to the uranium content of fuel samples have been verified. Results of the preliminary validation tests and the benefits to be derived from the program are summarized.
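The record gives no algorithmic detail, but the skeleton of such a Monte-Carlo approach is easy to sketch: draw microsphere (kernel) dimensions from an assumed structural distribution and propagate them to the uranium content of a sample. All numbers below are hypothetical placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: kernel diameters (um) ~ normal; uranium loading (g/cm^3)
MEAN_D, SD_D = 350.0, 25.0        # illustrative kernel diameter statistics
RHO_U = 10.5                      # illustrative uranium density in the kernel
N_SPHERES, N_SAMPLES = 500, 10000

def sample_uranium_mass(n_spheres):
    d = rng.normal(MEAN_D, SD_D, n_spheres) * 1e-4   # um -> cm
    v = np.pi / 6.0 * d**3                           # kernel volume, cm^3
    return (RHO_U * v).sum()                         # grams U in one sample

masses = np.array([sample_uranium_mass(N_SPHERES) for _ in range(N_SAMPLES)])
print(f"mean = {masses.mean():.4g} g, rel. std = {masses.std()/masses.mean():.3%}")
```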
Boyle, Elizabeth; MacArthur, Ewan; Connolly, Thomas; Hainey, Thomas; Kärki, Anne; Van Rosmalen, Peter
2014-01-01
Basic competence in research methods and statistics is core for many undergraduates but many students experience difficulties in acquiring knowledge and skills in this area. Interest has recently turned to serious games as providing engaging ways of learning. The CHERMUG project was developed against
Kleijnen, J.P.C.
2006-01-01
Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these
International Nuclear Information System (INIS)
Calvin W. Johnson
2005-01-01
The general goal of the project is to develop and implement computer codes and input files to compute nuclear densities of states. Such densities are important input into calculations of statistical neutron capture, and are difficult to access experimentally. In particular, we will focus on calculating densities for nuclides in the mass range A ∼ 50-100. We use statistical spectroscopy, a moments method based upon a microscopic framework, the interacting shell model. Second year goals and milestones: develop two or three competing interactions (based upon surface-delta, Gogny, and NN-scattering) suitable for application to nuclei up to A = 100; begin calculations for nuclides with A = 50-70.
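The moments method underlying statistical spectroscopy can be illustrated compactly: the trace of H fixes the centroid and the trace of H² the width of a Gaussian approximation to the density of states, with no diagonalization of the full spectrum required. A toy sketch follows, in which a random GOE matrix stands in for a shell-model Hamiltonian; for a GOE the exact density is semicircular, so only rough agreement is expected.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 'Hamiltonian' on a D-dimensional model space (GOE stand-in, not a real interaction)
D = 400
A = rng.normal(size=(D, D))
H = (A + A.T) / np.sqrt(2 * D)

# First two spectral moments define the Gaussian level density (moments method)
centroid = np.trace(H) / D
width = np.sqrt(np.trace(H @ H) / D - centroid**2)

def rho(E):
    return D * np.exp(-(E - centroid)**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))

# Compare with the exact eigenvalue histogram
E = np.linalg.eigvalsh(H)
hist, edges = np.histogram(E, bins=20)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers, hist):
    print(f"E = {c:+.2f}: exact {h:4d}, Gaussian {rho(c) * (edges[1] - edges[0]):7.1f}")
```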
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
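Massey's extension is in the paper itself, but the classical construction it builds on — the exact (Clopper-Pearson) binomial confidence interval, which remains meaningful for as few as two observed errors — is a short scipy sketch:

```python
from scipy import stats

def clopper_pearson(errors, trials, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for an error probability,
    via quantiles of the beta distribution."""
    alpha = 1.0 - conf
    lo = stats.beta.ppf(alpha / 2, errors, trials - errors + 1) if errors > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, errors + 1, trials - errors)
    return lo, hi

# Two decoding errors observed in 10^7 decoding trials
print(clopper_pearson(2, 10**7))
```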
International Nuclear Information System (INIS)
Rosa, B.; Parishani, H.; Ayala, O.; Wang, L.-P.
2015-01-01
In this paper, we systematically study the effects of the forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large, yielding a Taylor microscale flow Reynolds number of 30 or less. We then study the effects of the forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on the turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.
Directory of Open Access Journals (Sweden)
Masoud Ghodrati
2016-12-01
Full Text Available Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive the images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERPs) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of ERPs best, compared to the other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and the ERPs’ power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by low-level contrast statistics of natural images, and they highlight their potential role in scene perception.
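Weibull contrast statistics are typically obtained by fitting a Weibull distribution to an image's local-contrast (gradient-magnitude) histogram; the fitted shape and scale parameters are then the per-image statistics related to the ERP responses. A minimal sketch, with a random array standing in for a natural image:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
img = rng.random((128, 128))       # stand-in for a grayscale natural image

# Local contrast: gradient magnitude of the image
gy, gx = np.gradient(img)
contrast = np.hypot(gx, gy).ravel()
contrast = contrast[contrast > 0]

# Two-parameter Weibull fit (location fixed at 0); shape and scale are the
# per-image statistics used in such analyses
shape, loc, scale = stats.weibull_min.fit(contrast, floc=0.0)
print(f"Weibull shape = {shape:.3f}, scale = {scale:.4f}")
```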
DEFF Research Database (Denmark)
Inwood, Jennifer; Lofi, Johanna; Davies, Sarah
2013-01-01
In this study, a novel application of a statistical approach is utilized for analysis of downhole logging data from Miocene-aged siliciclastic shelf sediments on the New Jersey Margin (eastern USA). A multivariate iterative nonhierarchical cluster analysis (INCA) of spectral gamma-ray logs from I...
Yang, Su
2005-02-01
A new descriptor for symbol recognition is proposed. 1) A histogram is constructed for every pixel to figure out the distribution of the constraints among the other pixels. 2) All the histograms are statistically integrated to form a feature vector with fixed dimension. The robustness and invariance were experimentally confirmed.
DEFF Research Database (Denmark)
Stoica, Iuliana-Madalina; Babamoradi, Hamid; van den Berg, Frans
2017-01-01
•A statistical strategy combining fluorescence spectroscopy, multivariate analysis and Wilks’ ratio is proposed. •The method was tested both off-line and on-line having riboflavin as a (controlled) contaminant. •Wilks’ ratio signals unusual recordings based on shifts in variance and covariance structure described in in-control data.
Replicate This! Creating Individual-Level Data from Summary Statistics Using R
Morse, Brendan J.
2013-01-01
Incorporating realistic data and research examples into quantitative (e.g., statistics and research methods) courses has been widely recommended for enhancing student engagement and comprehension. One way to achieve these ends is to use a data generator to emulate the data in published research articles. "MorseGen" is a free data generator that…
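MorseGen's own algorithm is not described here, but the core trick for emulating published data is simple: standardize a random draw and rescale it so the sample mean and standard deviation hit the published values exactly. A sketch with hypothetical summary statistics:

```python
import numpy as np

rng = np.random.default_rng(4)

def emulate(n, mean, sd):
    """Generate n values whose sample mean and SD equal the published
    summary statistics exactly: standardize a random draw, then rescale."""
    z = rng.normal(size=n)
    z = (z - z.mean()) / z.std(ddof=1)
    return mean + sd * z

scores = emulate(25, 100.0, 15.0)   # e.g., a published M = 100, SD = 15, n = 25
print(round(scores.mean(), 6), round(scores.std(ddof=1), 6))  # 100.0 15.0
```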
Asenov, Asen; Brown, A. R.; Slavcheva, G.; Davies, J. H.
2000-01-01
When MOSFETs are scaled to deep submicron dimensions, the discreteness and randomness of the dopant charges in the channel region introduce significant fluctuations in the device characteristics. This effect, predicted 20 years ago, has been confirmed experimentally and in simulation studies. The impact of the fluctuations on the functionality, yield, and reliability of the corresponding systems shifts the paradigm of numerical device simulation. It becomes insufficient to simulate only one device representing one macroscopic design in a continuous charge approximation. An ensemble of macroscopically identical but microscopically different devices has to be characterized by simulation of statistically significant samples. The aims of the numerical simulations shift from predicting the characteristics of a single device with continuous doping towards estimating the mean values and the standard deviations of basic design parameters such as threshold voltage, subthreshold slope, transconductance, drive current, etc. for the whole ensemble of 'atomistically' different devices in the system. It has to be pointed out that even the mean values obtained from 'atomistic' simulations are not identical to the values obtained from continuous doping simulations. In this paper we present a hierarchical approach to the 'atomistic' simulation of aggressively scaled decanano MOSFETs. A full-scale 3D drift-diffusion 'atomistic' simulation approach is first described and used for verification of the more economical, but also more restricted, options. To reduce the processor time and memory requirements at high drain voltage, we have developed a self-consistent option based on a thin-slab solution of the current continuity equation only in the channel region. This is coupled to the Poisson equation solution in the whole simulation domain in the Gummel iteration cycles. The accuracy of this approach is investigated in comparison with the full self-consistent solution. At low drain
Directory of Open Access Journals (Sweden)
Yuanyuan Ma
2018-01-01
Full Text Available This paper focuses on the modeling, simulation, and experimental verification of wideband single-input single-output (SISO) mobile fading channels for indoor propagation environments. The indoor reference channel model is derived from a geometrical rectangle scattering model, which consists of an infinite number of scatterers. It is assumed that the scatterers are exponentially distributed over the two-dimensional (2D) horizontal plane of a rectangular room. Analytical expressions are derived for the probability density function (PDF) of the angle of arrival (AOA), the PDF of the propagation path length, the power delay profile (PDP), and the frequency correlation function (FCF). An efficient sum-of-cisoids (SOC) channel simulator is derived from the nonrealizable reference model by employing the SOC principle. It is shown that the SOC channel simulator approximates the reference model closely with respect to the FCF. The SOC channel simulator enables the performance evaluation of wideband indoor wireless communication systems with reduced realization expenditure. Moreover, the rationality and usefulness of the derived indoor channel model are confirmed by various measurements at 2.4, 5, and 60 GHz.
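The SOC principle itself is compact: the fading process is a finite sum of complex exponentials (cisoids) whose gains, Doppler frequencies, and phases are chosen to match the reference model. A generic sketch with illustrative parameters follows; the paper's gains and frequencies are fitted to the rectangle scattering model's FCF, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def soc_channel(t, n_cisoids=20, f_max=91.0):
    """Sum-of-cisoids (SOC) sketch: a finite sum of complex exponentials with
    fixed gains, Doppler frequencies and random phases (frequencies drawn
    here as for isotropic 2D scattering)."""
    c = np.sqrt(1.0 / n_cisoids)                              # unit mean power
    f = f_max * np.cos(rng.uniform(0, 2 * np.pi, n_cisoids))  # Doppler shifts
    theta = rng.uniform(0, 2 * np.pi, n_cisoids)              # random phases
    return (c * np.exp(1j * (2 * np.pi * np.outer(t, f) + theta))).sum(axis=1)

t = np.arange(0.0, 0.1, 1e-4)
h = soc_channel(t)
print((abs(h) ** 2).mean())   # Rayleigh-like envelope, time-average power ~ 1
```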
Andrew G. Bunn; Esther Jansma; Mikko Korpela; Robert D. Westfall; James Baldwin
2013-01-01
Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and autocorrelation of a time series. We then use 500 random tree-ring...
Bunn, A.G.; Jansma, E.; Korpela, M.; Westfall, R.D.; Baldwin, J.
2013-01-01
Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and
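The quantity at issue in both records above is one line of arithmetic: mean sensitivity is the average relative difference between consecutive values. Simulating AR(1) series with known parameters, as the authors do, shows its dependence on autocorrelation; the parameters below are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_sensitivity(x):
    """Dendrochronology mean sensitivity: mean of 2|x_t - x_{t-1}| / (x_t + x_{t-1})."""
    x = np.asarray(x, dtype=float)
    return np.mean(2.0 * np.abs(np.diff(x)) / (x[1:] + x[:-1]))

def ar1(n, phi, mu=1.0, sigma=0.2):
    """First-order autoregressive series with known parameters, shifted to mean mu."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return mu + x

for phi in (0.0, 0.4, 0.8):
    z = np.mean([mean_sensitivity(ar1(500, phi)) for _ in range(100)])
    print(f"phi = {phi}: mean sensitivity ~ {z:.3f}")   # decreases as phi grows
```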
Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro
2014-05-01
This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh = 1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh < 1.36. The aim of this research is therefore to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equations of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e., the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a comparison between the numerical developments and real observations in field conditions.
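The statistical moments compared in such an analysis are direct to compute from a surface-elevation record. The sketch below uses a synthetic, weakly non-Gaussian record (a quadratic, Stokes-like correction to a Gaussian series) in place of HOSM output or Lake George data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic surface elevation: a quadratic correction to a Gaussian series
# makes it weakly non-Gaussian (positive skewness, kurtosis above 3)
eta_lin = rng.normal(0.0, 1.0, 100000)
eta = eta_lin + 0.1 * (eta_lin**2 - 1.0)

print(f"skewness = {stats.skew(eta):.3f}")                    # > 0
print(f"kurtosis = {stats.kurtosis(eta, fisher=False):.3f}")  # Gaussian -> 3
```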
Study of film data processing systems by means of a statistical simulation
International Nuclear Information System (INIS)
Deart, A.F.; Gromov, A.I.; Kapustinskaya, V.I.; Okorochenko, G.E.; Sychev, A.Yu.; Tatsij, L.I.
1974-01-01
A statistical model of a film information processing system is considered. Time diagrams illustrate the model's operation algorithm. The program realizing this model of the system is described in detail. The elaborated program model has been tested on a film information processing system consisting of a group of measuring devices operating in line with a BESM computer. The quantitative characteristics obtained for the functioning of the system under test permit an estimate of the system's operating efficiency.
International Nuclear Information System (INIS)
Petersen, S.W.
1995-11-01
The State of Washington has recommended a specific statistical test procedure for identifying soil contamination at a potential waste site, referred to here as the State test. An alternative to this test has been presented (DOE/RL 1994), which uses the Wilcoxon Rank Sum, Quantile, and Hot Measurement Comparison tests (WQH). These same tests are recommended by the U.S. Environmental Protection Agency (EPA) to determine if soils from a waste site differ from site-specific, reference-based standards
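Of the WQH battery, the Wilcoxon Rank Sum (Mann-Whitney) test is the workhorse; a minimal sketch with hypothetical site and background concentrations follows. The actual DOE/RL and EPA procedures combine it with the Quantile and Hot Measurement Comparison tests, which are not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
background = rng.lognormal(mean=0.0, sigma=0.5, size=30)   # reference-area soils
site = rng.lognormal(mean=0.4, sigma=0.5, size=30)         # potential waste site

# Wilcoxon Rank Sum test: do site concentrations exceed background?
stat, p = stats.mannwhitneyu(site, background, alternative='greater')
print(f"U = {stat:.1f}, p = {p:.4f}")
```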
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
Energy Technology Data Exchange (ETDEWEB)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
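Most of the listed procedures are one-liners in scipy. The sketch below applies procedures (1), (2), (3), and (5) to a synthetic scatterplot with a nonmonotonic pattern, which the linear and rank correlations miss but the binned Kruskal-Wallis and grid-based chi-square tests detect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 300)    # nonmonotonic pattern

print("linear:   ", stats.pearsonr(x, y))              # (1) correlation coefficient
print("monotonic:", stats.spearmanr(x, y))             # (2) rank correlation
bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
print("medians:  ", stats.kruskal(*[y[bins == b] for b in range(5)]))  # (3)
chi2, p, _, _ = stats.chi2_contingency(np.histogram2d(x, y, bins=5)[0])
print(f"grid:      chi2 = {chi2:.1f}, p = {p:.2g}")    # (5) deviation from randomness
```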
International Nuclear Information System (INIS)
Guha, S.; Taylor, J.H.
1996-01-01
It is critical that summary statistics on background data, or background levels, be computed based on standardized and defensible statistical methods, because background levels are frequently used in subsequent analyses and comparisons performed by separate analysts over time. The final background for naturally occurring radionuclide concentrations in soil at an RCRA facility, and the associated statistical methods used to estimate these concentrations, are presented. The primary objective is to describe, via a case study, the statistical methods used to estimate 95% upper tolerance limits (UTLs) on radionuclide background soil data sets. A 95% UTL on background samples can be used as a screening-level concentration in the absence of definitive soil cleanup criteria for naturally occurring radionuclides. The statistical methods are based exclusively on EPA guidance. This paper includes an introduction, a discussion of the analytical results for the radionuclides, and a detailed description of the statistical analyses leading to the determination of 95% UTLs. Soil concentrations reported are based on validated data. Data sets are categorized as surficial soil, samples collected at depths from zero to one-half foot, and deep soil, samples collected from 3 to 5 feet. These data sets were tested for statistical outliers, and underlying distributions were determined by using the chi-squared goodness-of-fit test. UTLs for the data sets were then computed based on the percentage of nondetects and the appropriate best-fit distribution (lognormal, normal, or nonparametric). For data sets containing greater than approximately 50% nondetects, nonparametric UTLs were computed.
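For a normal or lognormal fit, the 95% UTL follows from a noncentral-t tolerance factor. A sketch in that spirit with hypothetical background data; the outlier screening, goodness-of-fit, and nondetect handling described above are omitted.

```python
import numpy as np
from scipy import stats

def utl_95_95(data, lognormal=True):
    """One-sided 95%-confidence upper tolerance limit on the 95th percentile,
    assuming a (log)normal fit; k is the noncentral-t tolerance factor."""
    x = np.log(data) if lognormal else np.asarray(data, float)
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    k = stats.nct.ppf(0.95, df=n - 1, nc=stats.norm.ppf(0.95) * np.sqrt(n)) / np.sqrt(n)
    utl = mean + k * sd
    return np.exp(utl) if lognormal else utl

rng = np.random.default_rng(10)
bkg = rng.lognormal(mean=np.log(0.8), sigma=0.4, size=40)  # hypothetical pCi/g
print(f"95/95 UTL = {utl_95_95(bkg):.2f} pCi/g")
```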
International Nuclear Information System (INIS)
Zhang Youcai; Yang Xiaohu; Springel, Volker
2010-01-01
We study the topology of cosmic large-scale structure through the genus statistics, using galaxy catalogs generated from the Millennium Simulation and observational data from the latest Sloan Digital Sky Survey Data Release (SDSS DR7). We introduce a new method for constructing galaxy density fields and for measuring the genus statistics of its isodensity surfaces. It is based on a Delaunay tessellation field estimation (DTFE) technique that allows the definition of a piece-wise continuous density field and the exact computation of the topology of its polygonal isodensity contours, without introducing any free numerical parameter. Besides this new approach, we also employ the traditional approaches of smoothing the galaxy distribution with a Gaussian of fixed width, or by adaptively smoothing with a kernel that encloses a constant number of neighboring galaxies. Our results show that the Delaunay-based method extracts the largest amount of topological information. Unlike the traditional approach for genus statistics, it is able to discriminate between the different theoretical galaxy catalogs analyzed here, both in real space and in redshift space, even though they are based on the same underlying simulation model. In particular, the DTFE approach detects with high confidence a discrepancy of one of the semi-analytic models studied here compared with the SDSS data, while the other models are found to be consistent.
Energy Technology Data Exchange (ETDEWEB)
Vervisch, Luc; Domingo, Pascale; Lodato, Guido [CORIA - CNRS and INSA de Rouen, Technopole du Madrillet, BP 8, 76801 Saint-Etienne-du-Rouvray (France); Veynante, Denis [EM2C - CNRS and Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry (France)
2010-04-15
Large-Eddy Simulation (LES) provides space-filtered quantities to compare with measurements, which usually have been obtained using a different filtering operation; hence, numerical and experimental results can be examined side-by-side in a statistical sense only. Instantaneous, space-filtered and statistically time-averaged signals feature different characteristic length-scales, which can be combined in dimensionless ratios. From two canonical manufactured turbulent solutions, a turbulent flame and a passive scalar turbulent mixing layer, the critical values of these ratios under which measured and computed variances (resolved plus sub-grid scale) can be compared without resorting to additional residual terms are first determined. It is shown that actual Direct Numerical Simulation can hardly accommodate a sufficiently large range of length-scales to perform statistical studies of LES filtered reactive scalar-fields energy budget based on sub-grid scale variances; an estimation of the minimum Reynolds number allowing for such DNS studies is given. From these developments, a reliability mesh criterion emerges for scalar LES and scaling for scalar sub-grid scale energy is discussed. (author)
Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.
Kinjo, Akira R
2017-01-01
A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.
Kim, Daniel; Griffin, Beth Ann; Kabeto, Mohammed; Escarce, José; Langa, Kenneth M; Shih, Regina A
2016-01-01
Much variation in individual-level cognitive function in late life remains unexplained, with little exploration of area-level/contextual factors to date. Income inequality is a contextual factor that may plausibly influence cognitive function. In a nationally-representative cohort of older Americans from the Health and Retirement Study, we examined state- and metropolitan statistical area (MSA)-level income inequality as predictors of individual-level cognitive function measured by the 27-point Telephone Interview for Cognitive Status (TICS-m) scale. We modeled latency periods of 8-20 years, and controlled for state-/metropolitan statistical area (MSA)-level and individual-level factors. Higher MSA-level income inequality predicted lower cognitive function 16-18 years later. Using a 16-year lag, living in a MSA in the highest income inequality quartile predicted a 0.9-point lower TICS-m score (β = -0.86; 95% CI = -1.41, -0.31), roughly equivalent to the magnitude associated with five years of aging. We observed no associations for state-level income inequality. The findings were robust to sensitivity analyses using propensity score methods. Among older Americans, MSA-level income inequality appears to influence cognitive function nearly two decades later. Policies reducing income inequality levels within cities may help address the growing burden of declining cognitive function among older populations within the United States.
Directory of Open Access Journals (Sweden)
Alicja P. Sobańtka
2014-01-01
Full Text Available Extended statistical entropy analysis (eSEA) is used to assess the nitrogen (N) removal performance of the wastewater treatment (WWT) simulation software, the Benchmarking Simulation Model No. 2 (BSM No. 2). Six simulations with three different types of wastewater are carried out, which vary in the dissolved oxygen concentration (O2,diss) during the aerobic treatment. N2O emissions generated during denitrification are included in the model. The N-removal performance is expressed as the reduction in statistical entropy, ΔH, compared to the hypothetical reference situation of direct discharge of the wastewater into the river. The parameters chemical and biological oxygen demand (COD, BOD) and suspended solids (SS) are analogously expressed in terms of the reduction of COD, BOD, and SS compared to a direct discharge of the wastewater to the river (ΔEQrest). The cleaning performance is expressed as ΔEQnew, the weighted average of ΔH and ΔEQrest. The results show that ΔEQnew is a more comprehensive indicator of the cleaning performance because, in contrast to the traditional effluent quality index (EQ), it considers the characteristics of the wastewater and includes all N-compounds and their distribution in the effluent, the off-gas, and the sludge. Furthermore, it is demonstrated that realistically expectable N2O emissions have only a moderate impact on ΔEQnew.
Beyond "The Total Organization": A Graduate-Level Simulation
Kane, Kathleen R.; Goldgehn, Leslie A.
2011-01-01
This simulation is designed to help students understand the complexity of organizational life and learn how to navigate a work world of chaos, conflict, and uncertainty. This adaptation and update of an exercise by Cohen, Fink, Gadon, and Willits has been a successful addition to MBA and EMBA courses. The participants must self-organize, choose…
Wagler, Amy E.; Lesser, Lawrence M.; González, Ariel I.; Leal, Luis
2015-01-01
A corpus of current editions of statistics textbooks was assessed to compare aspects and levels of readability for the topics of "measures of center," "line of fit," "regression analysis," and "regression inference." Analysis with lexical software of these text selections revealed that the large corpus can…
Prudencio, Ernesto E.
2012-01-01
QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently abstract in order to handle a large spectrum of models, (b) be algorithmically extensible, allowing an easy insertion of new and improved algorithms, and (c) take advantage of parallel computing, in order to handle realistic models. Such objectives demand a combination of an object-oriented design with robust software engineering practices. QUESO is written in C++, uses MPI, and leverages libraries already available to the scientific community. We describe some UQ concepts, present QUESO, and list planned enhancements.
Simulation analysis of air flow and turbulence statistics in a rib grit roughened duct.
Vogiatzis, I I; Denizopoulou, A C; Ntinas, G K; Fragos, V P
2014-01-01
The implementation of variable artificial roughness patterns on a surface is an effective technique to enhance the rate of heat transfer to fluid flow in the ducts of solar air heaters. The different geometries of roughness elements investigated have demonstrated the pivotal role that vortices and the associated turbulence have on the heat transfer characteristics of solar air heater ducts by increasing the convective heat transfer coefficient. In this paper we investigate the two-dimensional, turbulent, unsteady flow around rectangular ribs of variable aspect ratios by directly solving the transient Navier-Stokes and continuity equations using the finite element method. Flow characteristics and several aspects of the turbulent flow are presented and discussed, including velocity components and turbulence statistics. The results reveal the impact that different rib lengths have on the computed mean quantities and turbulence statistics of the flow. The computed turbulence parameters show a clear tendency to diminish downstream with increasing rib length. Furthermore, the applied numerical method is capable of capturing small-scale flow structures resulting from the direct solution of the Navier-Stokes and continuity equations.
Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling
Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.
2010-01-01
NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next-generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process, developed to reuse models and model elements in other less-detailed models, was used. The DES team continues to innovate and expand
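Stripped to essentials, a DES of this kind is a time-ordered event list with random task durations: state changes only when an event fires. A toy sketch follows; the processing steps and triangular durations are invented placeholders, not KSC model content.

```python
import heapq, random

def des_run(n_vehicles=3, seed=42):
    """Minimal discrete event simulation: a heap-ordered event list with
    random task durations; the state changes only at discrete event times."""
    random.seed(seed)
    events = [(0.0, i, 'integrate') for i in range(n_vehicles)]  # (time, id, step)
    heapq.heapify(events)
    next_step = {'integrate': 'roll_out', 'roll_out': 'launch', 'launch': None}
    while events:
        t, vid, step = heapq.heappop(events)
        print(f"t = {t:7.1f} h: vehicle {vid} -> {step}")
        if next_step[step]:
            # Triangular duration as a stand-in for Delphi-estimated task times
            dur = random.triangular(100, 300, 180)
            heapq.heappush(events, (t + dur, vid, next_step[step]))

des_run()
```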
Statistical simulations of the dust foreground to cosmic microwave background polarization
Vansyngel, F.; Boulanger, F.; Ghosh, T.; Wandelt, B.; Aumont, J.; Bracco, A.; Levrier, F.; Martin, P. G.; Montier, L.
2017-07-01
The characterization of the dust polarization foreground to the cosmic microwave background (CMB) is a necessary step toward the detection of the B-mode signal associated with primordial gravitational waves. We present a method to simulate maps of polarized dust emission on the sphere that is similar to the approach used for CMB anisotropies. This method builds on the understanding of Galactic polarization stemming from the analysis of Planck data. It relates the dust polarization sky to the structure of the Galactic magnetic field and its coupling with interstellar matter and turbulence. The Galactic magnetic field is modeled as a superposition of a mean uniform field and a Gaussian random (turbulent) component with a power-law power spectrum of exponent αM. The integration along the line of sight carried out to compute Stokes maps is approximated by a sum over a small number of emitting layers with different realizations of the random component of the magnetic field. The model parameters are constrained to fit the power spectra of dust polarization EE, BB, and TE measured using Planck data. We find that the slopes of the E and B power spectra of dust polarization are matched for αM = -2.5, an exponent close to that measured for total dust intensity but larger than the Kolmogorov exponent -11/3. The model allows us to compute multiple realizations of the Stokes Q and U maps for different realizations of the random component of the magnetic field, and to quantify the variance of dust polarization spectra for any given sky area outside of the Galactic plane. The simulations reproduce the scaling relation between the dust polarization power and the mean total dust intensity including the observed dispersion around the mean relation. We also propose a method to carry out multifrequency simulations, including the decorrelation measured recently by Planck, using a given covariance matrix of the polarization maps. These simulations are well suited to optimize
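The Gaussian random (turbulent) field with a power-law spectrum is the component of such a model that is easy to sketch: shape white noise in Fourier space by k^(αM/2) and transform back. A flat 2D toy version follows; the paper works on the sphere and sums over emitting layers, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

def grf_power_law(n=256, alpha_m=-2.5):
    """Random field on an n x n grid with power spectrum P(k) ~ k**alpha_m:
    random phases scaled by k**(alpha_m/2), then an inverse FFT. Taking the
    real part gives an approximately Gaussian field with the target spectrum."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                  # avoid division by zero at k=0
    amp = k ** (alpha_m / 2.0)
    amp[0, 0] = 0.0                                # enforce zero mean
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amp * phase).real
    return field / field.std()

b_turb = grf_power_law()                           # one turbulent-component realization
print(b_turb.shape, round(b_turb.std(), 3))
```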
Numerical Simulation of Flood Levels for Tropical Rivers
International Nuclear Information System (INIS)
Mohammed, Thamer Ahmed; Said, Salim; Bardaie, Mohd Zohadie; Basri, Shah Nor
2011-01-01
Flood forecasting is important for flood damage reduction. As a result of advances in numerical methods and computer technologies, many mathematical models have been developed and used for hydraulic simulation of floods. These simulations usually include the prediction of the flood width and depth along a watercourse. Results obtained from the application of hydraulic models help engineers take precautionary measures to minimize flood damage. Hydraulic models used to simulate floods can be classified into dynamic hydraulic models and static hydraulic models. The HEC-2 static hydraulic model was used to predict water surface profiles for the Linggi and Langat rivers in Malaysia. The model is based on the numerical solution of the one-dimensional energy equation for steady, gradually varied flow using an iteration technique. Calibration and verification of the HEC-2 model were conducted using recorded data for both rivers. After calibration, the model was applied to predict the water surface profiles for Q10, Q30, and Q100 along the watercourse of the Linggi river. The water surface profile for Q200 for the Langat river was predicted. The predicted water surface profiles were found to be in agreement with the recorded water surface profiles. The maximum computed absolute error in the predicted water surface profile was found to be 500 mm, while the minimum absolute error was only 20 mm.
Statistics of A-weighted road traffic noise levels in shielded urban areas
Forssén, J.; Hornikx, M.C.J.
2006-01-01
In the context of community noise and its negative effects, the noise descriptors used are usually long-term equivalent levels and, sometimes, maximum levels. An improved description could be achieved by including the time variations of the noise. Here, the time variations of A-weighted road traffic
Directory of Open Access Journals (Sweden)
Rossi A. Hassad
2011-07-01
Full Text Available This study examined the teaching practices of 227 college instructors of introductory statistics from the health and behavioral sciences. Using primarily multidimensional scaling (MDS) techniques, a two-dimensional, 10-item teaching-practice scale, the TISS (Teaching of Introductory Statistics Scale), was developed. The two dimensions (subscales) are characterized as constructivist and behaviorist; they are orthogonal. Criterion validity of the TISS was established in relation to instructors’ attitudes toward teaching, and acceptable levels of reliability were obtained. A significantly higher level of behaviorist practice (less reform-oriented) was reported by instructors from the U.S., as well as by instructors with academic degrees in mathematics and engineering, whereas those with membership in professional organizations tended to be more reform-oriented (or constructivist). The TISS, thought to be the first of its kind, will allow the statistics education community to empirically assess and describe the pedagogical approach (teaching practice) of instructors of introductory statistics in the health and behavioral sciences at the college level, and to determine what learning outcomes result from the different teaching-practice orientations. Further research is required in order to be conclusive about the structural and psychometric properties of this scale, including its stability over time.
dos Santos, G. J.; Linares, D. H.; Ramirez-Pastor, A. J.
2018-04-01
The phase behaviour of aligned rigid rods of length k (k-mers) adsorbed on two-dimensional square lattices has been studied by Monte Carlo (MC) simulations and the histogram reweighting technique. The k-mers, containing k identical units (each one occupying a lattice site), were deposited along one of the directions of the lattice. In addition, attractive lateral interactions were considered. The methodology was applied, particularly, to the study of the critical point of the condensation transition occurring in the system. The process was monitored by following the fourth-order Binder cumulant as a function of temperature for different lattice sizes. The results, obtained for k ranging from 2 to 7, show that: (i) the transition coverage exhibits a decreasing behaviour when plotted as a function of the k-mer size, and (ii) the transition temperature, Tc, exhibits a power-law dependence on k, Tc ∼ k^0.4, shifting to higher values as k increases. Comparisons with an analytical model based on a generalization of the Bragg-Williams approximation (BWA) were performed in order to support the simulation technique. A significant qualitative agreement was obtained between BWA and MC results.
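The monitor quantity is simple to evaluate from MC samples of the order parameter. The sketch below checks the fourth-order Binder cumulant against its limiting values: 0 for a Gaussian-distributed (disordered) order parameter and 2/3 for a sharply ordered one.

```python
import numpy as np

def binder_cumulant(m_samples):
    """Fourth-order Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2) from
    Monte Carlo samples of the order parameter m."""
    m = np.asarray(m_samples, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

rng = np.random.default_rng(12)
print(binder_cumulant(rng.normal(0, 1, 100000)))         # disordered: -> 0
print(binder_cumulant(rng.choice([-1.0, 1.0], 100000)))  # ordered:    -> 2/3
```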
Liu, Zhichao; Zhao, Yunjie; Zeng, Chen; Computational Biophysics Lab Team
As the main protein of the bacterial flagellum, flagellin plays an important role in perception and defense response. The newly discovered locus, FLS2, is ubiquitously expressed. FLS2 encodes a putative receptor kinase and shares many homologies with some plant resistance genes and even with some components of the immune systems of mammals and insects. In Arabidopsis, perception is achieved by FLS2 recognition of the epitope flg22, which induces FLS2 heteromerization with BAK1 and finally plant immunity. Here we use both analytical methods, such as Direct Coupling Analysis (DCA), and Molecular Dynamics (MD) simulations to get a better understanding of the defense mechanism of FLS2. This may facilitate a redesign of flg22, or de-novo design, for desired specificity and potency to extend the immune properties of FLS2 to other important crops and vegetables.
International Nuclear Information System (INIS)
Eberhardt, L.L.; Thomas, J.M.
1986-07-01
This project was designed to develop guidance for implementing 10 CFR Part 61 and to determine the overall needs for sampling and statistical work in characterizing, surveying, monitoring, and closing commercial low-level waste sites. When cost-effectiveness and statistical reliability are of prime importance, then double sampling, compositing, and stratification (with optimal allocation) are identified as key issues. If the principal concern is avoiding questionable statistical practice, then the applicability of kriging (for assessing spatial pattern), methods for routine monitoring, and use of standard textbook formulae in reporting monitoring results should be reevaluated. Other important issues identified include sampling for estimating model parameters and the use of data from left-censored (less than detectable limits) distributions
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
On the relation between the statistical γ-decay and the level density in 162Dy
International Nuclear Information System (INIS)
Henden, L.; Bergholt, L.; Guttormsen, M.; Rekstad, J.; Tveter, T.S.
1994-12-01
The level density of low-spin states (0-10ℏ) in 162Dy has been determined from the ground state up to approximately 6 MeV of excitation energy. Levels in the excitation region up to 8 MeV were populated by means of the 163Dy(3He,α) reaction, and the first-generation γ-rays in the decay of these states have been isolated. The energy distribution of the first-generation γ-rays provides a new source of information about the nuclear level density over a wide energy region. A broad peak is observed in the first-generation spectra, and the authors suggest an interpretation in terms of enhanced M1 transitions between different high-j Nilsson orbitals. 30 refs., 9 figs., 2 tabs
Statistical simulation of ensembles of precipitation fields for data assimilation applications
Haese, Barbara; Hörning, Sebastian; Chwala, Christian; Bárdossy, András; Schalge, Bernd; Kunstmann, Harald
2017-04-01
The simulation of the hydrological cycle by models is an indispensable tool for a variety of environmental challenges such as climate prediction, water resources management, and flood forecasting. One of the crucial variables within the hydrological system, and accordingly one of the main drivers of terrestrial hydrological processes, is precipitation. A correct reproduction of the spatio-temporal distribution of precipitation is crucial for the quality and performance of hydrological applications. In our approach we stochastically generate precipitation fields conditioned on various precipitation observations. Rain gauges provide high-quality information for a specific measurement point, but their spatial representativeness is often limited. Microwave links, e.g. from commercial cellular operators, on the other hand, can be used to estimate line integrals of near-surface rainfall; they provide a very dense observational system compared to rain gauges. A further prevalent source of precipitation information is weather radar, which provides information on rainfall patterns. In our approach we derive precipitation fields that are conditioned on combinations of these different observation types. As the method to generate precipitation fields we use random mixing. Following this method, a precipitation field is obtained as a linear combination of unconditional spatial random fields, where the spatial dependence structure is described by copulas. The weights of the linear combination are chosen such that the observations and the spatial structure of precipitation are reproduced. One main advantage of the random mixing method is the ability to incorporate linear and non-linear constraints. For a demonstration of the method we use virtual observations generated from a virtual reality of the Neckar catchment. These virtual observations mimic the advantages and disadvantages of real observations. This virtual data set allows us to evaluate simulated
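Stripped to its core, random mixing writes the conditional field as a weighted sum of unconditional fields, with weights solved from the observation constraints. A minimal sketch with point (gauge-like) constraints only; the full method additionally constrains the weight norm to preserve the field variance, treats non-linear constraints such as microwave-link line integrals iteratively, and works with copula-transformed fields, none of which is shown.

```python
import numpy as np

rng = np.random.default_rng(13)

def unconditional_field(n, length=10.0):
    """Unconditional Gaussian field on an n x n grid (FFT with Gaussian spectrum)."""
    k = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
    amp = np.exp(-(k * length) ** 2)
    f = np.fft.ifft2(amp * np.exp(2j * np.pi * rng.random((n, n)))).real
    return (f - f.mean()) / f.std()

# Condition a linear combination of M unconditional fields on point observations
n, M = 64, 40
obs_ij = [(5, 7), (20, 40), (50, 12)]          # hypothetical gauge locations
obs_val = np.array([1.2, -0.3, 0.8])           # hypothetical (normal-score) values

fields = np.stack([unconditional_field(n) for _ in range(M)])   # (M, n, n)
A = np.stack([fields[:, i, j] for i, j in obs_ij])              # (obs, M)
w, *_ = np.linalg.lstsq(A, obs_val, rcond=None)                 # mixing weights
mix = np.tensordot(w, fields, axes=1)                           # conditional field
print([round(mix[i, j], 3) for i, j in obs_ij])                 # reproduces obs
```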
Huttary, Rudolf; Goubergrits, Leonid; Schütte, Christof; Bernhard, Stefan
2017-08-01
It has not yet been possible to obtain modeling approaches suitable for covering a wide range of real-world scenarios in cardiovascular physiology, because many of the system parameters are uncertain or even unknown. Natural variability and statistical variation of cardiovascular system parameters in healthy and diseased conditions are characteristic features for understanding cardiovascular diseases in more detail. This paper presents SISCA, a novel software framework for cardiovascular system modeling, and its MATLAB implementation. The framework defines a multi-model statistical ensemble approach for dimension-reduced, multi-compartment models and focuses on statistical variation, system identification, and patient-specific simulation based on clinical data. We also discuss a data-driven modeling scenario as a use-case example. The regarded dataset originated from routine clinical examinations and comprised typical pre- and post-surgery clinical data from a patient diagnosed with coarctation of the aorta. We conducted patient- and disease-specific pre/post-surgery modeling by adapting a validated nominal multi-compartment model with respect to structure and parametrization, using metadata and MRI geometry. In both models, the simulation reproduced measured pressures and flows fairly well with respect to stenosis and stent treatment, including the pre-treatment phase shift of the pulse wave across the stenosis. However, with post-treatment data showing unrealistic phase shifts and other more obvious inconsistencies within the dataset, the methods and results we present suggest that conditioning and uncertainty management of routine clinical data sets need significantly more attention to obtain reasonable results in patient-specific cardiovascular modeling.
Więckowska, Barbara; Marcinkowska, Justyna
2017-11-06
When searching for epidemiological clusters, an important tool can be to carry out one's own research with the incidence rate from the literature as the reference level. Values exceeding this level may indicate the presence of a cluster in that location. This paper presents a method of searching for clusters that have significantly higher incidence rates than those specified by the investigator. The proposed method uses the classic binomial exact test for one proportion and an algorithm that joins areas with potential clusters while reducing the number of multiple comparisons needed. The new method preserves sensitivity and specificity while avoiding the Monte Carlo approach, and it delivers results comparable to the commonly used Kulldorff's scan statistic and other similar methods for localising clusters. The accompanying statistical software also allows the results to be analysed and presented cartographically.
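The building block of the proposed method, the one-proportion exact test against a literature rate, is a single scipy call (the counts and reference rate below are hypothetical; the area-joining algorithm and the multiplicity reduction are the paper's contribution and are not sketched):

```python
from scipy import stats

# Does a district's incidence exceed the literature reference rate?
cases, population = 23, 10000
reference_rate = 0.0012                 # hypothetical incidence from the literature

res = stats.binomtest(cases, population, reference_rate, alternative='greater')
print(f"observed rate = {cases/population:.4f}, p = {res.pvalue:.4f}")
```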
Sundberg, R.; Moberg, A.; Hind, A.
2012-08-01
A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.
BML and MSDL for multi-level simulations
Ruiz, J.; Désert, D.; Hubervic, A.; Guillou, P.; Jansen, R.E.J.; Reus, N. de; Henderson, H.C.; Fauske, K.M.; Olsson, L.
2013-01-01
Military training needs to reflect the complexity of real-world operations. The main training audience is currently often focused at a certain level (e.g. joint headquarters staff, component headquarters staff, platoon leader, individual combatant), but it will typically include some interactions
Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme
2013-04-01
Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of these joint probabilities to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows a more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte Carlo simulation and the design contours method, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme values approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.
Statistical Evaluation of the Emissions Level Of CO, CO2 and HC Generated by Passenger Cars
Directory of Open Access Journals (Sweden)
Claudiu Ursu
2014-12-01
Full Text Available This paper aims to evaluate differences in the emission levels of CO, CO2 and HC generated by passenger cars in different idle regimes and at different times, in order to identify measures for reducing pollution. A sample of Dacia Logan passenger cars (n = 515), made during the period 2004-2007 and equipped with spark-ignition engines assigned to emission standards EURO 3 (E3) and EURO 4 (E4), was analyzed. These cars were evaluated at the periodical technical inspection (ITP) twice, in the two idle regimes (slow idle and accelerated idle). Using the t test for paired samples (Paired Samples T Test), the results showed that there are significant differences between the emission levels (CO, CO2, HC) generated by Dacia Logan passenger cars at the two assessments, and regression analysis showed that these differences are not significantly influenced by turnover differences.
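The test used is the standard paired-samples t test; a sketch on synthetic CO readings for the same cars in the two regimes (the values are invented, not the Dacia Logan data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)

# Hypothetical CO readings (% vol) for the same cars in the two idle regimes
co_slow = rng.normal(0.30, 0.08, 50)
co_accel = co_slow + rng.normal(0.05, 0.03, 50)   # same cars, second regime

t, p = stats.ttest_rel(co_accel, co_slow)          # paired-samples t test
print(f"t = {t:.2f}, p = {p:.2e}")
```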
Statistical equilibrium in cometary C2. IV. A 10 level model including singlet-triplet transitions
International Nuclear Information System (INIS)
Krishna Swamy, K.S.; O'Dell, C.R.; Rice Univ., Houston, TX
1987-01-01
Resonance fluorescence theory was used to calculate the population distribution in the energy states of the C2 molecule in comets. Ten electronic states, each with 14 vibrational states, were used in the calculations. These new calculations differ from earlier work in terms of additional electronic levels and the role of singlet-triplet transitions between the b and X levels. Since transition moments are not known, calculations are made of observable flux ratios for an array of possible values. Comparison with existing observations indicates that the a-X transition is very important, and there is marginal indication that the b-X transition is present. Swan band sequence flux ratios at large heliocentric distance are needed, as are accurate Mulliken/Swan and Phillips/Ballik-Ramsay (1963) observations. 29 references
Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining
2017-11-01
Assessment of the impact of climate change on crop production while considering uncertainties is essential for properly identifying and deciding on agricultural practices that are sustainable. In this study, we employed 24 climate projections consisting of the combinations of eight GCMs and three emission scenarios, representing the climate projection uncertainty, and two crop statistical models with 100 sets of parameters in each model, representing parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. The multi-model ensemble method is an effective way to deal with the uncertainties. The results of the ensemble simulations showed that maize yield reductions were less than 5% in both future periods relative to the baseline. To further understand the contributions of individual sources of uncertainty, such as climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. Increased ensemble yield variance revealed increasing uncertainty in the yield simulation in the future periods.
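The variance decomposition step can be sketched directly: with yields indexed by climate projection and crop-model parameter set, main-effect shares follow from the variances of the row and column means. Synthetic numbers stand in for the actual simulations.

```python
import numpy as np

rng = np.random.default_rng(15)

# Hypothetical yield anomalies: 24 climate projections x 100 crop-model
# parameter sets (fraction of baseline yield)
yields = (rng.normal(0.0, 0.04, (24, 1))       # spread across climate projections
          + rng.normal(0.0, 0.015, (24, 100))  # spread across model parameters
          - 0.03)

total_var = yields.var()
var_climate = yields.mean(axis=1).var()         # between-projection variance
var_params = yields.mean(axis=0).var()          # between-parameter-set variance
print(f"climate share ~ {var_climate/total_var:.1%}, "
      f"parameter share ~ {var_params/total_var:.1%}")
```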
Simulating the DIRCM engagement: component and system level performance
CSIR Research Space (South Africa)
Willers, CJ
2012-11-01
Full Text Available Infrared countermeasures have evolved from spectrally matched, pyrophoric flares, through jammers with arc lamps and steered (directional) beams, and early directional laser beams for soft kill (jamming and dazzling) with a single head and complex gimbals, to advanced directional laser beams for hard kill, whose power levels cause damage... Directed infrared countermeasures (DIRCM) employ high-power lamps or lasers as sources of infrared energy. The larger aircraft self-protection scenario, comprising the missile, aircraft and DIRCM hardware, is a complex system. In this system, each component presents major...
Simulating European wind power generation applying statistical downscaling to reanalysis data
DEFF Research Database (Denmark)
Gonzalez-Aparicio, I.; Monforti, F.; Volker, Patrick
2017-01-01
generation time series dataset for the EU-28 and neighbouring countries at hourly intervals and at different geographical aggregation levels (country, bidding zone and administrative territorial unit), for a 30-year period, taking into account the wind generating fleet at the end of 2015. ... and characteristics of the wind resource, which is related to the accuracy of the approach in converting wind speed data into power values. One of the main factors contributing to the uncertainty in these conversion methods is the selection of the spatial resolution. Although numerical weather prediction models can ... could not be captured by the use of a reanalysis technique and could be translated into misinterpretations of the wind power peaks, ramping capacities, the behaviour of power prices, as well as bidding strategies for the electricity market. This study contributes to the understanding of what is captured ...
Starace, Fabrizio; Mungai, Francesco; Barbui, Corrado
2018-01-01
In mental healthcare, one area of major concern identified by health information systems is variability in antipsychotic prescribing. While most studies have investigated patient- and prescriber-related factors as possible reasons for such variability, no studies have investigated facility-level characteristics. The present study ascertained whether staffing level is associated with antipsychotic prescribing in community mental healthcare. A cross-sectional analysis of data extracted from the Italian national mental health information system was carried out. For each Italian region, it collects data on the availability and use of mental health facilities. The rate of individuals exposed to antipsychotic drugs was tested for evidence of association with the rate of mental health staff availability by means of univariate and multivariate analyses. In Italy there were on average nearly 60 mental health professionals per 100,000 inhabitants, with wide regional variations (range 21 to 100). The average rate of individuals prescribed antipsychotic drugs was 2.33%, with wide regional variations (1.04% to 4.01%). Univariate analysis showed that the rate of individuals prescribed antipsychotic drugs was inversely associated with the rate of mental health professionals available in Italian regions (Kendall's tau -0.438, p = 0.006), with lower rates of antipsychotic prescriptions in regions with higher rates of mental health professionals. After adjustment for possible confounders, the total availability of mental health professionals was still inversely associated with the rate of individuals exposed to antipsychotic drugs. The evidence that staffing level was inversely associated with antipsychotic prescribing indicates that any actions aimed at decreasing variability in antipsychotic prescribing need to take into account aspects related to the organization of the mental health system.
Development of Simulants to Support Mixing Tests for High Level Waste and Low Activity Waste
International Nuclear Information System (INIS)
EIBLING, RUSSELLE.
2004-01-01
The objectives of this study were to develop two different types of simulants to support vendor agitator design studies and mixing studies. The initial simulant development task was to develop rheologically bounding physical simulants, and the final portion was to develop a nominal chemical simulant designed to match, as closely as possible, the actual sludge from a tank. The physical simulants to be developed included lower and upper rheological bounds for each of the following: a pretreated low activity waste (LAW) physical simulant; a LAW melter feed physical simulant; a pretreated high level waste (HLW) physical simulant; and an HLW melter feed physical simulant. The nominal chemical simulant, hereafter referred to as the HLW Precipitated Hydroxide simulant, is designed to represent the chemical/physical composition of the actual washed and leached sludge sample. The objective was to produce a simulant which matches not only the chemical composition but also the physical properties of the actual waste sample. The HLW Precipitated Hydroxide simulant could then be used for mixing tests to validate mixing, homogeneity and representative sampling and transferring issues. The HLW Precipitated Hydroxide simulant may also be used for integrated nonradioactive testing of the WTP prior to radioactive operation.
Southern hemisphere low level wind circulation statistics from the Seasat scatterometer
Levy, Gad
1994-01-01
Analyses of remotely sensed low-level wind vector data over the Southern Ocean are performed. Five-day averages and monthly means are created and the month-to-month variability during the winter (July-September) of 1978 is investigated. The remotely sensed winds are compared to the Australian Bureau of Meteorology (ABM) and the National Meteorological Center (NMC) surface analyses. In southern latitudes the remotely sensed winds are stronger than what the weather services' analyses suggest, indicating under-estimation by ABM and NMC in these regions. The evolution of the low-level jet and the major stormtracks during the season are studied and different flow regimes are identified. The large-scale variability of the meridional flow is studied with the aid of empirical orthogonal function (EOF) analysis. The dominance of quasi-stationary wave numbers 3, 4, and 5 in the winter flows is evident in both the EOF analysis and the mean flow. The signature of an exceptionally strong blocking situation is evident in July and the special conditions leading to it are discussed. A very large intraseasonal variability with different flow regimes at different months is documented.
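For readers unfamiliar with EOF analysis, a minimal sketch of the technique follows; the synthetic array stands in for the gridded wind fields and is not the study's data.

```python
# A sketch of empirical orthogonal function (EOF) analysis via SVD of the
# anomaly matrix; the data here are synthetic stand-ins for wind fields.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_grid = 90, 500                 # e.g. daily maps on a flattened grid
field = rng.standard_normal((n_time, n_grid))

anomalies = field - field.mean(axis=0)   # remove the time mean at each point
# SVD of the anomaly matrix: rows of vt are the spatial EOF patterns,
# u * s gives the corresponding principal-component time series.
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("variance explained by leading EOFs:", explained[:5])
```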
Longitudinal review of state-level accident statistics for carriers of interstate freight
International Nuclear Information System (INIS)
Saricks, C.; Kvitek, T.
1994-03-01
State-level accident rates by mode of freight transport have been developed and refined for application to the US Department of Energy's (DOE's) environmental mitigation program, which may involve large-quantity shipments of hazardous and mixed wastes from DOE facilities. These rates reflect multi-year data for interstate-registered highway carriers, American Association of Railroads member carriers, and coastal and internal waterway barge traffic. Adjustments have been made to account for the share of highway combination-truck traffic actually attributable to interstate-registered carriers and for duplicate or otherwise inaccurate entries in the public-use accident data files used. State-to-state variation in rates is discussed, as is the stability of rates over time. Computed highway rates have been verified with actual carriers of high- and low-level nuclear materials, and the most recent truck accident data have been used to ensure that the results are of the correct order of magnitude. Study conclusions suggest that DOE use the computed rates for the three modes until (1) improved estimation techniques for highway combination-truck miles by state become available; (2) continued evolution of the railroad industry significantly increases the consolidation of interstate rail traffic onto fewer high-capacity trunk lines; or (3) a large-scale off-site waste shipment campaign is imminent
Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol
2013-12-01
The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in the chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at ASIR levels of 0-50 %, and the noise power spectrum (NPS), signal and noise, peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE) were measured. In addition, the objectivity of the experiment was measured using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, five lesions' resolution, latitude and degree of distortion in the chest phantom were evaluated and the results compiled statistically. The NPS value decreased as the frequency increased. The lowest noise and deviation were at the 20 % ASIR level, mean 126.15 ± 22.21. In the distortion analysis, the signal-to-noise ratio and PSNR were highest at the 20 % ASIR level, at 31.0 and 41.52, respectively, while the maximum absolute error and RMSE were lowest, at 11.2 and 16. In the ACR phantom study, all ASIR levels were within the acceptance limits of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five chest phantom lesions, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level proved to be the best in all experiments: noise, distortion evaluation using ImageJ and qualitative evaluation of five lesions of a chest phantom. Therefore, optimal image quality as well as a reduced radiation dose can be achieved when a 20 % ASIR level is applied in thoracic CT.
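The two quantitative metrics named here are standard and compact to compute; a minimal sketch follows, with arbitrary example arrays rather than the phantom scans.

```python
# A sketch of the image-quality metrics above (RMSE and PSNR) for two arrays.
import numpy as np

def rmse(reference: np.ndarray, test: np.ndarray) -> float:
    return float(np.sqrt(np.mean((reference.astype(float) - test.astype(float)) ** 2)))

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    err = rmse(reference, test)
    return float("inf") if err == 0 else 20.0 * np.log10(max_value / err)

ref = np.random.default_rng(1).integers(0, 256, size=(64, 64))
noisy = np.clip(ref + np.random.default_rng(2).normal(0, 5, size=ref.shape), 0, 255)
print(f"RMSE = {rmse(ref, noisy):.2f}, PSNR = {psnr(ref, noisy):.2f} dB")
```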
An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation
Reid, Michael R.; Powers, Edward I. (Technical Monitor)
2000-01-01
The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.
Wiß, Felix; Stacke, Tobias; Hagemann, Stefan
2014-05-01
Soil moisture and its memory can have a strong impact on near surface temperature and precipitation and have the potential to promote severe heat waves, dry spells and floods. To analyze how soil moisture is simulated in recent general circulation models (GCMs), soil moisture data from a 23 model ensemble of Atmospheric Model Intercomparison Project (AMIP) type simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) are examined for the period 1979 to 2008 with regard to parameterization and statistical characteristics. With respect to soil moisture processes, the models vary in their maximum soil and root depth, the number of soil layers, the water-holding capacity, and the ability to simulate freezing, which together lead to very different soil moisture characteristics. Differences in the water-holding capacity result in deviations in the global median soil moisture of more than one order of magnitude between the models. In contrast, the variance shows similar absolute values when comparing the models to each other. Thus, the input and output rates by precipitation and evapotranspiration, which are computed by the atmospheric component of the models, have to be in the same range. Most models simulate great variances in the monsoon areas of the tropics and north western U.S., intermediate variances in Europe and eastern U.S., and low variances in the Sahara, continental Asia, and central and western Australia. In general, the variance decreases with latitude over the high northern latitudes. As soil moisture trends in the models were found to be negligible, the soil moisture anomalies were calculated by subtracting the 30 year monthly climatology from the data. The length of the memory is determined from the soil moisture anomalies by calculating the first insignificant autocorrelation for ascending monthly lags (insignificant autocorrelation folding time). The models show a great spread of autocorrelation length from a few months in
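The memory estimate described here is easy to make concrete; the sketch below forms monthly anomalies by subtracting the monthly climatology and scans ascending lags for the first insignificant autocorrelation. The 1.96/sqrt(N) threshold is a common large-sample approximation, and the series is synthetic, not CMIP5 output.

```python
# A sketch of the soil moisture memory estimate (first insignificant
# autocorrelation for ascending monthly lags) on a synthetic series.
import numpy as np

rng = np.random.default_rng(3)
n_years = 30
sm = np.cumsum(rng.standard_normal(12 * n_years)) * 0.1 + 50.0  # fake soil moisture

monthly = sm.reshape(n_years, 12)
climatology = monthly.mean(axis=0)          # 30-year monthly climatology
anom = (monthly - climatology).ravel()      # monthly anomalies

def memory_months(x: np.ndarray) -> int:
    n = len(x)
    threshold = 1.96 / np.sqrt(n)           # approximate 5% significance level
    x = x - x.mean()
    var = np.sum(x * x)
    for lag in range(1, n // 2):
        r = np.sum(x[:-lag] * x[lag:]) / var
        if abs(r) < threshold:
            return lag                      # first insignificant autocorrelation
    return n // 2

print("soil moisture memory ≈", memory_months(anom), "months")
```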
Gottwald, Georg; Melbourne, Ian
2013-04-01
Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different to the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.
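To fix ideas, a generic slow-fast skeleton of the kind treated by such homogenization arguments is sketched below; it is an illustrative template in our own notation, not the authors' specific system.

```latex
% A generic slow-fast skeleton, given only to fix ideas.
\begin{align*}
  \dot{x} &= \varepsilon\, h(x)\, v(y), & x &\in \mathbb{R}^d \ \text{(slow)},\\
  \dot{y} &= g(y), & y &\ \text{chaotic (fast)}.
\end{align*}
% On time scales of order \varepsilon^{-2} (with v of zero mean), x converges
% weakly to the solution X of an SDE with multiplicative noise,
%   dX = b(X)\,dt + \sigma(X)\,dW,
% where the interpretation of the stochastic integral (Ito, Stratonovich,
% or neither) depends on how the fast dynamics is discretized.
```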
Charogiannis, Alexandros; Denner, Fabian; van Wachem, Berend G. M.; Kalliadasis, Serafim; Markides, Christos N.
2017-12-01
We scrutinize the statistical characteristics of liquid films flowing over an inclined planar surface based on film height and velocity measurements that are recovered simultaneously by application of planar laser-induced fluorescence (PLIF) and particle tracking velocimetry (PTV), respectively. Our experiments are complemented by direct numerical simulations (DNSs) of liquid films simulated for different conditions so as to expand the parameter space of our investigation. Our statistical analysis builds upon a Reynolds-like decomposition of the time-varying flow rate that was presented in our previous research effort on falling films in [Charogiannis et al., Phys. Rev. Fluids 2, 014002 (2017), 10.1103/PhysRevFluids.2.014002], and which reveals that the dimensionless ratio of the unsteady term to the mean flow rate increases linearly with the product of the coefficients of variation of the film height and bulk velocity, as well as with the ratio of the Nusselt height to the mean film height, both at the same upstream PLIF/PTV measurement location. Based on relations that are derived to describe these results, a methodology for predicting the mass-transfer capability (through the mean and standard deviation of the bulk flow speed) of these flows is developed in terms of the mean and standard deviation of the film thickness and the mean flow rate, which are considerably easier to obtain experimentally than velocity profiles. The errors associated with these predictions are estimated at ≈1.5% and 8%, respectively, in the experiments and at <1% and <2%, respectively, in the DNSs. Beyond the generation of these relations for the prediction of important film flow characteristics based on simple flow information, the data provided can be used to design improved heat- and mass-transfer equipment, reactors, or other process operation units which exploit film flows, but also to develop and validate multiphase flow models in other physical and technological settings.
Development of a two-level modular simulation tool for dysim
International Nuclear Information System (INIS)
Kofoed, J.E.
1987-07-01
A simulation tool to assist the user when constructing continuous simulation models is described. The simulation tool can be used for constructing simulation programmes that are executed with the runtime executive DYSIM86, which applies a modular approach. This approach makes it possible to split a model into several modules. The simulation tool introduces one more level of modularity. At this level a module is constructed from submodules taken from a library. A submodule consists of a submodel for a component in the complete model. The simulation tool consists of two precompilers working on the two different levels of modularity. The library is completely open to the user so that it is possible to extend it. This is done by a routine which is also part of the simulation tool. The simulation tool is demonstrated by simulating a part of a power plant and a part of a sugar factory. This illustrates that the precompilers can be used for simulating different types of process plants. 69 ill., 13 tabs., 41 refs. (author)
Fracturing of simulated high-level waste glass in canisters
International Nuclear Information System (INIS)
Peters, R.D.; Slate, S.C.
1981-09-01
Waste-glass castings generated from engineering-scale developmental processes at the Pacific Northwest Laboratory are generally found to have significant levels of cracks. The causes and extent of fracturing in full-scale canisters of waste glass as a result of cooling and accidental impact are discussed. Although the effects of cracking on waste-form performance in a repository are not well understood, cracks in waste forms can potentially increase leaching surface area. If cracks are minimized or absent in the waste-glass canisters, the potential for radionuclide release from the canister package can be reduced. Additional work on the effects of cracks on leaching of glass is needed. In addition to investigating the extent of fracturing of glass in waste-glass canisters, methods to reduce cracking by controlling cooling conditions were explored. Overall, the study shows that the extent of glass cracking in full-scale, passively cooled canisters produced by continuous melting is strongly dependent on the cooling rate. This observation agrees with results of previously reported Pacific Northwest Laboratory experiments on bench-scale annealed canisters. Thus, the cause of cracking is principally bulk thermal stresses. Fracture damage resulting from shearing at the glass/metal interface also contributes to cracking, more so in stainless steel canisters than in carbon steel canisters. This effect can be reduced or eliminated with a graphite coating applied to the inside of the canister. Thermal fracturing can be controlled by using a fixed amount of insulation for filling and cooling of canisters. In order to maintain production rates, a small amount of additional facility space is needed to accommodate slow-cooling canisters. Alternatively, faster cooling can be achieved using the multi-staged approach. Additional development is needed before this approach can be used on full-scale (60-cm) canisters.
International Nuclear Information System (INIS)
Liu Huigen; Zhou Jilin; Wang Su
2011-01-01
During the late stage of planet formation, when Mars-sized cores appear, interactions among planetary cores can excite their orbital eccentricities, accelerate their merging, and thus sculpt their final orbital architecture. This study contributes to the final assembling of planetary systems with N-body simulations, including the type I or II migration of planets and gas accretion of massive cores in a viscous disk. Statistics on the final distributions of planetary masses, semimajor axes, and eccentricities are derived and are comparable to those of the observed systems. Our simulations predict some new orbital signatures of planetary systems around solar mass stars: 36% of the surviving planets are giant planets (>10 M⊕). Most of the massive giant planets (>30 M⊕) are located at 1-10 AU. Terrestrial planets are distributed more or less evenly at ... in highly eccentric orbits (e > 0.3-0.4). The average eccentricity (∼0.15) of the giant planets (>10 M⊕) is greater than that (∼0.05) of the terrestrial planets (<10 M⊕).
International Nuclear Information System (INIS)
Wu, Hao-Yi; Hahn, Oliver; Wechsler, Risa H.; Mao, Yao-Yuan; Behroozi, Peter S.
2013-01-01
We present the first results from the RHAPSODY cluster re-simulation project: a sample of 96 'zoom-in' simulations of dark matter halos of 10^(14.8±0.05) h^(-1) M☉, selected from a 1 h^(-3) Gpc^3 volume. This simulation suite is the first to resolve this many halos with ∼5 × 10^6 particles per halo in the cluster mass regime, allowing us to statistically characterize the distribution of and correlation between halo properties at fixed mass. We focus on the properties of the main halos and how they are affected by formation history, which we track back to z = 12, over five decades in mass. We give particular attention to the impact of the formation history on the density profiles of the halos. We find that the deviations from the Navarro-Frenk-White (NFW) model and the Einasto model depend on formation time. Late-forming halos tend to have considerable deviations from both models, partly due to the presence of massive subhalos, while early-forming halos deviate less but still significantly from the NFW model and are better described by the Einasto model. We find that the halo shapes depend only moderately on formation time. Departure from spherical symmetry impacts the density profiles through the anisotropic distribution of massive subhalos. Further evidence of the impact of subhalos is provided by analyzing the phase-space structure. A detailed analysis of the properties of the subhalo population in RHAPSODY is presented in a companion paper.
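The two profile models compared here have simple closed forms; a sketch follows with illustrative parameter values that are not RHAPSODY fits.

```python
# A sketch of the two density-profile models compared above.
import numpy as np

def nfw(r, rho_s, r_s):
    """Navarro-Frenk-White profile: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def einasto(r, rho_2, r_2, alpha):
    """Einasto profile: rho(r) = rho_-2 * exp(-(2/alpha)[(r/r_-2)^alpha - 1])."""
    return rho_2 * np.exp(-(2.0 / alpha) * ((r / r_2) ** alpha - 1.0))

r = np.logspace(-2, 1, 50)                 # radii in units of r_s
ratio = einasto(r, 1.0, 1.0, 0.18) / nfw(r, 4.0, 1.0)
print(ratio[:5])   # the models agree at r = r_s but diverge toward small radii
```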
Perkins, Porter J.; Lewis, William; Mulholland, Donald R.
1957-01-01
A statistical study is made of icing data reported from weather reconnaissance aircraft flown by Air Weather Service (USAF). The weather missions studied were flown at fixed flight levels of 500 millibars (18,000 ft) and 700 millibars (10,000 ft) over wide areas of the Pacific, Atlantic, and Arctic Oceans. This report is presented as part of a program conducted by the NACA to obtain extensive icing statistics relevant to aircraft design and operation. The thousands of in-flight observations recorded over a 2- to 4-year period provide reliable statistics on icing encounters for the specific areas, altitudes, and seasons included in the data. The relative frequencies of icing occurrence are presented, together with the estimated icing probabilities and the relation of these probabilities to the frequencies of flight in clouds and cloud temperatures. The results show that aircraft operators can expect icing probabilities to vary widely throughout the year from near zero in the cold Arctic areas in winter up to 7 percent in areas where greater cloudiness and warmer temperatures prevail. The data also reveal a general tendency of colder cloud temperatures to reduce the probability of icing in equally cloudy conditions.
Energy Technology Data Exchange (ETDEWEB)
Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA
2016-04-08
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected tunable model parameters: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
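A quasi-Monte Carlo design of this kind can be sketched with scipy's Sobol sampler; the parameter names and ranges below are invented placeholders, not the study's settings, and 256 points are drawn because powers of two preserve the Sobol balance properties.

```python
# A sketch of quasi-Monte Carlo sampling of a 7-dimensional parameter space.
from scipy.stats import qmc

bounds = {                             # hypothetical scaling factors
    "bvoc_emis": (0.5, 2.0),
    "avoc_emis": (0.5, 2.0),
    "sivoc_emis": (0.25, 4.0),
    "nox_emis": (0.5, 2.0),
    "drydep_gas1": (0.1, 1.0),
    "drydep_gas2": (0.1, 1.0),
    "volatility_switch": (0.0, 1.0),   # thresholded to on/off downstream
}

sampler = qmc.Sobol(d=len(bounds), scramble=True, seed=42)
unit = sampler.random(256)             # low-discrepancy points in [0, 1)^7
lows = [lo for lo, hi in bounds.values()]
highs = [hi for lo, hi in bounds.values()]
samples = qmc.scale(unit, lows, highs) # one row per ensemble member
print(samples.shape)                   # (256, 7)
```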
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
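To make the five encoded parts concrete, the sketch below builds a toy SED-ML-like document with the standard library. The element and attribute names follow the SED-ML specification in spirit, but this fragment is illustrative only and is not guaranteed to validate against the official L1V3 schema.

```python
# A toy SED-ML-like document: model (i), simulation procedure (iii), and a
# task binding them; built with xml.etree for illustration only.
import xml.etree.ElementTree as ET

sed = ET.Element("sedML", level="1", version="3")

models = ET.SubElement(sed, "listOfModels")
ET.SubElement(models, "model", id="m1", language="urn:sedml:language:sbml",
              source="model.xml")                 # (i) which model to use

sims = ET.SubElement(sed, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="sim1", initialTime="0",
              outputStartTime="0", outputEndTime="100",
              numberOfPoints="1000")               # (iii) simulation procedure

tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", id="t1", modelReference="m1",
              simulationReference="sim1")          # bind model to simulation

print(ET.tostring(sed, encoding="unicode"))
```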
International Nuclear Information System (INIS)
Thomas, J.M.; Eberhardt, L.L.; Skalski, J.R.; Simmons, M.A.
1984-05-01
As part of a larger study funded by the US Nuclear Regulatory Commission we have been investigating field sampling strategies and compositing as a means of detecting spills or migration at commercial low-level radioactive waste disposal sites. The overall project is designed to produce information for developing guidance on implementing 10 CFR part 61. Compositing (pooling samples) for detection is discussed first, followed by our development of a statistical test to allow a decision as to whether any component of a composite exceeds a prescribed maximum acceptable level. The question of optimal field sampling designs and an Apple computer program designed to show the difficulties in constructing efficient field designs and using compositing schemes are considered. 6 references, 3 figures, 3 tables
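The deterministic core idea behind compositing for detection is compact enough to state in code; the sketch below shows only that idea, under the assumptions of non-negative concentrations and negligible measurement error, and is not the report's actual statistical test.

```python
# Screening logic behind compositing: if n samples are pooled, each component
# is at most n times the composite mean, so no individual sample can exceed
# the maximum acceptable level (MAL) unless the composite exceeds MAL/n.
def composite_flags(composite_concentration: float, n_samples: int,
                    mal: float) -> bool:
    """Return True if some component *could* exceed the MAL (retest needed)."""
    return composite_concentration > mal / n_samples

# A composite of 8 field samples measuring 0.9 units against a MAL of 10:
print(composite_flags(0.9, 8, 10.0))   # False -> all 8 samples are below MAL
```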
de Savigny, Don; Riley, Ian; Chandramohan, Daniel; Odhiambo, Frank; Nichols, Erin; Notzon, Sam; AbouZahr, Carla; Mitra, Raj; Cobos Muñoz, Daniel; Firth, Sonja; Maire, Nicolas; Sankoh, Osman; Bronson, Gay; Setel, Philip; Byass, Peter; Jakob, Robert; Boerma, Ties; Lopez, Alan D.
2017-01-01
Background: Reliable and representative cause of death (COD) statistics are essential to inform public health policy, respond to emerging health needs, and document progress towards Sustainable Development Goals. However, less than one-third of deaths worldwide are assigned a cause. Civil registration and vital statistics (CRVS) systems in low- and lower-middle-income countries are failing to provide timely, complete and accurate vital statistics, and it will still be some time before they can provide physician-certified COD for every death. Proposals: Verbal autopsy (VA) is a method to ascertain the probable COD and, although imperfect, it is the best alternative in the absence of medical certification. There is extensive experience with VA in research settings but only a few examples of its use on a large scale. Data collection using electronic questionnaires on mobile devices and computer algorithms to analyse responses and estimate probable COD have increased the potential for VA to be routinely applied in CRVS systems. However, a number of CRVS and health system integration issues should be considered in planning, piloting and implementing a system-wide intervention such as VA. These include addressing the multiplicity of stakeholders and sub-systems involved, integration with existing CRVS work processes and information flows, linking VA results to civil registration records, information technology requirements and data quality assurance. Conclusions: Integrating VA within CRVS systems is not simply a technical undertaking. It will have profound system-wide effects that should be carefully considered when planning for an effective implementation. This paper identifies and discusses the major system-level issues and emerging practices, provides a planning checklist of system-level considerations and proposes an overview for how VA can be integrated into routine CRVS systems. PMID:28137194
Ross, Sheldon
2006-01-01
Ross's Simulation, Fourth Edition introduces aspiring and practicing actuaries, engineers, computer scientists and others to the practical aspects of constructing computerized simulation studies to analyze and interpret real phenomena. Readers learn to apply results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes. This text explains how a computer can be used to generate random numbers, and how to use these random numbers to generate the behavior of a stochastic model over time. It presents the statist
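The recipe described here, uniform random numbers driving a stochastic model, is easy to illustrate; the sketch below uses the inverse-transform method to turn uniforms into exponential inter-arrival times, simulating a Poisson arrival process. It is a generic example in the spirit of the text, not an excerpt from the book.

```python
# Inverse-transform sampling: U ~ Uniform(0,1) becomes Exponential(rate),
# which drives a simple arrival process over time.
import random
import math

def exponential(rate: float) -> float:
    u = random.random()                  # U ~ Uniform(0, 1)
    return -math.log(1.0 - u) / rate     # inverse CDF of Exponential(rate)

random.seed(7)
t, horizon, rate = 0.0, 10.0, 1.5
arrivals = []
while True:
    t += exponential(rate)
    if t > horizon:
        break
    arrivals.append(t)

print(f"{len(arrivals)} arrivals in {horizon} time units (expect ~{rate * horizon:.0f})")
```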
Out-of-order parallel discrete event simulation for electronic system-level design
Chen, Weiwei
2014-01-01
This book offers readers a set of new approaches, tools, and techniques for facing challenges in the parallelization of embedded system design. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifyin
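For context, the sequential core that PDES parallelizes is a time-ordered event queue; a minimal sketch follows, with invented event names for illustration.

```python
# A minimal sequential discrete event simulation core: a time-ordered event
# queue processed one event at a time; each event may schedule later events.
import heapq

def run(events, horizon):
    """events: list of (time, name, action) tuples; action returns new events."""
    queue = list(events)
    heapq.heapify(queue)                      # order events by timestamp
    clock = 0.0
    while queue:
        clock, name, action = heapq.heappop(queue)
        if clock > horizon:
            break
        print(f"t={clock:5.2f}  {name}")
        for new_event in action(clock):       # causality: new events are later
            heapq.heappush(queue, new_event)
    return clock

# A toy model: a 'tick' event that reschedules itself every 2.5 time units.
def tick(now):
    return [(now + 2.5, "tick", tick)]

run([(0.0, "tick", tick)], horizon=10.0)
```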
Level-one modules library for DSNP: Dynamic Simulator for Nuclear Power-plants
International Nuclear Information System (INIS)
Saphier, D.
1978-09-01
The Dynamic Simulator for Nuclear Power-plants (DSNP) is a system of programs and data sets by which a nuclear power plant or part thereof can be simulated at different levels of sophistication. The acronym DSNP is used interchangeably for the DSNP language, for the DSNP precompiler, for the DSNP libraries, and for the DSNP document generator. The DSNP language is a set of simple block oriented statements, which together with the appropriate data, comprise a simulation of a nuclear power plant. The majority of the DSNP statements will result in the inclusion of a simulated physical module into the program. FORTRAN statements can be inserted with no restrictions among DSNP statements
Digitization and simulation realization of full range control system for steam generator water level
International Nuclear Information System (INIS)
Qian Hong; Ye Jianhua; Qian Fei; Li Chao
2010-01-01
In this paper, a full-range digital control system for the steam generator water level is designed using a scheme that combines single-element control with three-element cascade feed-forward control, and a software module configuration method is proposed to realize the water level control strategy. This control strategy is then applied in the operation of a nuclear power plant simulator. The simulation curves indicate that the steam generator water level remains constant under steady-state operation and, when the load changes, the water level deviates but finally returns to the setpoint. (authors)
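A schematic of the three-element idea is sketched below: a PI controller on level sets a feedwater demand, the steam/feedwater flow mismatch is added as feed-forward, and a one-node mass balance closes the loop. All gains and the plant model are invented for illustration and are not the paper's design.

```python
# Schematic three-element feed-forward water level control on a toy tank.
kp, ki = 2.0, 0.4                 # PI gains on level error (hypothetical)
dt, level, setpoint = 0.1, 0.0, 0.0
integral = 0.0
feed_flow = steam_flow = 1.0

for step in range(300):
    if step == 50:
        steam_flow = 1.3                      # load-increase disturbance
    error = setpoint - level
    integral += error * dt
    flow_demand = kp * error + ki * integral  # level (first) element
    flow_demand += steam_flow - feed_flow     # flow-mismatch feed-forward
    feed_flow += 0.5 * flow_demand * dt       # sluggish feedwater valve
    level += (feed_flow - steam_flow) * dt    # mass balance -> level

print(f"final level deviation: {level:+.4f}")  # returns near the setpoint
```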
Top-Level Simulation of a Smart-Bolometer Using VHDL Modeling
Directory of Open Access Journals (Sweden)
Matthieu DENOUAL
2012-03-01
Full Text Available An event-driven modeling technique in standard VHDL is presented in this paper for the high level simulation of a resistive bolometer operating in closed-loop mode and implementing smart functions. The closed-loop mode operation is achieved by the capacitively coupled electrical substitution technique. The event-driven VHDL modeling technique is successfully applied to behavioral modeling and simulation of such a multi-physics system involving optical, thermal and electronic mechanisms. The modeling technique allows high-level simulations for the development and validation of the smart-function algorithms of the future integrated smart device.
A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure
International Nuclear Information System (INIS)
Liu Jizhi; Chen Xingbi
2009-01-01
A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)
Directory of Open Access Journals (Sweden)
Dong-Hoon Jeong
2017-07-01
Full Text Available Naval ships are assigned many and varied missions. Their performance is critical for mission success, and depends on the specifications of the components. This is why performance analyses of naval ships are required at the initial design stage. Since the design and construction of naval ships take a very long time and incur a huge cost, Modeling and Simulation (M&S) is an effective method for performance analyses. Thus, in this study, a simulation core is proposed to analyze the performance of naval ships considering their specifications. This simulation core can perform the engineering level of simulations, considering the mathematical models for naval ships, such as maneuvering equations and passive sonar equations. Also, the simulation models of the simulation core follow Discrete EVent system Specification (DEVS) and Discrete Time System Specification (DTSS) formalisms, so that simulations can progress over discrete events and discrete times. In addition, applying DEVS and DTSS formalisms makes the structure of simulation models flexible and reusable. To verify the applicability of this simulation core, it was applied to simulations for the performance analyses of a submarine in an Anti-SUrface Warfare (ASUW) mission. These simulations were composed of two scenarios. The first scenario of submarine diving carried out maneuvering performance analysis by analyzing the pitch angle variation and depth variation of the submarine over time. The second scenario of submarine detection carried out detection performance analysis by analyzing how well the sonar of the submarine resolves adjacent targets. The results of these simulations confirm that the simulation core of this study can be applied to the performance analyses of naval ships considering their specifications.
Energy Technology Data Exchange (ETDEWEB)
Diletti, G.; Scortichini, G.; Conte, A.; Migliorati, G.; Caporale, V. [Ist. Zooprofilattico Sperimentale dell' Abruzzo e del Molise (Italy)
2004-09-15
PCDD/Fs levels exceeding the European Union (EU) tolerance limit were detected in milk and animal feed samples collected in Campania region in the years 2001-2003, as reported in a previous paper [1]. The analyses were performed on milk samples from different animal species (cow, sheep, goat and buffalo) and on animal feed samples (silage, hay, grass, cereals, premixes and mixed feeds), making it possible to assess the levels and the geographical extension of the contamination. The preliminary results of this survey had given clear indications of the dioxin contamination of feedingstuffs and their contribution to the high PCDD/Fs levels recorded in milk, but a more detailed analysis was needed in order to confirm the previous observations. The aim of this work is to evaluate the correlation between the PCDD/Fs levels and patterns found in milk and animal feed samples through the statistical analysis of the congener profiles and concentrations. Moreover, the typical congener profiles of milk samples taken in the area under investigation were compared to those obtained from samples collected in the framework of the National Residues Surveillance Plan (NRSP) in 2003. The contamination phenomenon was also studied by means of spatial correlation analysis.
Mahmud, Mastura
2009-08-01
The large-scale vegetation fires instigated by the local farmers during the dry period of the major El Niño event in 1997 can be considered as one of the worst environmental disasters that have occurred in southeast Asia in recent history. This study investigated the local meteorology characteristics of an equatorial environment within a domain that includes the northwestern part of Borneo from the 17 to 27 September 1997 during the height of the haze episode by utilizing a limited area three-dimensional meteorological and dispersion model, The Air Pollution Model (TAPM). Daily land and sea breeze conditions near the northwestern coast of Borneo in the state of Sarawak, Malaysia were predicted with moderate success, with an index of agreement below one between observed and simulated wind speeds and a slight overprediction, 2.3, in the skill indicator that compares the simulated standard deviation with that of the observations. The innermost domain of study comprises an area of 24,193 km2, from approximately 109°E to 111°E, and from 1°N to 2.3°N, which includes a part of the South China Sea. Tracer analysis of air particles that were sourced in the state of Sarawak on the island of Borneo verified the existence of the landward and shoreward movements of the air during the simulation of the low level wind field. Polluted air particles were transported seawards during night-time, and landwards during daytime, highlighting the recirculation features of aged and newer air particles during the length of eleven days throughout the model simulation. Near calm conditions at low levels were simulated by the trajectory analysis from midnight to mid-day on the 22 of September 1997. Low-level turbulence within the planetary boundary layer in terms of the total kinetic energy was weak, congruent with the weak strength of low level winds that reduced the ability of the air to transport the pollutants. Statistical evaluation showed that parameters such as the systematic
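The two evaluation statistics used here are standard model-verification measures; a sketch follows, with synthetic stand-ins for the wind data.

```python
# Willmott's index of agreement (d, bounded above by 1) and the ratio of
# simulated to observed standard deviation, on synthetic series.
import numpy as np

def index_of_agreement(obs: np.ndarray, sim: np.ndarray) -> float:
    obs_mean = obs.mean()
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

rng = np.random.default_rng(5)
obs = 3.0 + rng.standard_normal(200)            # "observed" wind speeds
sim = obs + 0.5 * rng.standard_normal(200)      # imperfect "model"

print(f"index of agreement d = {index_of_agreement(obs, sim):.3f}")
print(f"std-dev skill (sim/obs) = {sim.std() / obs.std():.2f}")
```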
Identification and simulation for steam generator water level based on Kalman Filter
International Nuclear Information System (INIS)
Deng Chen; Zhang Qinshun
2008-01-01
In order to effectively control the water level of the steam generator (SG), this paper draws on state-observer theory from modern control and puts forward a method to detect the 'false water level' based on a Kalman filter. The Kalman filter is an efficient tool for estimating state variables from noisy measurements. Given the heavy measurement noise on steam flow, constructing a 'false water level' observer with a Kalman filter can effectively recover the 'false water level' state variable. Simulations of the dynamic characteristics of the nuclear SG water level process at several typical operating power levels were carried out with the simulation model. The results show that the simulation model accurately identifies the 'false water level' produced by the reverse thermal-dynamic effects of the nuclear SG water level process. The simulation model enables precise analysis of the dynamic characteristics of the nuclear SG water level process, and it can provide new ideas for detecting the 'false water level' of SGs. (authors)
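A minimal scalar Kalman filter conveys the estimation idea the observer builds on; the random-walk state model and all noise levels below are illustrative, not the paper's SG model.

```python
# A minimal 1D Kalman filter: estimate a hidden level from noisy measurements.
import random

def kalman_1d(measurements, q=1e-4, r=0.5):
    """q: process-noise variance, r: measurement-noise variance."""
    x, p = 0.0, 1.0                    # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += q                         # predict (random-walk state model)
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update with measurement innovation
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(11)
true_level = 1.0
zs = [true_level + random.gauss(0.0, 0.7) for _ in range(100)]
print(f"last estimate: {kalman_1d(zs)[-1]:.3f} (true value {true_level})")
```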
A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips
Directory of Open Access Journals (Sweden)
Guanyi Sun
2011-01-01
Full Text Available Today's System-on-Chips (SoCs) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work is aimed at delivering a high simulation throughput and, at the same time, guaranteeing a high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed no more than a factor of 35 slower than hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve a high accuracy after hardware-based calibration. Experimental results on a set of mobile applications proved that the difference between the simulated and measured results of timing performance is within 10%, which in the past could only be attained by cycle-accurate models.
Optimization of simulated moving bed (SMB) chromatography: a multi-level optimization procedure
DEFF Research Database (Denmark)
Jørgensen, Sten Bay; Lim, Young-il
2004-01-01
objective functions (productivity and desorbent consumption), employing the standing wave analysis, the true moving bed (TMB) model and the simulated moving bed (SMB) model. The procedure is constructed on a non-worse solution property advancing level by level and its solution does not mean a global optimum...
Simulation and Analysis of a Grid Connected Multi-level Converter Topologies and their Comparison
Directory of Open Access Journals (Sweden)
Mohammad Shadab Mirza
2014-09-01
Full Text Available This paper presents simulation and analysis of grid-connected multi-level converter topologies. In this paper, the converter circuit works as an inverter by controlling the switching angle (α). This paper presents a MATLAB/SIMULINK model of multi-level converter topologies (topology 1 & topology 2). Topology 1 is without a transformer, while topology 2 includes one. Both topologies are simulated and analyzed for three-level converters in order to reduce the total harmonic distortion (THD). A comparative study of topology 1 and topology 2 is also presented in this paper for different switching angles (α) and battery voltages. The results have been tabulated and discussed.
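The THD figure of merit used to compare the topologies is straightforward to compute from a spectrum; the sketch below applies it to a crude three-level staircase stand-in for an inverter output, with an assumed switching angle of 30 degrees, and is not the paper's SIMULINK model.

```python
# THD = RMS of all harmonics above the fundamental / RMS of the fundamental,
# computed from FFT magnitudes (the normalization cancels in the ratio).
import numpy as np

n, alpha = 4096, np.deg2rad(30.0)
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
wave = np.where((theta > alpha) & (theta < np.pi - alpha), 1.0, 0.0) \
     - np.where((theta > np.pi + alpha) & (theta < 2 * np.pi - alpha), 1.0, 0.0)

spectrum = np.abs(np.fft.rfft(wave))    # bin k = k-th harmonic over the window
fundamental = spectrum[1]
harmonics = spectrum[2:n // 2]
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD ≈ {100 * thd:.1f} %")       # ~31 % for this waveform
```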
Ilyas, Asim; Shah, Munir H
2017-05-12
The present study was designed to investigate the role of selected essential and toxic metals in the onset/prognosis of valvular heart disease (VHD). A nitric acid-perchloric acid based wet digestion procedure was used for the quantification of the metals by flame atomic absorption spectrophotometry. Comparative appraisal of the data revealed that average levels of Cd, Co, Cr, Fe, K, Li, Mn and Zn were significantly higher in blood of VHD patients, while the average concentration of Ca was found at elevated level in controls (P < 0.05). However, Cu, Mg, Na, Sr and Pb depicted almost comparable levels in the blood of both donor groups. The correlation study revealed significantly different mutual associations among the metals in the blood of VHD patients compared with the controls. Multivariate statistical methods showed substantially divergent grouping of the metals for the patients and controls. Some significant differences in the metal concentrations were also observed with gender, abode, dietary/smoking habits and occupations of both donor groups. Overall, the study demonstrated that disproportions in the concentrations of essential/toxic metals in the blood are involved in pathogenesis of the disease.
Yura, Harold T; Fields, Renny A
2011-06-20
Level crossing statistics is applied to the complex problem of atmospheric turbulence-induced beam wander for laser propagation from ground to space. A comprehensive estimate of the single-axis wander angle temporal autocorrelation function and the corresponding power spectrum is used to develop, for the first time to our knowledge, analytic expressions for the mean angular level crossing rate and the mean duration of such crossings. These results are based on an extension and generalization of a previous seminal analysis of the beam wander variance by Klyatskin and Kon. In the geometrical optics limit, we obtain an expression for the beam wander variance that is valid for both an arbitrarily shaped initial beam profile and transmitting aperture. It is shown that beam wander can disrupt bidirectional ground-to-space laser communication systems whose small apertures do not require adaptive optics to deliver uniform beams at their intended target receivers in space. The magnitude and rate of beam wander is estimated for turbulence profiles enveloping some practical laser communication deployment options and suggesting what level of beam wander effects must be mitigated to demonstrate effective bidirectional laser communication systems.
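For orientation, the standard Rice-type results for a stationary zero-mean Gaussian process are sketched below; the paper generalizes this machinery to the beam-wander autocorrelation, so treat these only as the textbook starting point, not the authors' final expressions.

```latex
% Rice-type results for a stationary, zero-mean Gaussian process \theta(t)
% with variance \sigma^2 and derivative variance \sigma_{\dot\theta}^2.
\begin{align*}
  \langle n(u) \rangle &= \frac{1}{2\pi}\,
      \frac{\sigma_{\dot\theta}}{\sigma}\,
      \exp\!\left(-\frac{u^2}{2\sigma^2}\right)
      && \text{(mean rate of upcrossings of level } u\text{)}\\[4pt]
  \langle \tau(u) \rangle &= \frac{\Pr\{\theta > u\}}{\langle n(u) \rangle}
      && \text{(mean duration of an excursion above } u\text{)}
\end{align*}
```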
Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K
2012-03-01
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. Copyright © 2011 Elsevier B.V. All rights reserved.
Goderniaux, Pascal; Brouyère, Serge; Blenkinsop, Stephen; Burton, Aidan; Fowler, Hayley; Dassargues, Alain
2010-05-01
applied not only to the mean of climatic variables, but also across the statistical distributions of these variables. This is important as these distributions are expected to change in the future, with more extreme rainfall events, separated by longer dry periods. (2) The novel approach used in this study can simulate transient climate change from 2010 to 2085, rather than time series representative of a stationary climate for the period 2071-2100. (3) The weather generator is used to generate a large number of equiprobable climate change scenarios for each RCM, representative of the natural variability of the weather. All of these scenarios are applied as input to the Geer basin model to assess the projected impact of climate change on groundwater levels, the uncertainty arising from different RCM projections and the uncertainty linked to natural climatic variability. Using the output results from all scenarios, 95% confidence intervals are calculated for each year and month between 2010 and 2085. The climate change scenarios for the Geer basin model predict hotter and drier summers and warmer and wetter winters. Considering the results of this study, it is very likely that groundwater levels and surface flow rates in the Geer basin will decrease by the end of the century. This is of concern because it also means that groundwater quantities available for abstraction will also decrease. However, this study also shows that the uncertainty of these projections is relatively large compared to the projected changes so that it remains difficult to confidently determine the magnitude of the decrease. The combination of an integrated surface-subsurface model with stochastic climate change scenarios has not been used in previous climate change impact studies on groundwater resources. It constitutes an innovation and an important tool for helping water managers to take decisions.
Treur, M.; Postma, M.
2014-01-01
Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, the computational burden of reaching convergence is a notorious barrier. The objective was to assess the impact of using
Sadovskii, Michael V
2012-01-01
This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.
Directory of Open Access Journals (Sweden)
NIMARA, S.
2016-02-01
Full Text Available This paper proposes a data-dependent reliability evaluation methodology for digital systems described at Register Transfer Level (RTL). It uses a hybrid hierarchical approach, combining the accuracy provided by Gate Level (GL) Simulated Fault Injection (SFI) with the low simulation overhead required by RTL fault injection. The methodology comprises the following steps: the correct simulation of the RTL system, according to a set of input vectors; hierarchical decomposition of the system into basic RTL blocks; logic synthesis of basic RTL blocks; data-dependent SFI for the GL netlists; and RTL SFI. The proposed methodology has been validated in terms of accuracy on a medium-sized circuit - the parallel comparator used in the Check Node Unit (CNU) of Low-Density Parity-Check (LDPC) decoders. The methodology has been applied for the reliability analysis of a 128-bit Advanced Encryption Standard (AES) crypto-core, for which the GL simulation was prohibitive in terms of required computational resources.
Efficient Uplink Modeling for Dynamic System-Level Simulations of Cellular and Mobile Networks
Directory of Open Access Journals (Sweden)
Lobinger Andreas
2010-01-01
Full Text Available A novel theoretical framework for uplink simulations is proposed. It allows investigations that have to cover a very long (real-)time span and that at the same time require a certain level of accuracy in terms of radio resource management, quality of service, and mobility. This is of particular importance for simulations of self-organizing networks. For this purpose, conventional system level simulators are not suitable due to slow simulation speeds far beyond real-time. Simpler, snapshot-based tools are lacking the aforementioned accuracy. The runtime improvements are achieved by deriving abstract theoretical models for the MAC layer behavior. The focus of this work is Long Term Evolution (LTE), and the most important uplink effects such as fluctuating interference, power control, power limitation, adaptive transmission bandwidth, and control channel limitations are considered. Limitations of the abstract models will be discussed as well. Exemplary results are given at the end to demonstrate the capability of the derived framework.
International Nuclear Information System (INIS)
Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.
2003-01-01
Confidence levels, clinical significance curves, and risk-benefit contours are tools improving analysis of clinical studies and minimizing misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
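The headline calculation, the confidence that Treatment A is at least δ better than B, can be sketched with a normal approximation to the difference of two proportions; the workbook's exact formulas are not published in this abstract, so the sketch below is an independent reconstruction with made-up counts.

```python
# Confidence that Treatment A is at least `delta` better than B, via a
# normal approximation to the difference of two binomial proportions.
from math import sqrt, erf

def confidence_a_beats_b(x_a, n_a, x_b, n_b, delta=0.0):
    p_a, p_b = x_a / n_a, x_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_a - p_b - delta) / se
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

# 60/100 survive on A vs 50/100 on B: confidence A is better, and by >= 10%.
print(f"{confidence_a_beats_b(60, 100, 50, 100):.1%}")
print(f"{confidence_a_beats_b(60, 100, 50, 100, delta=0.10):.1%}")
```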
Enhanced Discrete-Time Scheduler Engine for MBMS E-UMTS System Level Simulator
DEFF Research Database (Denmark)
Pratas, Nuno; Rodrigues, António
2007-01-01
In this paper the design of an E-UMTS system level simulator developed for the study of optimization methods for the MBMS is presented. The simulator uses a discrete event based philosophy, which captures the dynamic behavior of the Radio Network System. This dynamic behavior includes the user...... mobility, radio interfaces and the Radio Access Network. Emphasis is given to the enhancements developed for the simulator core, the Event Scheduler Engine. Two implementations for the Event Scheduler Engine are proposed, one optimized for single-core processors and the other for multi-core ones....
Landsman, V; Lou, W Y W; Graubard, B I
2015-05-20
We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data, constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
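The leaving-one-out jackknife can be sketched as follows for a generic estimator, assuming simple random sampling; the paper itself handles a complex multistage design with survey weights.

```python
# Sketch of leaving-one-out jackknife variance estimation for a generic
# estimator; simple random sampling is assumed here, unlike the paper's
# complex multistage design.
import numpy as np

def jackknife_variance(data, estimator):
    n = len(data)
    theta_full = estimator(data)
    theta_loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    var = (n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2)
    return theta_full, var

rng = np.random.default_rng(0)
exposure_deaths = rng.binomial(1, 0.12, size=200)   # toy mortality indicator
est, var = jackknife_variance(exposure_deaths, np.mean)
print(f"hazard-proxy estimate {est:.3f}, jackknife SE {var**0.5:.3f}")
```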
Yu, Chao; Di Girolamo, Larry; Chen, Liangfu; Zhang, Xueying; Liu, Yang
2015-01-01
The spatial and temporal characteristics of fine particulate matter (PM2.5, particulate matter with aerodynamic diameter smaller than 2.5 μm) have been studied extensively, but little research has been conducted on the association between cloud properties and PM2.5 levels. In this study, we analyzed the relationships between ground PM2.5 concentrations and two satellite-retrieved cloud parameters using data from the Southeastern Aerosol Research and Characterization (SEARCH) Network during 2000-2010. We found that both satellite-retrieved cloud fraction (CF) and cloud optical thickness (COT) are negatively associated with PM2.5 levels. PM2.5 speciation and meteorological analysis suggested that the main reason for these negative relationships might be decreased secondary particle generation. Stratified analyses by season, land use type, and site location showed that seasonal impacts on this relationship are significant. These associations do not vary substantially between urban and rural sites or inland and coastal sites. The statistically significant negative associations of PM2.5 mass concentrations with CF and COT suggest that satellite-retrieved cloud parameters have the potential to serve as predictors to fill the data gap left by satellite aerosol optical depth in satellite-driven PM2.5 models.
Directory of Open Access Journals (Sweden)
Turnbull Arran K
2012-08-01
Full Text Available Abstract Background Affymetrix GeneChips and Illumina BeadArrays are the most widely used commercial single-channel gene expression microarrays. Public data repositories are an extremely valuable resource, providing array-derived gene expression measurements from many thousands of experiments. Unfortunately, many of these studies are underpowered, and it is desirable to improve power by combining data from more than one study; we sought to determine whether platform-specific bias precludes direct integration of probe intensity signals for combined reanalysis. Results Using Affymetrix and Illumina data from the microarray quality control project, from our own clinical samples, and from additional publicly available datasets, we evaluated several approaches to directly integrate intensity-level expression data from the two platforms. After mapping probe sequences to Ensembl genes, we demonstrate that ComBat and cross-platform normalisation (XPN) significantly outperform mean-centering and distance-weighted discrimination (DWD) in terms of minimising inter-platform variance. In particular, we observed that DWD, a popular method used in a number of previous studies, removed systematic bias at the expense of genuine biological variability, potentially reducing legitimate biological differences from integrated datasets. Conclusion Normalised and batch-corrected intensity-level data from Affymetrix and Illumina microarrays can be directly combined to generate biologically meaningful results with improved statistical power for robust, integrated reanalysis.
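For orientation, mean-centering, the simplest of the compared integration methods, can be sketched as below; ComBat and XPN, which performed better in the study, involve empirical Bayes batch adjustment and blockwise normalisation and are available in dedicated packages.

```python
# Gene-wise mean-centering per platform, the simplest of the integration
# methods compared in the paper; toy data stand in for real intensities.
import numpy as np

def mean_center(expr):
    """expr: genes x samples matrix from one platform; center each gene."""
    return expr - expr.mean(axis=1, keepdims=True)

rng = np.random.default_rng(1)
affy = rng.normal(loc=8.0, scale=1.0, size=(5, 10))      # toy Affymetrix intensities
illumina = rng.normal(loc=6.5, scale=1.0, size=(5, 12))  # same genes, offset platform

combined = np.hstack([mean_center(affy), mean_center(illumina)])
print(combined.mean(axis=1))  # per-gene means ~0 after removing platform offsets
```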
Predicting Statistical Distributions of Footbridge Vibrations
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2009-01-01
The paper considers vibration response of footbridges to pedestrian loading. Employing Newmark and Monte Carlo simulation methods, a statistical distribution of bridge vibration levels is calculated, modelling walking parameters such as step frequency and stride length as random variables...
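A sketch of the Monte Carlo part of such an analysis, with walking parameters drawn as random variables and propagated to a response statistic; the single-mode resonant-response formula and all parameter values below are illustrative assumptions, not the paper's bridge model.

```python
# Monte Carlo propagation of random walking parameters to a toy bridge
# response statistic (resonant acceleration of a single-mode system).
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

step_freq = rng.normal(1.8, 0.1, n_sim)       # Hz, pacing frequency
weight = rng.normal(750.0, 100.0, n_sim)      # N, pedestrian weight
dlf = 0.4 * step_freq - 0.35                  # toy dynamic load factor vs frequency

m_modal, zeta = 40_000.0, 0.005               # kg modal mass; damping ratio
accel = dlf * weight / (2 * m_modal * zeta)   # m/s^2, resonant acceleration

print(f"mean {accel.mean():.3f} m/s^2, 95th percentile {np.percentile(accel, 95):.3f} m/s^2")
```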
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to an acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
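The Monte Carlo procedure can be sketched as follows: small synthetic triaxial data sets are generated from the intact-rock H-B criterion and refitted, and the scatter of the fitted parameters indicates the estimation uncertainty. The noise level and parameter values are illustrative assumptions.

```python
# Monte Carlo study of Hoek-Brown parameter uncertainty from small samples:
# sigma_1 = sigma_3 + sqrt(m * sigma_c * sigma_3 + sigma_c^2) for intact rock.
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(s3, m, sc):
    return s3 + np.sqrt(m * sc * s3 + sc**2)   # sigma_1 at failure

rng = np.random.default_rng(7)
m_true, sc_true, n_samples = 10.0, 100.0, 5    # MPa; 5 specimens per data set

fits = []
for _ in range(1000):
    s3 = rng.uniform(0.0, 0.5 * sc_true, n_samples)
    s1 = hoek_brown(s3, m_true, sc_true) * rng.normal(1.0, 0.05, n_samples)
    try:
        (m_hat, sc_hat), _ = curve_fit(hoek_brown, s3, s1, p0=[10.0, 100.0])
        fits.append((m_hat, sc_hat))
    except RuntimeError:
        pass                                    # occasional non-convergence

m_est, sc_est = np.array(fits).T
print(f"m: {m_est.mean():.1f} +/- {m_est.std():.1f}, "
      f"sigma_c: {sc_est.mean():.1f} +/- {sc_est.std():.1f} MPa")
```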
Yan, Koon-Kiu; Gerstein, Mark
2011-01-01
The presence of web-based communities is a distinctive signature of Web 2.0. The web-based feature means that information propagation within each community is highly facilitated, promoting complex collective dynamics in view of information exchange. In this work, we focus on a community of scientists and study, in particular, how the awareness of a scientific paper is spread. Our work is based on the web usage statistics obtained from the PLoS Article Level Metrics dataset compiled by PLoS. The cumulative number of HTML views was found to follow a long tail distribution which is reasonably well-fitted by a lognormal one. We modeled the diffusion of information by a random multiplicative process, and thus extracted the rates of information spread at different stages after the publication of a paper. We found that the spread of information displays two distinct decay regimes: a rapid downfall in the first month after publication, and a gradual power law decay afterwards. We identified these two regimes with two distinct driving processes: a short-term behavior driven by the fame of a paper, and a long-term behavior consistent with citation statistics. The patterns of information spread were found to be remarkably similar in data from different journals, but there are intrinsic differences for different types of web usage (HTML views and PDF downloads versus XML). These similarities and differences shed light on the theoretical understanding of different complex systems, as well as a better design of the corresponding web applications, which has high potential marketing impact.
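A sketch of the random multiplicative process invoked above: view counts grow by random factors, so cumulative counts approach a lognormal distribution. Growth rates and horizons are illustrative, not values fitted in the paper.

```python
# Random multiplicative growth: cumulative sums of random log-rates make the
# final counts approximately lognormal (long-tailed, mean >> median).
import numpy as np

rng = np.random.default_rng(3)
n_papers, n_days = 5000, 365

log_growth = rng.normal(0.01, 0.1, size=(n_papers, n_days))   # daily log-rates
views = 100.0 * np.exp(log_growth.cumsum(axis=1))             # multiplicative growth

final = views[:, -1]
print(f"mean {final.mean():.0f}, median {np.median(final):.0f} "
      "(mean >> median: long-tailed, approximately lognormal)")
```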
Thompson, John
2015-04-01
As the Physical Review Focused Collection demonstrates, recent frontiers in physics education research include systematic investigations at the upper division. As part of a collaborative project, we have examined student understanding of several topics in upper-division thermal and statistical physics. A fruitful context for research is the Boltzmann factor in statistical mechanics: the standard derivation involves several physically justified mathematical steps as well as the invocation of a Taylor series expansion. We have investigated student understanding of the physical significance of the Boltzmann factor as well as its utility in various circumstances, and identified various lines of student reasoning related to the use of the Boltzmann factor. Results from written data as well as teaching interviews suggest that many students do not use the Boltzmann factor when answering questions related to probability in applicable physical situations, even after lecture instruction. We designed an inquiry-based tutorial activity to guide students through a derivation of the Boltzmann factor and to encourage deep connections between the physical quantities involved and the mathematics. Observations of students working through the tutorial suggest that many students at this level can recognize and interpret Taylor series expansions, but they often lack fluency in creating and using Taylor series appropriately, despite previous exposure in both calculus and physics courses. Our findings also suggest that tutorial participation not only increases the prevalence of relevant invocation of the Boltzmann factor, but also helps students gain an appreciation of the physical implications and meaning of the mathematical formalism behind the formula. Supported in part by NSF Grants DUE-0817282, DUE-0837214, and DUE-1323426.
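The Taylor-expansion step at the heart of the derivation can be written compactly as follows (a standard textbook form, not necessarily the tutorial's exact presentation):

```latex
% Reservoir entropy expanded to first order in the system energy E_i
% (using dS/dE = 1/T), which yields the Boltzmann factor:
P(E_i) \;\propto\; \Omega_{\text{res}}(E_{\text{tot}} - E_i)
       \;=\; \exp\!\left[\frac{S_{\text{res}}(E_{\text{tot}} - E_i)}{k_B}\right],
\qquad
S_{\text{res}}(E_{\text{tot}} - E_i) \;\approx\; S_{\text{res}}(E_{\text{tot}}) - \frac{E_i}{T}
\;\;\Longrightarrow\;\;
P(E_i) \;\propto\; e^{-E_i / k_B T}.
```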
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
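A schematic of the ingredients a SED-ML file encodes (model reference, simulation procedure, task, output); the element and attribute names below are simplified for illustration and should not be read as the normative L1V2 schema.

```python
# Schematic SED-ML-like document: which model, which simulation, which task,
# which output. Names are simplified; consult the SED-ML specification for
# the normative schema.
import xml.etree.ElementTree as ET

sed = ET.Element("sedML", level="1", version="2")
ET.SubElement(sed, "model", id="m1", language="urn:sedml:language:sbml",
              source="oscillator.xml")
ET.SubElement(sed, "uniformTimeCourse", id="sim1",
              initialTime="0", outputEndTime="100", numberOfPoints="1000")
ET.SubElement(sed, "task", id="t1", modelReference="m1",
              simulationReference="sim1")
ET.SubElement(sed, "plot2D", id="p1")

print(ET.tostring(sed, encoding="unicode"))
```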
NASA System-Level Design, Analysis and Simulation Tools Research on NextGen
Bardina, Jorge
2011-01-01
A review of the research accomplished in 2009 in the System-Level Design, Analysis and Simulation Tools (SLDAST) of NASA's Airspace Systems Program is presented. This research thrust focuses on the integrated system-level assessment of component-level innovations, concepts and technologies of the Next Generation Air Traffic System (NextGen) under research in the ASP program to enable the development of revolutionary improvements and modernization of the National Airspace System. The review includes the accomplishments on baseline research and the advancements on design studies and system-level assessment, including the cluster analysis as an annualization standard of the air traffic in the U.S. National Airspace, and the ACES-Air MIDAS integration for human-in-the-loop analyses within the NAS air traffic simulation.
Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.
2017-02-01
Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: (i) the image is passed through a color deconvolution step to extract the desired stains; (ii) the generalized fast radial symmetry (GFRS) transform is then applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; (iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology-preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin and fluorescent stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.
Abbout, Adel; Ouerdane, Henni; Goupil, Christophe
2016-01-01
Using the tools of random matrix theory we develop a statistical analysis of the transport properties of thermoelectric low-dimensional systems made of two electron reservoirs set at different temperatures and chemical potentials, and connected through a low-density-of-states two-level quantum dot that acts as a conducting chaotic cavity. Our exact treatment of the chaotic behavior in such devices relies on the scattering matrix formalism and yields analytical expressions for the joint probability distribution functions of the Seebeck coefficient and the transmission profile, as well as the marginal distributions, at arbitrary Fermi energy. The scattering matrices belong to circular ensembles which we sample to numerically compute the transmission function, the Seebeck coefficient, and their relationship. The exact transport coefficients probability distributions are found to be highly non-Gaussian for small numbers of conduction modes, and the analytical and numerical results are in excellent agreement. The system performance is also studied, and we find that the optimum performance is obtained for half-transparent quantum dots; further, this optimum may be enhanced for systems with few conduction modes.
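The sampling step can be sketched as follows: scattering matrices are drawn from the circular orthogonal ensemble (COE, appropriate under time-reversal symmetry) and the transmission is read off the off-diagonal entry; single-mode leads are assumed here for simplicity.

```python
# Sampling the circular orthogonal ensemble: a COE matrix is U^T U with U
# Haar-distributed (CUE); for one mode per lead the transmission is |S_12|^2.
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(5)
n_draws = 20_000

transmissions = np.empty(n_draws)
for i in range(n_draws):
    u = unitary_group.rvs(2, random_state=rng)  # Haar-distributed unitary
    s = u.T @ u                                 # symmetrization gives a COE matrix
    transmissions[i] = np.abs(s[0, 1]) ** 2     # two-terminal transmission

# For COE with one mode per lead, P(T) = 1/(2*sqrt(T)): strongly non-Gaussian
print(f"mean T = {transmissions.mean():.3f} (beta=1 expectation: 1/3)")
```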
DEFF Research Database (Denmark)
Breinholt, Anders; Møller, Jan Kloppenborg; Madsen, Henrik
2012-01-01
While there seems to be consensus that hydrological model outputs should be accompanied by an uncertainty estimate, the appropriate method for uncertainty estimation is not agreed upon, and a debate is ongoing between advocates of formal statistical methods, who consider errors as stochastic..., and GLUE advocates, who consider errors as epistemic, arguing that the requirement of formal statistical approaches that residuals be stationary and conform to a statistical distribution is unrealistic. In this paper we take a formal frequentist approach to parameter estimation and uncertainty... necessary, but the statistical assumptions were nevertheless not 100% justified. The residual analysis showed that significant autocorrelation was present for all simulation models. We believe users of formal approaches to uncertainty evaluation within hydrology, and within environmental modelling in general...
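A sketch of the kind of residual autocorrelation check referred to above, with synthetic residuals standing in for hydrological model output:

```python
# Residual autocorrelation check: compare lagged correlations against the
# approximate 95% band expected for white noise.
import numpy as np

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(11)
resid = np.convolve(rng.normal(size=1000), [1.0, 0.8, 0.5], mode="valid")  # correlated

for lag in (1, 2, 5):
    r = autocorr(resid, lag)
    bound = 1.96 / np.sqrt(len(resid))  # approx. 95% band for white noise
    print(f"lag {lag}: r={r:+.2f} ({'significant' if abs(r) > bound else 'ok'})")
```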
Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.
2005-09-01
Atomistic process simulation is expected to play an important role in the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
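The local Debye length used for smoothing the point-charge distribution follows the standard expression; the sketch below assumes silicon at room temperature with illustrative carrier concentrations.

```python
# Debye length: lambda_D = sqrt(eps * k * T / (q^2 * n)); values for silicon
# at 300 K, with carrier concentrations chosen for illustration.
import numpy as np

EPS0, EPS_SI = 8.854e-12, 11.7        # F/m; relative permittivity of Si
Q, KB = 1.602e-19, 1.381e-23          # C; J/K

def debye_length(n_carriers_cm3, temp_k=300.0):
    n = n_carriers_cm3 * 1e6          # convert cm^-3 to m^-3
    return np.sqrt(EPS0 * EPS_SI * KB * temp_k / (Q**2 * n))

for n in (1e17, 1e18, 1e19):          # typical active dopant concentrations
    print(f"n = {n:.0e} cm^-3 -> lambda_D = {debye_length(n)*1e9:.2f} nm")
```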
Directory of Open Access Journals (Sweden)
Danny Scipión
2009-05-01
Full Text Available The daytime convective boundary layer (CBL) is characterized by strong turbulence that is primarily forced by buoyancy transport from the heated underlying surface. The present study focuses on an example of flow structure of the CBL as observed in the U.S. Great Plains on June 8, 2007. The considered CBL flow has been reproduced using a numerical large eddy simulation (LES), sampled with an LES-based virtual boundary layer radar (BLR), and probed with an actual operational radar profiler. The LES-generated CBL flow data are then ingested by the virtual BLR and treated as a proxy for prevailing atmospheric conditions. The mean flow and turbulence parameters retrieved via each technique (actual radar profiler, virtual BLR, and LES) have been cross-analyzed and reasonable agreement was found between the CBL wind parameters obtained from the LES and those measured by the actual radar. Averaged vertical velocity variance estimates from the virtual and actual BLRs were compared with estimates calculated from the LES for different periods of time. There is good agreement in the estimates from all three sources. Also, values of the vertical velocity skewness retrieved by all three techniques have been inter-compared as a function of height for different stages of the CBL evolution, showing fair agreement with each other. All three retrievals contain positively skewed vertical velocity structure throughout the main portion of the CBL. Radar estimates of the turbulence kinetic energy (eddy) dissipation rate (ε) have been obtained based on the Doppler spectral width of the returned signal for the vertical radar beam. The radar estimates were averaged over time in the same fashion as the LES output data. The agreement between estimates was generally good, especially within the mixing layer. Discrepancies observed above the inversion layer may be explained by a weak turbulence signal in particular flow configurations. The virtual BLR produces voltage
Differentiating levels of surgical experience on a virtual reality temporal bone simulator.
Zhao, Yi C; Kennedy, Gregor; Hall, Richard; O'Leary, Stephen
2010-11-01
Virtual reality simulation is increasingly being incorporated into surgical training and may have a role in temporal bone surgical education. Here we test whether metrics generated by a virtual reality surgical simulation can differentiate between three levels of experience, namely novices, otolaryngology residents, and experienced qualified surgeons. Cohort study. Royal Victorian Eye and Ear Hospital. Twenty-seven participants were recruited. There were 12 experts, six residents, and nine novices. After orientation, participants were asked to perform a modified radical mastoidectomy on the simulator. Comparisons of time taken, injury to structures, and forces exerted were made between the groups to determine which specific metrics would discriminate experience levels. Experts completed the simulated task in significantly shorter time than the other two groups (experts 22 minutes, residents 36 minutes, and novices 46 minutes; P = 0.001). Novices exerted significantly higher average forces when dissecting close to vital structures compared with experts (0.24 Newton [N] vs 0.13 N, P = 0.002). Novices were also more likely to injure structures such as dura compared to experts (23 injuries vs 3 injuries, P = 0.001). Compared with residents, the experts modulated their force between initial cortex dissection and dissection close to vital structures. Using the combination of these metrics, we were able to correctly classify the participants' level of experience 90 percent of the time. This preliminary study shows that measurements of performance obtained from within a virtual reality simulator can differentiate between levels of users' experience. These results suggest that simulator training may have a role in temporal bone training beyond foundational training. Copyright © 2010 American Academy of Otolaryngology–Head and Neck Surgery Foundation. Published by Mosby, Inc. All rights reserved.
Simulation of aerosol flow interaction with a solid body on molecular level
Amelyushkin, Ivan A.; Stasenko, Albert L.
2018-05-01
Physico-mathematical models and a numerical algorithm for two-phase flow interaction with a solid body are developed. Results are presented of molecular-level simulations, performed via the molecular dynamics technique, of droplet motion and impingement upon a rough surface in a real gas boundary layer.
Laboratory simulation of high-level liquid waste evaporation and storage
International Nuclear Information System (INIS)
Anderson, P.A.
1978-01-01
The reprocessing of nuclear fuel generates high-level liquid wastes (HLLW) which require interim storage pending solidification. Interim storage facilities are most efficient if the HLLW is evaporated prior to or during the storage period. Laboratory evaporation and storage studies with simulated waste slurries have yielded data which are applicable to the efficient design and economical operation of actual process equipment.
A thick level set interface model for simulating fatigue-driven delamination in composites
Latifi, M.; Van der Meer, F.P.; Sluys, L.J.
2015-01-01
This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy release rate.
Investigation of Pr I lines by a simulation of their hyperfine patterns: discovery of new levels
International Nuclear Information System (INIS)
Uddin, Zaheer; Siddiqui, Imran; Shamim, Khan; Windholz, L; Zafar, Roohi; Sikander, Rubeka
2012-01-01
Hyperfine structure (hf) patterns of unclassified spectral lines of the praseodymium atom, as they appear in a high-resolution Fourier transform spectrum, have been simulated. In this way, the J-values and hf constants of the levels involved in the transitions were determined. Assuming that only one unknown level participates in a transition, these constants were used to identify the known level. The second, unknown level was then found by subtracting the wave number of the transition from, or adding it to, the wave number of the known level. The existence of the new level was then checked by explaining other unclassified lines with respect to wave number and hf pattern. In this way, 19 new levels of the praseodymium atom were discovered and are presented in this paper. In some cases, the accuracy of the hf constants was improved by laser-induced fluorescence spectroscopy.
A simple mass-conserved level set method for simulation of multiphase flows
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee the overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
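Schematically, the modification can be written as follows (notation ours; the paper derives the source term analytically from the level set and continuity equations):

```latex
% Level set advection with a global source/sink term S(t), chosen so that
% the volume enclosed by the interface (and hence the mass) is conserved;
% H denotes the Heaviside function and \Omega the flow domain.
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = S(t),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega} H(\phi)\,\mathrm{d}\Omega = 0.
```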
Energy Level Statistics of SO(5) Limit of Super-symmetry U(6/4) in Interacting Boson-Fermion Model
International Nuclear Information System (INIS)
Bai Hongbo; Zhang Jinfu; Zhou Xianrong
2005-01-01
We study the energy level statistics of the SO(5) limit of supersymmetry U(6/4) in odd-A nuclei using the interacting boson-fermion model. The nearest neighbor spacing distribution (NSD) and the spectral rigidity (Δ3) are investigated, and the factors that affect the properties of the level statistics are also discussed. The results show that the boson number N is a dominant factor. If N is small, both the interaction strengths of the subgroups SO_B(5) and SO_BF(5) and the spin play important roles in the energy level statistics; however, as N increases, the statistics tend toward the Poisson form.
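A sketch of a nearest neighbor spacing analysis: the spectrum is unfolded to unit mean spacing and the spacing histogram is compared with the Poisson and Wigner (GOE) forms; a synthetic uncorrelated spectrum stands in for the model levels.

```python
# Nearest-neighbor spacing distribution: unfold to <s> = 1, then compare the
# histogram against the Poisson and Wigner-surmise reference curves.
import numpy as np

rng = np.random.default_rng(2)
levels = np.sort(rng.uniform(0, 1000, 2000))   # uncorrelated (Poisson-like) levels

spacings = np.diff(levels)
s = spacings / spacings.mean()                  # crude unfolding to unit mean spacing

poisson = lambda x: np.exp(-x)                                   # regular systems
wigner = lambda x: (np.pi / 2) * x * np.exp(-np.pi * x**2 / 4)   # GOE surmise

hist, edges = np.histogram(s, bins=20, range=(0, 4), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
for x, p in zip(mid[:5], hist[:5]):
    print(f"s={x:.2f}: data {p:.2f}, Poisson {poisson(x):.2f}, Wigner {wigner(x):.2f}")
```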
Schmind, Kendra K.; Blankenship, Erin E.; Kerby. April T.; Green, Jennifer L.; Smith, Wendy M.
2014-01-01
The statistical preparation of in-service teachers, particularly middle school teachers, has been an area of concern for several years. This paper discusses the creation and delivery of an introductory statistics course as part of a master's degree program for in-service mathematics teachers. The initial course development took place before the…
Hassad, Rossi A.
2009-01-01
This study examined the teaching practices of 227 college instructors of introductory statistics (from the health and behavioral sciences). Using primarily multidimensional scaling (MDS) techniques, a two-dimensional, 10-item teaching practice scale, TISS (Teaching of Introductory Statistics Scale), was developed and validated. The two dimensions…
Prinos, Scott T.; Dixon, Joann F.
2016-02-25
Statistical analyses and maps representing mean, high, and low water-level conditions in the surface water and groundwater of Miami-Dade County were made by the U.S. Geological Survey, in cooperation with the Miami-Dade County Department of Regulatory and Economic Resources, to help inform decisions necessary for urban planning and development. Sixteen maps were created that show contours of (1) the mean of daily water levels at each site during October and May for the 2000–2009 water years; (2) the 25th, 50th, and 75th percentiles of the daily water levels at each site during October and May and for all months during 2000–2009; and (3) the differences between mean October and May water levels, as well as the differences in the percentiles of water levels for all months, between 1990–1999 and 2000–2009. The 80th, 90th, and 96th percentiles of the annual maximums of daily groundwater levels during 1974–2009 (a 35-year period) were computed to provide an indication of unusually high groundwater-level conditions. These maps and statistics provide a generalized understanding of the variations of water levels in the aquifer, rather than a survey of concurrent water levels. Water-level measurements from 473 sites in Miami-Dade County and surrounding counties were analyzed to generate statistical analyses. The monitored water levels included surface-water levels in canals and wetland areas and groundwater levels in the Biscayne aquifer.
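The percentile statistics behind such maps reduce to per-site computations of the following kind, with synthetic daily values standing in for the gage records:

```python
# Per-site percentile statistics over roughly a decade of daily water levels.
import numpy as np

rng = np.random.default_rng(9)
daily_levels = {f"site_{i}": rng.normal(2.0 + 0.1 * i, 0.5, 3650)
                for i in range(3)}              # ~10 years of daily values, in feet

for site, levels in daily_levels.items():
    p25, p50, p75 = np.percentile(levels, [25, 50, 75])
    print(f"{site}: 25th {p25:.2f}, median {p50:.2f}, 75th {p75:.2f} ft")
```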
International Nuclear Information System (INIS)
Yang, F; Byrd, D; Bowen, S; Kinahan, P; Sandison, G
2015-01-01
Purpose: Texture metrics extracted from oncologic PET have been investigated with respect to their usefulness as definitive indicants for prognosis in a variety of cancers. Metric calculation is often based on cubic voxels. Most commonly used PET scanners, however, produce rectangular voxels, which may change texture metrics. The objective of this study was to examine the variability of PET texture feature metrics resulting from voxel anisotropy. Methods: Sinograms of the NEMA NU-2 phantom for 18F-FDG were simulated using the ASIM simulation tool. The obtained projection data were reconstructed (3D-OSEM) on grids of rectangular and cubic voxels, producing PET images with voxel sizes of 2.73x2.73x3.27 mm3 and 3.27x3.27x3.27 mm3, respectively. An interpolated dataset, obtained by resampling the rectangular voxel data to an isotropic voxel dimension (3.27 mm), was also considered. For each image dataset, 28 texture parameters based on grey-level co-occurrence matrices (GLCOM), intensity histograms (GLIH), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated within lesions of diameter 33, 28, 22, and 17 mm. Results: In reference to the isotropic image data, texture features computed on the rectangular voxel data varied from -34% to 10% (GLCOM-based), -31% to 39% (GLIH-based), -80% to 161% (GLNDM-based), and -6% to 45% (GLZSM-based), while features computed on the interpolated image data varied from -35% to 23% (GLCOM-based), -27% to 35% (GLIH-based), -65% to 86% (GLNDM-based), and -22% to 18% (GLZSM-based). For the anisotropic data, GLNDM-cplx exhibited the largest extent of variation (161%) while GLZSM-zp showed the least (<1%). As to the interpolated data, GLNDM-busy varied the most (86%) while GLIH-engy varied the least (<1%). Conclusion: Variability of texture appearance on oncologic PET with respect to voxel representation is substantial and feature-dependent. It necessitates consideration of standardized voxel representation for inter
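One common way to obtain an isotropic-voxel dataset of the kind used as reference above is spline resampling, sketched below; the volume is synthetic and the interpolation order is a choice, not necessarily the study's.

```python
# Resampling an anisotropic PET volume to cubic voxels before texture
# metrics are computed; trilinear interpolation via scipy.ndimage.zoom.
import numpy as np
from scipy.ndimage import zoom

vol = np.random.default_rng(4).random((64, 64, 48))   # toy activity volume
voxel = (2.73, 2.73, 3.27)                            # mm, rectangular voxels
target = 3.27                                         # mm, cubic target

factors = tuple(v / target for v in voxel)
vol_iso = zoom(vol, factors, order=1)                 # trilinear resampling
print(vol.shape, "->", vol_iso.shape)
```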
Multi-level Simulation of a Real Time Vibration Monitoring System Component
Robertson, Bryan A.; Wilkerson, Delisa
2005-01-01
This paper describes the development of a custom built Digital Signal Processing (DSP) printed circuit board designed to implement the Advanced Real Time Vibration Monitoring Subsystem proposed by Marshall Space Flight Center (MSFC) Transportation Directorate in 2000 for the Space Shuttle Main Engine Advanced Health Management System (AHMS). This Real Time Vibration Monitoring System (RTVMS) is being developed for ground use as part of the AHMS Health Management Computer-Integrated Rack Assembly (HMC-IRA). The HMC-IRA RTVMS design contains five DSPs which are highly interconnected through individual communication ports, shared memory, and a unique communication router that allows all the DSPs to receive digitized data from two multi-channel analog boards simultaneously. This paper will briefly cover the overall board design but will focus primarily on the state-of-the-art simulation environment within which this board was developed. This 16-layer board with over 1800 components and an additional mezzanine card has been an extremely challenging design. Utilization of a Mentor Graphics simulation environment provided the unique board and system level simulation capability to ascertain any timing or functional concerns before production. By combining VHDL, Synopsys Software and Hardware Models, and the Mentor Design Capture Environment, multiple simulations were developed to verify the RTVMS design. This multi-level simulation allowed the designers to achieve complete operability without error the first time the RTVMS printed circuit board was powered. The HMC-IRA design has completed all engineering and deliverable unit testing.
Numerical simulations on self-leveling behaviors with cylindrical debris bed
Energy Technology Data Exchange (ETDEWEB)
Guo, Liancheng, E-mail: Liancheng.guo@kit.edu [Institute for Nuclear and Energy Technologies (IKET), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Morita, Koji, E-mail: morita@nucl.kyushu-u.ac.jp [Faculty of Engineering, Kyushu University, 2-3-7, 744 Motooka, Nishi-ku, Fukuoka 819-0395 (Japan); Tobita, Yoshiharu, E-mail: tobita.yoshiharu@jaea.go.jp [Fast Reactor Safety Technology Development Department, Japan Atomic Energy Agency, 4002 Narita, O-arai, Ibaraki 311-1393 (Japan)
2017-04-15
Highlights: • A 3D coupled method was developed by combining DEM with the multi-fluid model of the SIMMER-IV code. • The method was validated by performing numerical simulations of a series of experiments with cylindrical particle beds. • Reasonable agreement demonstrates the applicability of the method in reproducing the self-leveling behavior. • Sensitivity analyses on some model parameters were performed to assess their impacts. - Abstract: The postulated core disruptive accidents (CDAs) are regarded as particular difficulties in the safety analysis of liquid-metal fast reactors (LMFRs). In CDAs, core debris may settle on the core-support structure and form conical bed mounds. The debris bed can then be leveled by the heat convection and vaporization of the surrounding coolant sodium, which is named "self-leveling behavior". The self-leveling behavior is a crucial issue in the safety analysis, due to its significant effect on the relocation of the molten core and the heat-removal capability of the debris bed. Considering the complicated multiphase mechanisms involved, a comprehensive computational tool is needed to reasonably simulate transient particle behavior as well as the thermal-hydraulic phenomena of the surrounding fluid phases. The SIMMER program is a successful computer code initially developed as an advanced tool for CDA analysis of LMFRs. It is a multi-velocity-field, multiphase, multicomponent, Eulerian fluid dynamics code coupled with a fuel-pin model and a space- and energy-dependent neutron kinetics model. Until now, the code has been successfully applied in numerical simulations reproducing key thermal-hydraulic phenomena involved in CDAs as well as in reactor safety assessment. However, strong interactions between massive solid particles, as well as particle characteristics in multiphase flows, were not taken into consideration in its fluid dynamics models. To solve this problem, a new method is developed by combining the discrete element method (DEM) with the multi-fluid model of SIMMER-IV.
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Directory of Open Access Journals (Sweden)
Rebecca Ozelie
2016-10-01
Full Text Available Simulation experiences provide experiential learning opportunities during artificially produced real-life medical situations in a safe environment. Evidence supports using simulation in health care education yet limited quantitative evidence exists in occupational therapy. This study aimed to evaluate the differences in scores on the AOTA Fieldwork Performance Evaluation for the Occupational Therapy Student of Level II occupational therapy students who received high-fidelity simulation training and students who did not. A retrospective analysis of 180 students from a private university was used. Independent samples nonparametric t tests examined mean differences between Fieldwork Performance Evaluation scores of those who did and did not receive simulation experiences in the curriculum. Mean ranks were also analyzed for subsection scores and practice settings. Results of this study found no significant difference in overall Fieldwork Performance Evaluation scores between the two groups. The students who completed simulation and had fieldwork in inpatient rehabilitation had the greatest increase in mean rank scores and increases in several subsections. The outcome measure used in this study was found to have limited discriminatory capability and may have affected the results; however, this study finds that using simulation may be a beneficial supplement to didactic coursework in occupational therapy curriculums.
Efficient Simulation Modeling of an Integrated High-Level-Waste Processing Complex
International Nuclear Information System (INIS)
Gregory, Michael V.; Paul, Pran K.
2000-01-01
An integrated computational tool named the Production Planning Model (ProdMod) has been developed to simulate the operation of the entire high-level-waste complex (HLW) at the Savannah River Site (SRS) over its full life cycle. ProdMod is used to guide SRS management in operating the waste complex in an economically efficient and environmentally sound manner. SRS HLW operations are modeled using coupled algebraic equations. The dynamic nature of plant processes is modeled in the form of a linear construct in which the time dependence is implicit. Batch processes are modeled in discrete event-space, while continuous processes are modeled in time-space. The ProdMod methodology maps between event-space and time-space such that the inherent mathematical discontinuities in batch process simulation are avoided without sacrificing any of the necessary detail in the batch recipe steps. Modeling the processes separately in event- and time-space using linear constructs, and then coupling the two spaces, has accelerated the speed of simulation compared to a typical dynamic simulation. The ProdMod simulator models have been validated against operating data and other computer codes. Case studies have demonstrated the usefulness of the ProdMod simulator in developing strategies that demonstrate significant cost savings in operating the SRS HLW complex and in verifying the feasibility of newly proposed processes
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
DEFF Research Database (Denmark)
Niu, H.; Wang, H.; Ye, X.
2017-01-01
application. A converter-level finite element method (FEM) simulation is carried out to obtain the ambient temperatures of the electrolytic capacitors and power MOSFETs used in the LED driver, taking into account the impact of the driver enclosure and the thermal coupling among different components... Therefore, the proposed method bridges the link between the global ambient temperature profile outside the enclosure and the local ambient temperature profiles of the components of interest inside the driver. A quantitative comparison of the estimated annual lifetime consumptions of the MOSFETs...
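How a local ambient temperature profile translates into annual lifetime consumption can be sketched with the common 10-degree doubling rule for electrolytic capacitors; this lifetime model and all values below are assumptions, as the excerpt does not specify the models used.

```python
# Annual lifetime consumption from a temperature mission profile, assuming
# the 10-degree doubling rule: L(T) = L0 * 2^((T0 - T) / 10).
L0, T0 = 5000.0, 105.0          # h, rated lifetime at rated temperature (deg C)

def lifetime_hours(temp_c):
    return L0 * 2.0 ** ((T0 - temp_c) / 10.0)

# Toy mission profile: hours per year spent at each local ambient temperature
profile = {45.0: 3000, 55.0: 4000, 65.0: 1760}   # sums to 8760 h = 1 year
consumed = sum(hours / lifetime_hours(t) for t, hours in profile.items())
print(f"annual lifetime consumption: {consumed:.1%}")
```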
Characterization and simulation of the response of Multi-Pixel Photon Counters to low light levels
Energy Technology Data Exchange (ETDEWEB)
Vacheret, A. [Department of Physics, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Barker, G.J. [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Dziewiecki, M. [Institute of Radioelectronics, Warsaw University of Technology, 15/19 Nowowiejska St., 00-665 Warsaw (Poland); Guzowski, P. [Department of Physics, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Haigh, M.D. [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Hartfiel, B. [Department of Physics and Astronomy, Louisiana State University, 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803 (United States); Izmaylov, A. [Institute for Nuclear Research RAS, 60 October Revolution Pr. 7A, 117312 Moscow (Russian Federation); Johnston, W. [Department of Physics, Colorado State University, Fort Collins, CO 80523 (United States); Khabibullin, M.; Khotjantsev, A.; Kudenko, Yu. [Institute for Nuclear Research RAS, 60 October Revolution Pr. 7A, 117312 Moscow (Russian Federation); Kurjata, R. [Institute of Radioelectronics, Warsaw University of Technology, 15/19 Nowowiejska St., 00-665 Warsaw (Poland); Kutter, T. [Department of Physics and Astronomy, Louisiana State University, 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803 (United States); Lindner, T. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, Canada, BC V6T 1Z1 (Canada); Masliah, P. [Department of Physics, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Marzec, J. [Institute of Radioelectronics, Warsaw University of Technology, 15/19 Nowowiejska St., 00-665 Warsaw (Poland); Mineev, O.; Musienko, Yu. [Institute for Nuclear Research RAS, 60 October Revolution Pr. 7A, 117312 Moscow (Russian Federation); and others
2011-11-11
The calorimeter, range detector and active target elements of the T2K near detectors rely on the Hamamatsu Photonics Multi-Pixel Photon Counters (MPPCs) to detect scintillation light produced by charged particles. Detailed measurements of the MPPC gain, afterpulsing, crosstalk, dark noise, and photon detection efficiency for low light levels are reported. In order to account for the impact of the MPPC behavior on T2K physics observables, a simulation program has been developed based on these measurements. The simulation is used to predict the energy resolution of the detector.
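A sketch of a low-light response model combining the measured effects (Poisson photoelectrons, detection efficiency, optical crosstalk, gain smearing); all parameter values are illustrative, not the reported T2K measurements.

```python
# Toy MPPC response at low light: Poisson statistics of detected
# photoelectrons, crosstalk as extra fired pixels, Gaussian gain smearing.
import numpy as np

rng = np.random.default_rng(6)

def mppc_response(mean_photons, pde=0.30, xtalk=0.10, gain_sigma=0.1, n_events=100_000):
    npe = rng.poisson(mean_photons * pde, n_events)        # detected photoelectrons
    npe += rng.binomial(npe, xtalk)                        # crosstalk fires extra pixels
    charge = rng.normal(npe, gain_sigma * np.sqrt(np.maximum(npe, 1)))
    return charge

q = mppc_response(mean_photons=10.0)
print(f"mean charge {q.mean():.2f} p.e., RMS {q.std():.2f} p.e.")
```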
Directory of Open Access Journals (Sweden)
Ning Zhang
2017-01-01
Full Text Available Background: Advanced ankylosing spondylitis (AS) is often associated with thoracolumbar kyphosis, resulting in an abnormal spinopelvic balance and pelvic morphology. Different osteotomy techniques have been used to correct AS deformities; unfortunately, not all AS patients gain spinal sagittal balance and good horizontal vision after osteotomy. Materials and Methods: Fourteen consecutive AS patients with severe thoracolumbar kyphosis who were treated with two-level PSO were studied retrospectively. All were male, with a mean age of 34.9 ± 9.6 years. The followup ranged from 1 to 5 years. Preoperative computer simulations using the Surgimap Spinal software were performed for all patients, and the osteotomy level and angle determined from the computer simulation were used surgically. Spinal sagittal parameters were measured preoperatively, after the computer simulation, and postoperatively, and included thoracic kyphosis (TK), lumbar lordosis (LL), sagittal vertical axis (SVA), pelvic incidence, pelvic tilt (PT), and sacral slope (SS). The level of correlation between the computer simulation and postoperative parameters was evaluated, and the differences between preoperative and postoperative parameters were compared. The visual analog scale (VAS) for back pain and clinical outcome was also assessed. Results: Six cases underwent PSO at L1 and L3, five cases at L2 and T12, and three cases at L3 and T12. TK was corrected from 57.8 ± 15.2° preoperatively to 45.3 ± 7.7° postoperatively (P < 0.05), LL from 9.3 ± 17.5° to −52.3 ± 3.9° (P < 0.001), SVA from 154.5 ± 36.7 to 37.8 ± 8.4 mm (P < 0.001), PT from 43.3 ± 6.1° to 18.0 ± 0.9° (P < 0.001), and SS from 0.8 ± 7.0° to 26.5 ± 10.6° (P < 0.001). The LL, SVA, and PT of the simulated two-level PSO were highly consistent with, or almost the same as, the postoperative parameters. The correlations between the computer simulations and postoperative parameters were significant. The VAS decreased
Directory of Open Access Journals (Sweden)
A. Hind
2012-08-01
Full Text Available The statistical framework of Part 1 (Sundberg et al., 2012), for comparing ensemble simulation surface temperature output with temperature proxy and instrumental records, is implemented in a pseudo-proxy experiment. A set of previously published millennial forced simulations (Max Planck Institute – COSMOS), including both "low" and "high" solar radiative forcing histories together with other important forcings, was used to define "true" target temperatures as well as pseudo-proxy and pseudo-instrumental series. In a global land-only experiment, using annual mean temperatures at a 30-yr time resolution with realistic proxy noise levels, it was found that the low and high solar full-forcing simulations could be distinguished. In an additional experiment, where pseudo-proxies were created to reflect a current set of proxy locations and noise levels, the low and high solar forcing simulations could only be distinguished when the latter served as targets. To improve detectability of the low solar simulations, increasing the signal-to-noise ratio in local temperature proxies was more efficient than increasing the spatial coverage of the proxy network. The experiences gained here will be of guidance when these methods are applied to real proxy and instrumental data, for example when the aim is to distinguish which of the alternative solar forcing histories is most compatible with the observed/reconstructed climate.
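Pseudo-proxy construction of the kind described can be sketched as follows: simulated temperatures are degraded with white noise to a target signal-to-noise ratio; the SNR value below is illustrative.

```python
# Pseudo-proxy construction: add white noise to a simulated temperature
# series so that the proxy has a chosen signal-to-noise ratio.
import numpy as np

rng = np.random.default_rng(8)
true_temp = np.cumsum(rng.normal(0, 0.05, 40))       # toy 30-yr-mean target series

snr = 0.5                                            # proxy signal-to-noise ratio
noise_sd = true_temp.std() / snr
pseudo_proxy = true_temp + rng.normal(0, noise_sd, true_temp.size)

r = np.corrcoef(true_temp, pseudo_proxy)[0, 1]
print(f"proxy-target correlation at SNR={snr}: r={r:.2f}")
```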
International Nuclear Information System (INIS)
Kongsoe, H.E.; Lauridsen, K.
1993-09-01
SIMON is a program for reliability calculation and statistical analysis. The program is of the Monte Carlo type; it is designed with high flexibility and has a large potential for application to complex problems, such as reliability analyses of very large systems and of systems where complex modelling or knowledge of special details is required. Examples of application of the program, including input and output, for reliability and statistical analysis are presented. (au) (3 tabs., 3 ills., 5 refs.)
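A Monte Carlo reliability calculation in the spirit of such a program can be sketched as below; the 2-of-3 system and failure rates are illustrative assumptions.

```python
# Monte Carlo system reliability: sample exponential component lifetimes and
# evaluate the system structure function (here, 2-of-3 redundancy).
import numpy as np

rng = np.random.default_rng(12)
n_trials, mission_time = 100_000, 1000.0             # trials; mission length in hours

failure_rates = np.array([1e-3, 1e-3, 2e-3])         # per hour, 3 components
lifetimes = rng.exponential(1.0 / failure_rates, size=(n_trials, 3))

survived = (lifetimes > mission_time).sum(axis=1) >= 2   # 2-of-3 must survive
print(f"system reliability at {mission_time:.0f} h: {survived.mean():.4f}")
```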
Wei, Qun; Kim, Mi-Jung; Lee, Jong-Ha
2018-01-01
Drinking water has several advantages that have already been established, such as improving blood circulation, reducing acid in the stomach, etc. However, because people do not notice the amount of water they consume each time they drink, most people drink less water than the recommended daily allowance. In this paper, a capacitive sensor for developing an automatic tumbler to measure water level is proposed. Unlike in previous studies, the proposed capacitive sensor is separated into two sets: the main sensor, for measuring the water level in the tumbler, and the reference sensor, for measuring the incremental level unit. In order to confirm the feasibility of the proposed idea and to optimize the shape of the sensor, a 3D model of the capacitive sensor with the tumbler was designed and subjected to Finite Element Analysis (FEA) simulation. According to the simulation results, the electrodes were made of copper and assembled in a tumbler manufactured by a 3D printer. The tumbler was filled with water and subjected to experiments in order to assess the sensor's performance. The comparison of experimental results with the simulation results shows that the measured capacitance of the capacitive sensor changed linearly as the water level varied, proving that the proposed sensor can accurately measure the water level in the tumbler. Additionally, by use of the curve-fitting method, a compensation algorithm was found to match the actual level with the measured level. The experimental results proved that the proposed capacitive sensor is able to measure the actual water level in the tumbler accurately. A digital control part with a microprocessor will be designed and fixed on the bottom of the tumbler to develop a smart tumbler.
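The calibration and compensation step can be sketched as a curve fit from capacitance readings back to level; the linear sensor model with additive noise is an illustrative assumption.

```python
# Curve-fitting compensation: fit a calibration from noisy capacitance
# readings back to the true water level.
import numpy as np

rng = np.random.default_rng(13)
level_mm = np.linspace(0, 150, 31)                                  # true levels
cap_pf = 5.0 + 0.8 * level_mm + rng.normal(0, 0.5, level_mm.size)   # sensor reading

coeffs = np.polyfit(cap_pf, level_mm, deg=1)            # calibration fit
predicted = np.polyval(coeffs, cap_pf)
print(f"max calibration error: {np.abs(predicted - level_mm).max():.2f} mm")
```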
Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance
Marquez, Jessica J.; Ramirez, Margarita
2014-01-01
A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in levels of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation, and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency affects pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher failure frequency, regardless of level of automation.
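D Prime and Decision Criterion follow from hit and false-alarm rates in the standard signal-detection way; the counts below are illustrative.

```python
# Signal-detection measures from failure-diagnosis outcomes:
# d' = Z(hit rate) - Z(false-alarm rate), criterion c = -(Z_hit + Z_fa)/2.
from scipy.stats import norm

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Example: one pilot's failure-diagnosis confusion counts (illustrative)
print(d_prime_and_criterion(hits=18, misses=2, false_alarms=5, correct_rejections=15))
```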
Energy Technology Data Exchange (ETDEWEB)
Lou, K [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Rice University, Houston, TX (United States); Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Clark, J [Rice University, Houston, TX (United States)
2014-06-01
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and their imaging by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity-ranges. The accuracy and precision of these activity-ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separately from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity-range are dominated by the number N of coincidence events in the reconstructed image: the uncertainties decrease in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, this initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is feasible.
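In compact form (notation ours, with sigma_1 the effective single-event range spread):

```latex
% Counting-statistics scaling of the activity-range estimate \hat{R}:
% with N reconstructed coincidence events and single-event spread \sigma_1,
\sigma_{\hat{R}}(N) \;\approx\; \frac{\sigma_1}{\sqrt{N}} .
```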
Fluidized-bed calcination of simulated commercial high-level radioactive wastes
International Nuclear Information System (INIS)
Freeby, W.A.
1975-11-01
Work is in progress at the Idaho Chemical Processing Plant to verify process flowsheets for converting simulated commercial high-level liquid wastes to granular solids using the fluidized-bed calcination process. Primary emphasis in the series of runs reported was to define flowsheets for calcining simulated Allied-General Nuclear Services (AGNS) waste and to evaluate product properties significant to calcination, solids storage, or post-treatment. Pilot-plant studies using simulated high-level acid wastes representative of those to be produced by Nuclear Fuel Services, Inc. (NFS) are also included. Combined AGNS high-level and intermediate-level waste (0.26 M Na in blend) was successfully calcined when powdered iron was added to the feed (to result in a Na/Fe mole ratio of 1.0) to prevent particle agglomeration due to sodium nitrate. Long-term runs (approximately 100 hours) showed that calcination of the combined waste is practical. Concentrated AGNS waste containing sodium at concentrations less than 0.2 M was calcined successfully; concentrated waste containing 1.13 M Na calcined successfully when powdered iron was added to the feed to suppress sodium nitrate formation. Calcination of dilute AGNS waste by conventional fluid-bed techniques was unsuccessful due to the inability to control bed particle size--both particle size and bed level decreased. Fluid-bed solidification of AGNS dilute waste under conditions in which most of the calcined solids left the calciner vessel with the off-gas was successful. In such a concept, the steady-state composition of the bed material would be approximately 22 wt percent calcined solids deposited on inert particles. Calcination of simulated NFS acid waste indicated that solidification by the fluid-bed process is feasible
International Nuclear Information System (INIS)
Calvin W. Johnson
2004-01-01
The general goal of the project is to develop and implement computer codes and input files to compute nuclear densities of states. Such densities are important input into calculations of statistical neutron capture, and are difficult to access experimentally. In particular, we will focus on calculating densities for nuclides in the mass range A ≈ 50-100. We use statistical spectroscopy, a moments method based upon a microscopic framework, the interacting shell model. In this report we present our progress for the past year
Cappa, Christopher D.; Jathar, Shantanu H.; Kleeman, Michael J.; Docherty, Kenneth S.; Jimenez, Jose L.; Seinfeld, John H.; Wexler, Anthony S.
2016-01-01
The influence of losses of organic vapors to chamber walls during secondary organic aerosol (SOA) formation experiments has recently been established. Here, the influence of such losses on simulated ambient SOA concentrations and properties is assessed in the UCD/CIT regional air quality model using the statistical oxidation model (SOM) for SOA. The SOM was fit to laboratory chamber data both with and without accounting for vapor wall losses following the approa...
Wang, Xingsheng; Reid, Dave; Wang, Liping; Millar, Campbell; Burenkov, Alex; Evanschitzky, Peter; Baer, Eberhard; Lorenz, Juergen; Asenov, Asen
2016-01-01
This paper presents a TCAD based design technology co-optimization (DTCO) process for 14nm SOI FinFET based SRAM, which employs an enhanced variability aware compact modeling approach that fully takes process and lithography simulations and their impact on 6T-SRAM layout into account. Realistic double patterned gates and fins and their impacts are taken into account in the development of the variability-aware compact model. Finally, global process induced variability and local statistical var...
Using Direct Sub-Level Entity Access to Improve Nuclear Stockpile Simulation Modeling
Energy Technology Data Exchange (ETDEWEB)
Parker, Robert Y. [Brigham Young Univ., Provo, UT (United States)
1999-08-01
Direct sub-level entity access is a seldom-used technique in discrete-event simulation modeling that addresses the accessibility of sub-level entity information. The technique has significant advantages over more common, alternative modeling methods--especially where hierarchical entity structures are modeled. As such, direct sub-level entity access is often preferable in modeling nuclear stockpile, life-extension issues, an area to which it has not been previously applied. Current nuclear stockpile, life-extension models were demonstrated to benefit greatly from the advantages of direct sub-level entity access. In specific cases, the application of the technique resulted in models that were up to 10 times faster than functionally equivalent models where alternative techniques were applied. Furthermore, specific implementations of direct sub-level entity access were observed to be more flexible, efficient, functional, and scalable than corresponding implementations using common modeling techniques. Common modeling techniques ("unbatch/batch" and "attribute-copying") proved inefficient and cumbersome in handling many nuclear stockpile modeling complexities, including multiple weapon sites, true defect analysis, and large numbers of weapon and subsystem types. While significant effort was required to enable direct sub-level entity access in the nuclear stockpile simulation models, the enhancements were worth the effort--resulting in more efficient, more capable, and more informative models that effectively addressed the complexities of the nuclear stockpile.
Work in process level definition: a method based on computer simulation and electre tri
Directory of Open Access Journals (Sweden)
Isaac Pergher
2014-09-01
Full Text Available This paper proposes a method for defining the levels of work in process (WIP) in productive environments managed by constant work in process (CONWIP) policies. The proposed method combines the approaches of Computer Simulation and Electre TRI to support estimation of the adequate WIP level and is presented in eighteen steps. The paper also presents an application example, performed in a metalworking company. The research method is based on Computer Simulation, supported by quantitative data analysis. The main contribution of the paper is its provision of a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.
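Before a full simulation study, a quick analytic baseline for the throughput-versus-WIP trade-off can be obtained from exact mean value analysis of a closed serial line. This sketch illustrates the trade-off a WIP definition must weigh; it is not the paper's Computer Simulation + Electre TRI method, and the station service times are made up:

```python
def mva_throughput(service_times, max_wip):
    """Throughput of a closed serial line for WIP = 1..max_wip (exact MVA)."""
    queues = [0.0] * len(service_times)
    results = []
    for wip in range(1, max_wip + 1):
        # Residence time each station shows to an arriving job
        resid = [s * (1.0 + q) for s, q in zip(service_times, queues)]
        throughput = wip / sum(resid)
        queues = [throughput * r for r in resid]    # Little's law per station
        results.append((wip, throughput))
    return results

# Hypothetical three-station line with mean service times 2.0, 3.0, 2.5
for wip, x in mva_throughput([2.0, 3.0, 2.5], max_wip=8):
    print(f"WIP {wip}: throughput {x:.3f} jobs/unit time")
```

Throughput rises steeply at low WIP and saturates at the bottleneck rate, which is exactly the diminishing return a CONWIP level definition has to balance against inventory cost.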
Steady state simulation of Joule heated ceramic melter for vitrification of high level liquid waste
Energy Technology Data Exchange (ETDEWEB)
Sugilal, G; Wattal, P K; Theyyunni, T K [Process Engineering and Systems Development Division, Bhabha Atomic Research Centre, Mumbai (India); Iyer, K N [Department of Mechanical Engineering, Indian Inst. of Tech., Mumbai (India)
1994-06-01
The Joule heated ceramic melter is emerging as an attractive alternative to metallic melters for high level waste vitrification. The inherent limitations with metallic melters viz., low capacity and short melter life, are overcome in a ceramic melter which can be adopted for continuous mode of operation. The ceramic melter has the added advantage of better operational flexibility. This paper describes the three dimensional model used for simulating the complex design conditions of the ceramic melter. (author).
Steady state simulation of Joule heated ceramic melter for vitrification of high level liquid waste
International Nuclear Information System (INIS)
Sugilal, G.; Wattal, P.K.; Theyyunni, T.K.; Iyer, K.N.
1994-01-01
The Joule heated ceramic melter is emerging as an attractive alternative to metallic melters for high level waste vitrification. The inherent limitations with metallic melters viz., low capacity and short melter life, are overcome in a ceramic melter which can be adopted for continuous mode of operation. The ceramic melter has the added advantage of better operational flexibility. This paper describes the three dimensional model used for simulating the complex design conditions of the ceramic melter. (author)
Simulation of Water Level Fluctuations in a Hydraulic System Using a Coupled Liquid-Gas Model
Directory of Open Access Journals (Sweden)
Chao Wang
2015-08-01
Full Text Available A model for simulating vertical water level fluctuations with coupled liquid and gas phases is presented. The Preissmann implicit scheme is used to linearize the governing equations for one-dimensional transient flow for both liquid and gas phases, and the linear system is solved using the chasing method. Some classical cases for single liquid and gas phase transients in pipelines and networks are studied to verify that the proposed methods are accurate and reliable. The implicit scheme is extended using a dynamic mesh to simulate the water level fluctuations in a U-tube and an open surge tank without consideration of the gas phase. Methods of coupling liquid and gas phases are presented and used for studying the transient process and interaction between the phases, for gas limited in a chamber and gas transported in a pipeline. In particular, two other simplified models, one neglecting the effect of the gas phase on the liquid phase and the other coupling the liquid and gas phases asynchronously, are proposed. The numerical results indicate that the asynchronous model performs better; this model is finally applied to a hydropower station with surge tanks and air shafts to simulate the water level fluctuations and air speed.
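The chasing method used above to solve the linearized Preissmann equations is the Thomas algorithm for tridiagonal systems. A minimal sketch, with the three diagonals and right-hand side assumed to be already assembled from the discretization:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination ("chasing")
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has the solution x = [1,1,1]
print(thomas(np.array([0.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0]),
             np.array([1.0, 1.0, 0.0]), np.array([3.0, 4.0, 3.0])))
```

The O(n) cost per time step is what makes implicit schemes such as Preissmann's practical for long pipelines and networks.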
Workload and cortisol levels in helicopter combat pilots during simulated flights
Directory of Open Access Journals (Sweden)
A. García-Mas
2016-03-01
Conclusions: Cortisol levels in saliva and workload behave as is typical of stress situations, and change inversely: workload increases at the end of the task, whereas cortisol levels decrease after the simulated flight. Somatic anxiety decreases as the task is carried out. In contrast, when the pilots are faced with new and demanding tasks, even if they fly this type of helicopter in different conditions, the workload increases toward the end of the task. From an applied point of view, these findings should inform the tactical, physical and mental training of such pilots.
Directory of Open Access Journals (Sweden)
Vickers Andrew J
2008-11-01
Full Text Available Abstract Background A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area-under-the-curve (AUC corrected for verification bias varying both the rate and mechanism of verification. Results In a single simulated data set, varying false negatives from 0 to 4 led to verification bias corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th – 97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.
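The instability described above is straightforward to reproduce in a small simulation. A hedged sketch, not the paper's code, assuming complete verification of test-positives, 10% verification of test-negatives, and a Begg-Greenes-style inverse-probability reweighting as the correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, prevalence, sens, spec = 5000, 0.05, 0.98, 0.80   # few false negatives

naive, corrected = [], []
for _ in range(500):
    disease = rng.random(n) < prevalence
    test = np.where(disease, rng.random(n) < sens, rng.random(n) > spec)
    # Verify every test-positive but only 10% of test-negatives
    verified = test | (rng.random(n) < 0.10)
    v, t, d = verified, test, disease
    naive.append((v & t & d).sum() / (v & d).sum())      # verified cases only
    tp = (v & t & d).sum()
    fn = (v & ~t & d).sum() / 0.10                        # reweight verified negatives
    corrected.append(tp / (tp + fn))

print(f"true sensitivity       : {sens}")
print(f"naive (verified only)  : mean {np.mean(naive):.3f}")
print(f"bias-corrected estimate: mean {np.mean(corrected):.3f}, "
      f"sd {np.std(corrected):.3f}")
```

The naive estimate is inflated because unverified false negatives are invisible, while the corrected estimate, although unbiased on average, swings widely from replicate to replicate when only a handful of false negatives exist, which is the paper's central point.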
Directory of Open Access Journals (Sweden)
Dong Wang
2015-01-01
Full Text Available Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing any unexpected machine breakdown and reducing economic loss, because gear crack leads to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identification of different gear crack levels under different working conditions is proposed. First, statistical features are extracted from the continuous wavelet transform at different scales; the proposed method extracts 920 such features, so the feature set is superhigh-dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method called principal component analysis is used. To further improve identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.
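The reduce-then-classify pipeline described above maps directly onto a few lines of scikit-learn. A sketch with synthetic stand-in data; the 920-dimensional features, class structure and hyperparameters are illustrative assumptions, not the authors' dataset or settings:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_class, n_features, crack_levels = 60, 920, 4
# Synthetic stand-in for the 920 wavelet statistics, one mean shift per level
X = np.vstack([rng.normal(loc=0.1 * k, scale=1.0, size=(n_per_class, n_features))
               for k in range(crack_levels)])
y = np.repeat(np.arange(crack_levels), n_per_class)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),     # 920 -> 10 dimensions
                      SVC(kernel="rbf", C=10.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```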
An accurate conservative level set/ghost fluid method for simulating turbulent atomization
International Nuclear Information System (INIS)
Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz
2008-01-01
This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case
International Nuclear Information System (INIS)
Pumir, Alain; Naso, Aurore
2010-01-01
A proper description of the velocity gradient tensor is crucial for understanding the dynamics of turbulent flows, in particular the energy transfer from large to small scales. Insight into the statistical properties of the velocity gradient tensor and into its coarse-grained generalization can be obtained with the help of a stochastic 'tetrad model' that describes the coarse-grained velocity gradient tensor based on the evolution of four points. Although the solution of the stochastic model can be formally expressed in terms of path integrals, its numerical determination in terms of the Monte-Carlo method is very challenging, as very few configurations contribute effectively to the statistical weight. Here, we discuss a strategy that allows us to solve the tetrad model numerically. The algorithm is based on the importance sampling method, which consists here of identifying and sampling preferentially the configurations that are likely to correspond to a large statistical weight, and selectively rejecting configurations with a small statistical weight. The algorithm leads to an efficient numerical determination of the solutions of the model and allows us to determine their qualitative behavior as a function of scale. We find that the moments of order n≤4 of the solutions of the model scale with the coarse-graining scale and that the scaling exponents are very close to the predictions of the Kolmogorov theory. The model qualitatively reproduces quite well the statistics concerning the local structure of the flow. However, we find that the model generally tends to predict an excess of strain compared to vorticity. Thus, our results show that while some physical aspects are not fully captured by the model, our approach leads to a very good description of several important qualitative properties of real turbulent flows.
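The importance-sampling strategy sketched above, sampling preferentially where the statistical weight is large and correcting with likelihood ratios, is easiest to see on a scalar toy problem. A generic sketch estimating a Gaussian tail probability; it is unrelated to the tetrad-model solver itself:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 100_000
threshold = 4.0                      # P(X > 4) for X ~ N(0,1), about 3.2e-5

# Naive Monte Carlo: almost no samples land in the region that matters
naive = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from N(threshold, 1) and reweight each sample
y = rng.normal(loc=threshold, scale=1.0, size=n)
weights = norm.pdf(y) / norm.pdf(y, loc=threshold)
is_est = np.mean((y > threshold) * weights)

print(f"exact      : {norm.sf(threshold):.3e}")
print(f"naive MC   : {naive:.3e}")
print(f"importance : {is_est:.3e}")
```

The naive estimate is typically zero or wildly off at this sample size, whereas the reweighted estimate is accurate, the same effect that makes preferential sampling of high-weight configurations essential in the tetrad model.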
International Nuclear Information System (INIS)
Jeppson, D.W.; Simpson, B.C.
1994-02-01
Nonradioactive waste simulants and initial ferrocyanide tank waste samples were characterized to assess potential safety concerns associated with ferrocyanide high-level radioactive waste stored at the Hanford Site in underground single-shell tanks (SSTs). Chemical, physical, thermodynamic, and reaction properties of the waste simulants were determined and compared to properties of initial samples of actual ferrocyanide wastes presently in the tanks. The simulants were shown to not support propagating reactions when subjected to a strong ignition source. The simulant with the greatest ferrocyanide concentration was shown to not support a propagating reaction that would involve surrounding waste because of its high water content. Evaluation of dried simulants indicated a concentration limit of about 14 wt% disodium mononickel ferrocyanide, below which propagating reactions could not occur in the ambient temperature bulk tank waste. For postulated localized hot spots where dried waste is postulated to be at an initial temperature of 130 C, a concentration limit of about 13 wt% disodium mononickel ferrocyanide was determined, below which propagating reactions could not occur. Analyses of initial samples of the presently stored ferrocyanide waste indicate that the waste tank ferrocyanide concentrations are considerably lower than the limit for propagation for dry waste and that the water content is near that of the as-prepared simulants. If the initial trend continues, it will be possible to show that runaway ferrocyanide reactions are not possible under present tank conditions. The lower ferrocyanide concentrations in actual tank waste may be due to tank waste mixing and/or degradation from radiolysis and/or hydrolysis, which may have occurred over approximately 35 years of storage
Simulation of Groundwater-Level and Salinity Changes in the Eastern Shore, Virginia
Sanford, Ward E.; Pope, Jason P.; Nelms, David L.
2009-01-01
Groundwater-level and salinity changes have been simulated with a groundwater model developed and calibrated for the Eastern Shore of Virginia. The Eastern Shore is the southern part of the Delmarva Peninsula that is occupied by Accomack and Northampton Counties in Virginia. Groundwater is the sole source of freshwater to the Eastern Shore, and demands for water have been increasing from the domestic, industrial, agricultural, and public-supply sectors of the economy. Thus, it is important that the groundwater supply be protected from overextraction and seawater intrusion. The best way for water managers to use all of the information available is usually to compile this information into a numerical model that can simulate the response of the system to current and future stresses. A detailed description of the geology, hydrogeology, and historical groundwater extractions was compiled and entered into the numerical model. The hydrogeologic framework is composed of a surficial aquifer under unconfined conditions, a set of three aquifers and associated overlying confining units under confined conditions (the upper, middle, and lower Yorktown-Eastover Formation), and an underlying confining unit (the St. Marys Formation). An estimate of the location and depths of two major paleochannels was also included in the framework of the model. Total withdrawals from industrial, commercial, public-supply, and some agricultural wells were compiled for the period 1900 through 2003. Reported pumpage from these sources increased dramatically during the 1960s and 1970s, to a current level of about 4 million gallons per day. Domestic withdrawals were estimated on the basis of population census districts and were assigned spatially to the model on the assumption that domestic users are located close to roads. A numerical model was created using the U.S. Geological Survey (USGS) code SEAWAT to simulate both water levels and concentrations of chloride (representing salinity). The model was
Moron, Vincent; Navarra, Antonio
2000-05-01
This study presents the skill of the seasonal rainfall of tropical America from an ensemble of three 34-year general circulation model (ECHAM 4) simulations forced with observed sea surface temperatures between 1961 and 1994. The skill gives a first idea of the amount of potential predictability if the sea surface temperatures are perfectly known some time in advance. We use statistical post-processing based on the leading modes (extracted from Singular Value Decomposition of the covariance matrix between observed and simulated rainfall fields) to improve the raw skill obtained by simple comparison between observations and simulations. It is shown that 36-55% of the observed seasonal variability is explained by the simulations on a regional basis. Skill is greatest for the Brazilian Nordeste (March-May), but is also high for northern South America and the Caribbean basin in June-September and for northern Amazonia in September-November, for example.
Robin M. Reich; C. Aguirre-Bravo; M.S. Williams
2006-01-01
A statistical strategy for spatial estimation and modeling of natural and environmental resource variables and indicators is presented. This strategy is part of an inventory and monitoring pilot study that is being carried out in the Mexican states of Jalisco and Colima. Fine spatial resolution estimates of key variables and indicators are outputs that will allow the...
International Nuclear Information System (INIS)
Fontolan, Juliana A.; Biral, Antonio Renato P.
2013-01-01
It is known that the distribution of counts of random, unrelated events in fixed time intervals follows the Poisson distribution. This work aims to study the distribution in time intervals of events resulting from the radioactive decay of atoms present in UNICAMP environments where activities involving the use of ionizing radiation are performed. The proposal is to survey the distributions of these events at different locations of the university using a Geiger-Mueller tube. In a next step, the distributions obtained will be evaluated using non-parametric statistics (chi-square and Kolmogorov-Smirnov tests). For analyses involving correlations, we intend to use the ANOVA (Analysis of Variance) statistical tool. Measurements were performed in six different places within the Campinas campus, using a Geiger-Mueller tube in count mode with a time window of 20 seconds. Using the chi-square and Kolmogorov-Smirnov statistical tests in the EXCEL program, it was observed that the distributions do indeed follow a Poisson distribution. Finally, the next step is to perform analyses involving correlations using the ANOVA statistical tool
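The goodness-of-fit step described above can be written in a few lines. An illustration with simulated counts standing in for the Geiger-Mueller data; the 20-second window size and the mean rate are assumptions:

```python
import numpy as np
from scipy.stats import poisson, chi2

rng = np.random.default_rng(4)
counts = rng.poisson(lam=7.3, size=500)        # surrogate 20-s window counts

lam = counts.mean()                            # Poisson mean estimated from data
k_max = counts.max()
observed = np.bincount(counts, minlength=k_max + 1).astype(float)
expected = poisson.pmf(np.arange(k_max + 1), lam) * counts.size
expected[-1] += poisson.sf(k_max, lam) * counts.size   # fold the tail into last bin

# In practice, sparse bins (expected < 5) should be pooled before testing.
chi2_stat = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 2                        # -1 for the total, -1 for estimated lam
print(f"chi2 = {chi2_stat:.1f}, dof = {dof}, p = {chi2.sf(chi2_stat, dof):.3f}")
```

A large p-value here is consistent with the Poisson hypothesis, which is the conclusion the abstract reports for the measured decay counts.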
Efendiev, Yalchin R.; Iliev, Oleg; Kronsbein, C.
2013-01-01
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed
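Although the abstract is truncated, the multilevel Monte Carlo idea it builds on is standard: combine many cheap coarse-level samples with a few expensive fine-level corrections so the telescoping sum reproduces the fine-level expectation. A generic sketch on a toy stochastic differential equation with illustrative parameters, not the ensemble-level mixed multiscale method itself:

```python
import numpy as np

rng = np.random.default_rng(5)
T, X0, MU, SIGMA = 1.0, 1.0, 0.05, 0.2         # toy GBM: dX = MU X dt + SIGMA X dW

def level_sample(level, n_paths):
    """Mean of P_0 (level 0) or of the correction P_l - P_{l-1} (level > 0)."""
    n_fine = 2 ** (level + 2)                  # fine time steps on this level
    dt = T / n_fine
    xf = np.full(n_paths, X0)
    xc = np.full(n_paths, X0)                  # coarse path (unused at level 0)
    for _ in range(n_fine // 2):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        xf += MU * xf * dt + SIGMA * xf * dw1            # two fine steps share the
        xf += MU * xf * dt + SIGMA * xf * dw2            # Brownian increments of...
        xc += MU * xc * 2.0 * dt + SIGMA * xc * (dw1 + dw2)  # ...one coarse step
    return np.mean(xf) if level == 0 else np.mean(xf - xc)

# Telescoping sum: many cheap coarse samples, few expensive fine corrections
estimate = sum(level_sample(l, n) for l, n in enumerate([40_000, 10_000, 2_500]))
print(f"MLMC estimate of E[X_T]: {estimate:.4f} (exact {X0 * np.exp(MU * T):.4f})")
```

Because the coupled corrections have small variance, far fewer samples are needed on the fine levels, which is the source of the MLMC cost savings.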
International Nuclear Information System (INIS)
Kondo, Y.
1998-01-01
The effect of phosphate ion on the filtration characteristics of solids generated in a high-level liquid waste was experimentally examined. Addition of phosphate ion to the simulated HLLW induced the formation of phosphates such as zirconium phosphate and phosphomolybdic acid. The filtration rate of zirconium phosphate dropped abruptly in the midst of filtration because of gel-cake formation on the filter surface. Denitration of the simulated HLLW containing zirconium phosphate improved the filterability of this gelatinous solid. The filtration rates of denitrated HLLW decreased with increasing phosphate ion concentration, since the solids formed by denitration had irregular particle size and configuration in the simulated HLLW with phosphate ion. To increase the filtration rate of denitrated HLLW, a solid-suspension filtration tester was designed. The solid suspension accelerated the filtration rate only in the simulated HLLW with more than 1500 ppm phosphate ion concentration. Under this condition, simple agitation can easily suspend the constituent solids of the filter cake in the solution and a much higher filtration rate can be obtained because the filter cake is continuously swept from the filter surface by rotation of propellers. (authors)
Dalsøren, Stig B.; Myhre, Gunnar; Hodnebrog, Øivind; Myhre, Cathrine Lund; Stohl, Andreas; Pisso, Ignacio; Schwietzke, Stefan; Höglund-Isaksson, Lena; Helmig, Detlev; Reimann, Stefan; Sauvage, Stéphane; Schmidbauer, Norbert; Read, Katie A.; Carpenter, Lucy J.; Lewis, Alastair C.; Punjabi, Shalini; Wallasch, Markus
2018-03-01
Ethane and propane are the most abundant non-methane hydrocarbons in the atmosphere. However, their emissions, atmospheric distribution, and trends in their atmospheric concentrations are insufficiently understood. Atmospheric model simulations using standard community emission inventories do not reproduce available measurements in the Northern Hemisphere. Here, we show that observations of pre-industrial and present-day ethane and propane can be reproduced in simulations with a detailed atmospheric chemistry transport model, provided that natural geologic emissions are taken into account and anthropogenic fossil fuel emissions are assumed to be two to three times higher than is indicated in current inventories. Accounting for these enhanced ethane and propane emissions results in simulated surface ozone concentrations that are 5-13% higher than previously assumed in some polluted regions in Asia. The improved correspondence with observed ethane and propane in model simulations with greater emissions suggests that the level of fossil (geologic + fossil fuel) methane emissions in current inventories may need re-evaluation.
Dontje, T.; Lippert, Th.; Petkov, N.; Schilling, K.
1992-01-01
Autocorrelation becomes an increasingly important tool to verify improvements in the state of the simulational art in Lattice Gauge Theory. Semi-systolic and full-systolic algorithms are presented which are intensively used for correlation computations on the Connection Machine CM-2. The
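For orientation, the quantity at stake is the normalized autocorrelation function of a Monte Carlo time series and the integrated autocorrelation time derived from it. A small FFT-based sketch on an AR(1) surrogate series; this has nothing to do with the systolic CM-2 implementation:

```python
import numpy as np

rng = np.random.default_rng(6)
n, phi = 100_000, 0.9                       # AR(1) surrogate for a correlated history
series = np.empty(n)
series[0] = rng.standard_normal()
for t in range(1, n):
    series[t] = phi * series[t - 1] + rng.standard_normal()

def normalized_acf(x):
    """Linear (zero-padded) autocorrelation computed via FFT."""
    m = len(x)
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * m)
    acov = np.fft.irfft(f * f.conj())[:m] / np.arange(m, 0, -1)
    return acov / acov[0]

acf = normalized_acf(series)
tau_int = 0.5 + acf[1:200].sum()            # truncated-window estimate
print(f"integrated autocorrelation time: {tau_int:.1f} "
      f"(exact for AR(1): {0.5 + phi / (1.0 - phi):.1f})")
```

The integrated autocorrelation time measures how many sweeps separate effectively independent configurations, which is why it is the natural figure of merit for algorithmic improvements.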
DEFF Research Database (Denmark)
Nørrelykke, Simon F; Flyvbjerg, Henrik
2011-01-01
The stochastic dynamics of the damped harmonic oscillator in a heat bath is simulated with an algorithm that is exact for time steps of arbitrary size. Exact analytical results are given for correlation functions and power spectra in the form they acquire when computed from experimental time...
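The exact-for-any-time-step idea is easiest to show in the overdamped limit, where the damped oscillator in a heat bath reduces to an Ornstein-Uhlenbeck process. The full algorithm draws a bivariate Gaussian update for position and velocity; the sketch below is the scalar special case:

```python
import numpy as np

rng = np.random.default_rng(7)

def ou_exact(x0, tau, D, dt, n_steps):
    """Exact OU update: x -> x e^(-dt/tau) + sqrt(D tau (1 - e^(-2 dt/tau))) xi."""
    decay = np.exp(-dt / tau)
    noise_sd = np.sqrt(D * tau * (1.0 - decay ** 2))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = decay * x[i] + noise_sd * rng.standard_normal()
    return x

# Even a huge step (dt = 10 tau) stays statistically exact: variance -> D tau
traj = ou_exact(x0=0.0, tau=1.0, D=1.0, dt=10.0, n_steps=20_000)
print(f"sample variance {traj.var():.3f} vs exact stationary variance 1.000")
```

An Euler scheme with such a step would be wildly unstable; the exact update remains correct because it integrates the linear dynamics and the noise analytically over the step.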
This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...
International Nuclear Information System (INIS)
Kleijnen, J.P.C.; Helton, J.C.
1999-01-01
Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked
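The first three steps of the test sequence map directly onto standard SciPy calls. A compact sketch on a synthetic input/output sample with a deliberately nonmonotonic response; the two-phase fluid flow model itself is not reproduced:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kruskal

rng = np.random.default_rng(8)
x = rng.uniform(0.0, 1.0, 300)                     # sampled model input
y = np.sin(3.0 * x) + rng.normal(0.0, 0.3, 300)    # nonmonotonic response + noise

r, p_r = pearsonr(x, y)                            # (i) linear relationship
rho, p_rho = spearmanr(x, y)                       # (ii) monotonic relationship
edges = np.quantile(x, [0.2, 0.4, 0.6, 0.8])       # (iii) trend in central tendency
groups = [y[np.digitize(x, edges) == k] for k in range(5)]
h, p_h = kruskal(*groups)                          # Kruskal-Wallis across x-bins

print(f"Pearson r    = {r:+.2f} (p = {p_r:.3g})")
print(f"Spearman rho = {rho:+.2f} (p = {p_rho:.3g})")
print(f"Kruskal H    = {h:.1f}  (p = {p_h:.3g})")
```

On data like this, the correlation-based tests can miss the relationship while the binned Kruskal-Wallis test flags it, illustrating why the procedures are applied as a sequence of increasing complexity.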
A dynamic simulation model of the Savannah River Site high level waste complex
International Nuclear Information System (INIS)
Gregory, M.V.; Aull, J.E.; Dimenna, R.A.
1994-01-01
A detailed, dynamic simulation of the entire high-level radioactive waste complex at the Savannah River Site has been developed using SPEEDUP(tm) software. The model represents mass transfer, evaporation, precipitation, sludge washing, effluent treatment, and vitrification unit operation processes through the solution of 7800 coupled differential and algebraic equations. Twenty-seven discrete chemical constituents are tracked through the unit operations. The simultaneous simulation of concurrent batch and continuous processes is achieved by several novel, customized SPEEDUP(tm) algorithms. Due to the model's computational burden, a high-end workstation is required: simulation of a year's operation of the complex requires approximately three CPU hours on an IBM RS/6000 Model 590 processor. The model will be used to develop optimal high-level waste (HLW) processing strategies over a thirty-year time horizon. It will be employed to better understand the dynamic inter-relationships between different HLW unit operations, and to suggest strategies that will maximize available working tank space during the early years of operation and minimize overall waste processing cost over the long-term history of the complex. Model validation runs are currently underway, with comparisons against actual plant operating data providing an excellent match
A constitutive model and numerical simulation of sintering processes at macroscopic level
Wawrzyk, Krzysztof; Kowalczyk, Piotr; Nosewicz, Szymon; Rojek, Jerzy
2018-01-01
This paper presents modelling of both single- and double-phase powder sintering processes at the macroscopic level; in particular, its constitutive formulation, numerical implementation and numerical tests are described. The macroscopic constitutive model is based on the assumption that the sintered material is a continuous medium. The parameters of the constitutive model for material under sintering are determined by simulation of sintering at the microscopic level using a micro-scale model. Numerical tests were carried out for a cylindrical specimen under hydrostatic and uniaxial pressure. Results of the macroscopic analysis are compared against the microscopic model results. Moreover, the numerical simulations are validated by comparison with experimental results. The simulations and preparation of the model are carried out in Abaqus FEA, a software package for finite element analysis and computer-aided engineering. The mechanical model is defined via the user procedure VUMAT, developed by the first author in the Fortran programming language. The modelling presented in the paper can be used to optimize and to better understand the process.
International Nuclear Information System (INIS)
Kleijnen, J.P.C.; Helton, J.C.
1999-01-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
Statistical and thermal physics with computer applications
Gould, Harvey
2010-01-01
This textbook carefully develops the main ideas and techniques of statistical and thermal physics and is intended for upper-level undergraduate courses. The authors each have more than thirty years' experience in teaching, curriculum development, and research in statistical and computational physics. Statistical and Thermal Physics begins with a qualitative discussion of the relation between the macroscopic and microscopic worlds and incorporates computer simulations throughout the book to provide concrete examples of important conceptual ideas. Unlike many contemporary texts on the
Numerical simulation on stir system of jet ballast in high level liquid waste storage tank
International Nuclear Information System (INIS)
Lu Yingchun
2012-01-01
The stir system of jet ballast in a high-level liquid waste storage tank was taken as the simulation object. The gas, liquid, and solid phases were air, sodium nitrate solution, and titanium white powder, respectively. A mathematical model based on the three-fluid model and the kinetic theory of particles was established for the stir system of jet ballast in a high-level liquid waste storage tank, and the model was solved with commercial CFD software. Detailed flow parameters such as the three-phase velocities, pressure, and phase loadings were obtained. The calculated results agree with the experimental results and thus describe the flow behavior in the tank well. This offers a basic method for the scale-up and optimization design of the stir system of jet ballast in high-level liquid waste storage tanks. (author)
Simulation and controller design for an agricultural sprayer boom leveling system
Sun, Jian
2011-01-01
According to precision agriculture requirements, the distance from the sprayer nozzles to the crops should be kept between 50 cm and 70 cm. The sprayer boom also needs to be kept parallel to the field during the operation process, to guarantee the quality of the chemical droplet distribution on the crops. In this paper we introduce a boom leveling system for agricultural sprayer vehicles with an electro-hydraulic auto-leveling system. A suitable hydraulic actuating cylinder and valve were selected according to the system specifications. Furthermore, a compensation controller for the electro-hydraulic system was designed based on the mathematical model. With simulations we can optimize the performance of this controller to ensure a fast leveling response when the sprayer boom is inclined. © 2011 IEEE.
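As an illustration of the closed-loop idea only (the paper's compensation controller and hydraulic model are not reproduced), the sketch below runs a discrete PID loop on a boom angle modeled as a first-order lag; all plant constants and gains are made-up values:

```python
# Hypothetical plant and controller constants, chosen only for illustration
dt, tau, gain = 0.01, 0.5, 1.0           # time step [s], plant lag [s], plant gain
kp, ki, kd = 8.0, 4.0, 0.05              # PID gains

angle, integ, prev_err = 5.0, 0.0, None  # boom starts 5 degrees off level
for step in range(500):
    err = 0.0 - angle                    # setpoint: perfectly level boom
    integ += err * dt
    deriv = 0.0 if prev_err is None else (err - prev_err) / dt
    u = kp * err + ki * integ + kd * deriv   # valve command
    prev_err = err
    # First-order plant approximation: d(angle)/dt = (gain * u - angle) / tau
    angle += dt * (gain * u - angle) / tau
    if step % 100 == 0:
        print(f"t = {step * dt:4.2f} s, boom angle = {angle:+.3f} deg")
```

Simulation runs like this are how controller gains are tuned before hardware tests, trading leveling speed against overshoot of the hydraulic cylinder.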
Spacecraft Data Simulator for the test of level zero processing systems
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data System (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
Directory of Open Access Journals (Sweden)
Sushardjanti Felasari
2003-01-01
Full Text Available This research examines the accuracy of computer programmes in simulating the illuminance level in atrium buildings, compared to measurements of physical models. The case was an atrium building with four roof types: pitched roof, barrel vault roof, monitor roof (both monitor pitched roof and monitor barrel vault roof), and north light roof (with both north and south orientations). The results show that the two methods agree in some respects and disagree in others. Both show the same pattern of daylight distribution. On the other hand, in terms of daylight factors, computer simulation tends to underestimate values compared with physical model measurements, while for average and minimum illuminance it tends to overestimate them.
Simulation of dynamic pile-up corrections in the ATLAS level-1 calorimeter trigger
Energy Technology Data Exchange (ETDEWEB)
Narrias-Villar, Daniel; Wessels, Martin; Brandt, Oleg [Heidelberg University, Heidelberg (Germany)
2015-07-01
The Level-1 Calorimeter Trigger is a crucial part of the ATLAS trigger effort to select only relevant physics events out of the large number of interactions at the LHC. In Run II, in which the LHC will double the centre-of-mass energy and further increase the instantaneous luminosity, pile-up is a key limiting factor for the triggering and reconstruction of relevant events. The upgraded L1Calo Multi-Chip Modules (nMCM) will address this problem by applying dynamic pile-up corrections in real time, a precise simulation of which is crucial for physics analysis. Therefore, pile-up effects are studied in order to provide a predictable parametrised baseline correction for the Monte Carlo simulation. Physics validation plots, such as trigger rates and turn-on curves, are presented.
Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol
2015-09-02
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
Robin M. Reich; Hans T. Schreuder
2006-01-01
The sampling strategy involving both statistical and in-place inventory information is presented for the natural resources project of the Green Belt area (Centuron Verde) in the Mexican state of Jalisco. The sampling designs used were a grid based ground sample of a 90x90 m plot and a two-stage stratified sample of 30 x 30 m plots. The data collected were used to...
Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.
2012-12-01
Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent
Atmospheric Dispersion Simulation for Level 3 PSA at Ulchin Nuclear Site using a PUFF model
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung Jun; Han, Seok-Jung; Jeong, Hyojoon; Jang, Seung-Cheol [KAERI, Daejeon (Korea, Republic of)
2015-05-15
Air dispersion prediction is a key element of level 3 PSA: it predicts radiation releases into the environment so that an effective evacuation strategy can be prepared as a basis of emergency preparedness. To predict the atmospheric dispersion accurately, the specific conditions of the radiation release location should be considered. There are various level 3 PSA tools, and MACCS2 is one of the most widely used level 3 PSA tools in many countries, including Korea. Due to the characteristics of environmental conditions in Korea, it should be demonstrated that the environmental conditions of Korean nuclear sites can be appropriately represented by the tool. In Korea, because all nuclear power plants are located on coasts, sea and land breezes might be a significant factor. The objectives of this work are to simulate the atmospheric dispersion for the Ulchin nuclear site in Korea using a PUFF model and to generate data which can be used for comparison with a PLUME model. A nuclear site has its own atmospheric dispersion characteristics. Especially in Korea, nuclear sites are located on coasts, and it is expected that sea and land breeze effects are relatively strong. In this work, the atmospheric dispersion at the Ulchin nuclear site was simulated to evaluate the effect of sea and land breezes in four seasons. In the simulation results, it was observed that the wind direction change with time has a large effect on atmospheric dispersion. If the result of a PLUME model is more conservative than the most severe case of a PUFF model, then the PLUME model could be used for Korean nuclear sites in terms of safety assessment.
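The mechanism identified above, a wind direction that changes with time and bends the puff trajectory, can be sketched with a single Gaussian puff. A minimal illustration with made-up source, wind, dispersion and receptor parameters; this is not the study's PUFF tool:

```python
import numpy as np

def puff_concentration(x, y, xc, yc, q, sig_h, sig_z, z=0.0, h=30.0):
    """Ground-level concentration of one Gaussian puff centred at (xc, yc)."""
    horiz = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2.0 * sig_h ** 2))
    vert = (np.exp(-((z - h) ** 2) / (2.0 * sig_z ** 2)) +
            np.exp(-((z + h) ** 2) / (2.0 * sig_z ** 2)))   # ground reflection
    return q * horiz * vert / ((2.0 * np.pi) ** 1.5 * sig_h ** 2 * sig_z)

# Advect one puff with a wind whose direction rotates 30 degrees per hour,
# a crude stand-in for a sea/land breeze cycle; the receptor point is arbitrary.
xc = yc = 0.0
u = 3.0                                        # wind speed, m/s
for hour in range(6):
    theta = np.deg2rad(90.0 - 30.0 * hour)     # rotating wind direction
    xc += u * np.cos(theta) * 3600.0
    yc += u * np.sin(theta) * 3600.0
    sig_h = 100.0 * (hour + 1)                 # crude linear puff growth, m
    c = puff_concentration(5400.0, 20100.0, xc, yc, q=1e9,
                           sig_h=sig_h, sig_z=50.0)
    print(f"hour {hour + 1}: puff centre ({xc / 1000.0:5.1f}, {yc / 1000.0:5.1f}) km, "
          f"C at receptor = {c:.3e}")
```

The receptor sees essentially nothing except in the hour when the rotating wind carries the puff past it, which is the wind-direction sensitivity the abstract highlights and a behaviour a steady-state plume model cannot capture.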
A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations.
Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin
2017-02-01
Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback.For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies.In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.
Plant-Level Modeling and Simulation of Used Nuclear Fuel Dissolution
Energy Technology Data Exchange (ETDEWEB)
de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2012-09-07
Plant-level modeling and simulation of a used nuclear fuel prototype dissolver is presented. Emphasis is given to developing a modeling and simulation approach to be explored by other processes involved in the recycle of used fuel. The commonality concepts presented in a previous communication were used to create a model and realize its software module. An initial model was established based on a theory of chemical thermomechanical network transport outlined previously. A software module prototype was developed with the required external behavior and internal mathematical structure. Results obtained demonstrate the generality of the design approach and establish an extensible mathematical model with its corresponding software module for a wide range of dissolvers. Scale-up numerical tests were made varying the type of used fuel (breeder and light-water reactors) and the capacity of dissolution (0.5 t/d to 1.7 t/d). These tests were motivated by user requirements in the area of nuclear materials safeguards. A computer module written in high-level programming languages (MATLAB and Octave) was developed, tested, and provided as open-source code (MATLAB) for integration into the Separations and Safeguards Performance Model application in development at Sandia National Laboratories. The modeling approach presented here is intended to serve as a template for a rational modeling of all plant-level modules. This will facilitate the practical application of the commonality features underlying the unifying network transport theory proposed recently. In addition, by example, this model describes, explicitly, the needed data from sub-scale models, and logical extensions for future model development. For example, from thermodynamics, an off-line simulation of molecular dynamics could quantify partial molar volumes for the species in the liquid phase; this simulation is currently within reach for high-performance computing. From fluid mechanics, a hold-up capacity function is needed
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Levine-Wissing, Robin
2012-01-01
All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep
Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media
Chueh, C.C.
2010-10-01
An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
System-level modeling and simulation of the cell culture microfluidic biochip ProCell
DEFF Research Database (Denmark)
Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan
2010-01-01
Microfluidic biochips offer a promising alternative to a conventional biochemical laboratory. There are two technologies for the microfluidic biochips: droplet-based and flow-based. In this paper we are interested in flow-based microfluidic biochips, where the liquid flows continuously through pre-defined micro-channels using valves and pumps. We present an approach to the system-level modeling and simulation of a cell culture microfluidic biochip called ProCell, Programmable Cell Culture Chip. ProCell contains a cell culture chamber, which is envisioned to run 256 simultaneous experiments (viewed
Abstract Radio Resource Management Framework for System Level Simulations in LTE-A Systems
DEFF Research Database (Denmark)
Fotiadis, Panagiotis; Viering, Ingo; Zanier, Paolo
2014-01-01
This paper provides a simple mathematical model of different packet scheduling policies in Long Term Evolution-Advanced (LTE-A) systems, by investigating the performance of Proportional Fair (PF) and the generalized cross-Component Carrier scheduler from a theoretical perspective. For that purpose, an abstract Radio Resource Management (RRM) framework has been developed and tested for different ratios of users with Carrier Aggregation (CA) capabilities. The conducted system level simulations confirm that the proposed model can satisfactorily capture the main properties of the aforementioned scheduling...
Simulation of core-level binding energy shifts in germanium-doped lead telluride crystals
International Nuclear Information System (INIS)
Zyubin, A.S.; Dedyulin, S.N.; Yashina, L.V.; Shtanov, V.I.
2007-01-01
To simulate the changes in core-level binding energies in germanium-doped lead telluride, cluster calculations of the changes in the electrostatic potential at the corresponding centers have been performed. Different locations of the Ge atom in the crystal bulk have been considered: near vacancies, near another dopant site, and near the surface. For calculating the potential in the clusters that model the bulk and the surface of the lead telluride crystal (c-PbTe), the electron density obtained in the framework of the Hartree-Fock and hybrid density functional theory (DFT) methods has been used
Segregation of the elements of the platinum group in a simulated high-level waste glass
International Nuclear Information System (INIS)
Mitamura, H.; Banba, T.; Kamizono, H.; Kiriyama, Y.; Kumata, M.; Murakami, T.; Tashiro, S.
1983-01-01
Segregation of the elements of the platinum group occurred during vitrification of the borosilicate glass containing 20 wt% simulated high-level waste oxides. The segregated materials were composed of two crystalline phases: one was a solid solution of ruthenium and rhodium dioxides, and the other a solid solution of palladium and rhodium metals that also contained tellurium. The segregated materials were not distributed homogeneously throughout the glass: (i) on the surface of the glass, only the palladium-rhodium-tellurium alloy occurred; and (ii) in the inner part of the glass, the agglomerates of the two phases were concentrated in one part and dispersed in the other
Pilot scale processing of simulated Savannah River Site high level radioactive waste
International Nuclear Information System (INIS)
Hutson, N.D.; Zamecnik, J.R.; Ritter, J.A.; Carter, J.T.
1991-01-01
The Savannah River Laboratory operates the Integrated DWPF Melter System (IDMS), which is a pilot-scale test facility used in support of the start-up and operation of the US Department of Energy's Defense Waste Processing Facility (DWPF). Specifically, the IDMS is used in the evaluation of the DWPF melter and its associated feed preparation and off-gas treatment systems. This article provides a general overview of some of the test work which has been conducted in the IDMS facility. The chemistry associated with the chemical treatment of the sludge (via formic acid adjustment) is discussed. Operating experiences with simulated sludge containing high levels of nitrite, mercury, and noble metals are summarized
Isothermal crystallization kinetics in simulated high-level nuclear waste glass
International Nuclear Information System (INIS)
Vienna, J.D.; Hrma, P.; Smith, D.E.
1997-01-01
Crystallization kinetics of a simulated high-level waste (HLW) glass were measured and modelled. Kinetics of acmite growth in the standard HW39-4 glass were measured using the isothermal method. A time-temperature-transformation (TTT) diagram was generated from these data. Classical glass-crystal transformation kinetic models were empirically applied to the crystallization data. These models adequately describe the kinetics of crystallization in complex HLW glasses (R² = 0.908). An approach to measurement, fitting, and use of TTT diagrams for prediction of crystallinity in a HLW glass canister is proposed
Multispectral simulation environment for modeling low-light-level sensor systems
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low
Ngada, Narcisse
2015-06-15
The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding of how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the most important items to keep in mind before opting for a simulation tool or before performing a simulation.
International Nuclear Information System (INIS)
Steenbakkers, Rudi J A; Schieber, Jay D; Tzoumanekas, Christos; Li, Ying; Liu, Wing Kam; Kröger, Martin
2014-01-01
We present a method to map the full equilibrium distribution of the primitive-path (PP) length, obtained from multi-chain simulations of polymer melts, onto a single-chain mean-field ‘target’ model. Most previous works used the Doi–Edwards tube model as a target. However, the average number of monomers per PP segment, obtained from multi-chain PP networks, has consistently shown a discrepancy of a factor of two with respect to tube-model estimates. Part of the problem is that the tube model neglects fluctuations in the lengths of PP segments, the number of entanglements per chain and the distribution of monomers among PP segments, while all these fluctuations are observed in multi-chain simulations. Here we use a recently proposed slip-link model, which includes fluctuations in all these variables as well as in the spatial positions of the entanglements. This turns out to be essential to obtain qualitative and quantitative agreement with the equilibrium PP-length distribution obtained from multi-chain simulations. By fitting this distribution, we are able to determine two of the three parameters of the model, which govern its equilibrium properties. This mapping is executed for four different linear polymers and for different molecular weights. The two parameters are found to depend on chemistry, but not on molecular weight. The model predicts a constant plateau modulus minus a correction inversely proportional to molecular weight. The value for well-entangled chains, with the parameters determined ab initio, lies in the range of experimental data for the materials investigated. (paper)
Alternative Chemical Cleaning Methods for High Level Waste Tanks: Simulant Studies
Energy Technology Data Exchange (ETDEWEB)
Rudisill, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); King, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hay, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-11-19
Solubility testing with simulated High Level Waste tank heel solids has been conducted in order to evaluate two alternative chemical cleaning technologies for the dissolution of sludge residuals remaining in the tanks after the exhaustion of mechanical cleaning and sludge washing efforts. Tests were conducted with non-radioactive pure phase metal reagents, binary mixtures of reagents, and a Savannah River Site PUREX heel simulant to determine the effectiveness of an optimized, dilute oxalic/nitric acid cleaning reagent and pure, dilute nitric acid toward dissolving the bulk non-radioactive waste components. A focus of this testing was on minimization of oxalic acid additions during tank cleaning. For comparison purposes, separate samples were also contacted with pure, concentrated oxalic acid which is the current baseline chemical cleaning reagent. In a separate study, solubility tests were conducted with radioactive tank heel simulants using acidic and caustic permanganate-based methods focused on the “targeted” dissolution of actinide species known to be drivers for Savannah River Site tank closure Performance Assessments. Permanganate-based cleaning methods were evaluated prior to and after oxalic acid contact.
International Nuclear Information System (INIS)
Ritter, J.A.; Hutson, N.D.; Zamecnik, J.R.; Carter, J.T.
1991-01-01
The Integrated DWPF Melter System (IDMS), operated by the Savannah River Laboratory, is a pilot-scale facility used in support of the start-up and operation of the Department of Energy's Defense Waste Processing Facility. The IDMS has successfully demonstrated, on an engineering scale (one-fifth), that simulated high-level radioactive waste (HLW) sludge can be chemically treated with formic acid to adjust both its chemical and physical properties, and then blended with simulated precipitate hydrolysis aqueous (PHA) product and borosilicate glass frit to produce a melter feed which can be processed into a durable glass product. The simulated sludge, PHA and frit were blended, based on a product composition program, to optimize the loading of the waste glass as well as to minimize those components which can cause melter processing and/or glass durability problems. During all the IDMS demonstrations completed thus far, the melter feed and the resulting glass produced met all the required specifications, which is very encouraging for future DWPF operations. The IDMS operations also demonstrated that the volatile components of the melter feed (e.g., mercury, nitrogen and carbon and, to a lesser extent, chlorine, fluorine and sulfur) did not adversely affect the melter performance or the glass product.
DEFF Research Database (Denmark)
Andersen, J.S.; Bedaux, J.J.M.; Kooijman, S.A.L.M.
2000-01-01
This paper describes the influence of design characteristics on the statistical inference for an ecotoxicological hazard-based model using simulated survival data. The design characteristics of interest are the number and spacing of observations (counts) in time, the number and spacing of exposure concentrations (within c(min) and c(max)), and the initial number of individuals at time 0 at each concentration. A comparison of the coverage probabilities for confidence limits arising from the profile-likelihood approach and the Wald-based approach is carried out. The Wald-based approach is very sensitive...
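The kind of coverage comparison described can be reproduced in miniature. The sketch below estimates Monte Carlo coverage of a 95% Wald interval for an exponential hazard rate, a stand-in for the paper's hazard-model parameters; the design values are invented, and small samples mimic sparse designs.

```python
# Minimal coverage-probability sketch: how often does the 95% Wald interval
# for an exponential hazard rate actually contain the true rate?
import numpy as np

rng = np.random.default_rng(1)
true_rate, n, n_rep = 0.5, 10, 20000
covered = 0
for _ in range(n_rep):
    t = rng.exponential(1.0 / true_rate, size=n)   # simulated survival times
    mle = n / t.sum()                              # MLE of the hazard rate
    se = mle / np.sqrt(n)                          # Wald standard error
    lo, hi = mle - 1.96 * se, mle + 1.96 * se
    covered += (lo <= true_rate <= hi)
print(f"Wald coverage: {covered / n_rep:.3f} (nominal 0.95)")
```

With only 10 individuals per run, the empirical coverage falls noticeably short of 0.95, which is the kind of sensitivity the abstract attributes to the Wald-based approach.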
Guénault, Tony
2007-01-01
In this revised and enlarged second edition of an established text, Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...
Directory of Open Access Journals (Sweden)
Mark James Abraham
2015-09-01
Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types and preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work at every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
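A minimal sketch of the sufficient-summary-statistic idea, assuming an inverse-variance weighting of subject-level effects; the exact estimator in the paper may differ, and the data here are synthetic.

```python
# Hedged sketch: combine subject means with inverse-variance weights instead
# of the unweighted "naive" one-sample t-test, so noisy subjects count less.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 12
# Per-subject effect estimates and their within-subject variances (synthetic).
n_trials = rng.integers(20, 200, size=n_subjects)
within_var = 4.0 / n_trials
effects = rng.normal(0.3, np.sqrt(within_var + 0.01))

# Naive approach: one-sample t-test on the subject means.
t_naive, p_naive = stats.ttest_1samp(effects, 0.0)

# Weighted approach: inverse-variance weights down-weight noisy subjects.
# (The z-test below assumes within-subject variance dominates the total.)
w = 1.0 / within_var
mean_w = np.sum(w * effects) / np.sum(w)
z = mean_w / np.sqrt(1.0 / np.sum(w))
p_weighted = 2 * stats.norm.sf(abs(z))
print(f"naive p = {p_naive:.4f}, weighted p = {p_weighted:.4f}")
```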
Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel
2016-08-01
In recent years, interactive computer simulations have been progressively integrated into the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving teaching materials and assess their effectiveness in improving students' ability to solve problems in university-level physics. Firstly, we analyze the effect of using simulation-based materials on the development of students' skills in employing procedures that are typically used in the scientific method of problem-solving. We found that a significant percentage of the experimental students used expert-type scientific procedures such as qualitative analysis of the problem, making hypotheses, and analysis of results. At the end of the course, only a minority of the students persisted with habits based solely on mathematical equations. Secondly, we compare the effectiveness in terms of problem-solving of the experimental group students with that of students taught conventionally. We found that the implementation of the problem-solving strategy improved the experimental students' results in obtaining an academically correct solution to standard textbook problems. Thirdly, we explore students' satisfaction with the simulation-based problem-solving teaching materials and found that the majority appeared to be satisfied with the proposed methodology and adopted a favorable attitude to learning problem-solving. The research was carried out among first-year Engineering Degree students.
Gao, J.; White, M. J.; Bieger, K.; Yen, H.; Arnold, J. G.
2017-12-01
Over the past 20 years, the Soil and Water Assessment Tool (SWAT) has been adopted by many researchers to assess water quantity and quality in watersheds around the world. As the demand for model support, maintenance, and future development has increased, the SWAT source code and data have undergone major modifications over the past few years. To make the model more flexible in terms of the interactions of spatial units and processes occurring in watersheds, a completely revised version of SWAT (SWAT+) was developed to improve SWAT's capability in water resource modelling and management. So far there have been only a few applications of SWAT+ in large watersheds, however, and no study has validated the new model at the field level or assessed its performance there. To test the basic hydrologic functions of SWAT+, it was implemented in five field cases across five states in the U.S., and the SWAT+ results were compared with those from the previous model at the same fields. Additionally, an automatic calibration tool was used to test which model can be calibrated well within a limited number of parameter adjustments. The goal of the study was to evaluate the performance of SWAT+ in simulating stream flow at the field level at different geographical locations. The results demonstrate that SWAT+ performed similarly to the previous SWAT model, but the flexibility offered by SWAT+ via the connection of different spatial objects can result in a spatially more accurate simulation of hydrological processes, especially for watersheds with artificial facilities. Autocalibration showed that SWAT+ is much easier to calibrate to a satisfactory result than the previous SWAT. Although many capabilities have already been enhanced in SWAT+, some inaccuracies remain in the simulations; these will be reduced as scientific knowledge of hydrologic processes in specific watersheds advances. Currently, SWAT+ is a prerelease version, and any errors are being addressed.
Pore solution chemistry of simulated low-level liquid waste incorporated in cement grouts
International Nuclear Information System (INIS)
Kruger, A.A.
1995-12-01
Expressed pore solutions from simulated low-level liquid waste cement grouts cured at room temperature, 50 °C and 90 °C for various durations were analyzed by standard chemical methods and ion chromatography. The solid portions of the grouts were formulated with portland cement, fly ash, slag, and attapulgite clay in the ratios of 3:3:3:1. Two different solutions simulating off-gas condensates expected from vitrification of Hanford low-level tank wastes were made. One is highly alkaline and contains the species Na^+, PO4^3-, NO2^-, NO3^- and OH^-. The other is carbonated and contains the species Na^+, PO4^3-, NO2^-, NO3^- and CO3^2-. In both cases phosphate rapidly disappeared from the pore solution, leaving behind sodium in the form of hydroxide. The carbonates were also removed from the pore solution to form calcium carbonate and possibly calcium monocarboaluminate. These reactions resulted in an increase of the hydroxide ion concentration in the early period. Subsequently there was a significant reduction in OH^- and Na^+ ion concentrations. In contrast, high concentrations of NO2^- and NO3^- were retained in the pore solution indefinitely.
Simulating the effects of turbocharging on the emission levels of a gasoline engine
Directory of Open Access Journals (Sweden)
Amir Reza Mahmoudi
2017-12-01
Full Text Available The main objective of this work was to respond to the global concern about rising emissions and the necessity of preventing them from forming rather than dealing with their after-effects. Therefore, the production levels of four main emissions, namely NOx, CO2, CO and UHC, in the gasoline engine of a 1994 Nissan Maxima are assessed via 1-D simulation with the GT-Power code. Then, a proper turbine-compressor matching is carried out to propose a turbocharger for the engine, and the resulting emissions are compared to those of the naturally aspirated engine. It is found that the emission levels of NOx, CO, and CO2 are higher in terms of their concentration in the exhaust fume of the turbocharged engine, in comparison with the naturally aspirated engine. However, at the same time, the brake power and the brake-specific emissions produced by the turbocharged engine are respectively higher and lower than those of the naturally aspirated engine. Therefore, it is concluded that, for a specific application, turbocharging provides the chance to achieve the performance of a potential naturally aspirated engine while producing lower emissions. Keywords: Emission, Gasoline SI engine, Turbocharging, GT-Power, 1-D simulation, Brake specific
A mass conserving level set method for detailed numerical simulation of liquid atomization
Energy Technology Data Exchange (ETDEWEB)
Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)
2015-10-01
An improved mass-conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface and, in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.
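A minimal sketch of the two quantities such a remedy relies on: the conserved liquid area (the "mass" for a constant-density phase) and the local interface curvature computed from the level-set field. This is illustrative bookkeeping on a static circle, not the authors' solver.

```python
# Level-set bookkeeping sketch: the liquid area (phi < 0) is the conserved
# quantity, and the curvature kappa = div(grad(phi)/|grad(phi)|) is the local
# interface quantity such a curvature-based mass remedy would use.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.25    # signed distance to a circle

liquid_area = np.sum(phi < 0) * h * h                # conserved "mass" (area)

gx, gy = np.gradient(phi, h)
norm = np.sqrt(gx**2 + gy**2) + 1e-12
kappa = np.gradient(gx / norm, h, axis=0) + np.gradient(gy / norm, h, axis=1)

band = np.abs(phi) < 2 * h                           # narrow band at the interface
print(f"area = {liquid_area:.4f} (exact {np.pi * 0.25**2:.4f}), "
      f"mean |kappa| near interface = {np.abs(kappa[band]).mean():.2f} (exact 4.0)")
```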
Simulating Radionuclide Migrations of Low-level Wastes in Nearshore Environment
Lu, C. C.; Li, M. H.; Chen, J. S.; Yeh, G. T.
2016-12-01
Tunnel disposal into nearshore mountains was tentatively selected as one of the final disposal options for low-level wastes in Taiwan. Safety assessment of radionuclide migration in the far field may involve geosphere processes under coastal environments and into the nearshore ocean. In this study the 3-D HYDROGEOCHEM 5.6 numerical model was used to perform simulations of groundwater flow and radionuclide transport with decay chains. The domain of interest on the surface includes nearby watersheds delineated by digital elevation models and the nearshore seabed. Depths of up to 800 m below the land surface and 400 m below the seabed were considered for the simulations. The disposal site was located 200 m below the surface. Release rates of radionuclides from the near field were estimated by analytical solutions of radionuclide diffusion with decay out of the engineered barriers. Far-field safety assessments were performed starting from the release of radionuclides out of the engineered barriers up to a time scale of 10,000 years. Sensitivity analyses of geosphere and transport parameters were performed to improve our understanding of the safety of final disposal of low-level waste in nearshore environments.
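Source terms with decay chains are commonly handled with Bateman-type closed forms; the sketch below implements the two-member Bateman solution with invented half-lives and inventory (the study's actual source-term model may differ).

```python
# Hedged sketch: Bateman solution for a two-member decay chain, the kind of
# closed-form ingrowth term used in decay-chain transport source estimates.
# Nuclides, half-lives and the inventory are placeholders.
import numpy as np

def bateman_daughter(n1_0, lam1, lam2, t):
    """Daughter atoms at time t from a pure parent inventory n1_0."""
    return n1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

half_life_parent, half_life_daughter = 30.0, 5.0     # years, placeholders
lam1 = np.log(2) / half_life_parent
lam2 = np.log(2) / half_life_daughter

t = np.linspace(0, 200, 5)
parent = 1.0e20 * np.exp(-lam1 * t)                  # simple parent decay
daughter = bateman_daughter(1.0e20, lam1, lam2, t)   # daughter ingrowth + decay
for ti, p, d in zip(t, parent, daughter):
    print(f"t = {ti:5.0f} y: parent {p:.3e}, daughter {d:.3e} atoms")
```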
Hincapié-Palacio, Doracelly; Ospina-Giraldo, Juan; Gómez-Arias, Rubén D; Uyi-Afuwape, Anthony; Chowell-Puente, Gerardo
2010-02-01
The study was aimed at comparing measles and rubella disease elimination levels in a homogeneous and a heterogeneous population according to socioeconomic status, with interactions amongst low- and high-income individuals and diversity in the average number of contacts amongst them. Effective reproductive rate simulations were deduced from a susceptible-infected-recovered (SIR) mathematical model according to different immunisation rates, using measles (1980 and 2005) and rubella (1998 and 2005) incidence data from Latin America and the Caribbean. Low- and high-income individuals' social interaction and their average number of contacts were analysed by bipartite random network analysis. MAPLE 12 (Maplesoft Inc., Ontario, Canada) software was used for the simulations. The progress made in eliminating both diseases between the two periods was reproduced in the socially-homogeneous population. Measles (2005) would be eliminated in both high- and low-income groups; however, rubella (2005) would only be eliminated if there were a high immunity rate amongst the low-income group. If the average number of contacts were varied, then rubella would not be eliminated, even with a 95% immunity rate. Monitoring the elimination level of diseases like measles and rubella requires that socio-economic status be considered as well as the population's interaction pattern. Special attention should be paid to communities having diversity in their average number of contacts occurring in confined spaces such as displaced communities, prisons, educational establishments, or hospitals.
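As a toy counterpart to this elimination analysis (homogeneous mixing only, not the paper's bipartite network), the sketch below integrates an SIR model and checks whether the effective reproductive rate R_eff = R0·s starts below 1 for two immunisation coverages; R0 and the infectious period are invented measles-like values.

```python
# Illustrative SIR sketch: elimination requires R_eff = R0 * s < 1, i.e. the
# immunised fraction v must exceed 1 - 1/R0 (herd-immunity threshold).
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

r0, gamma = 15.0, 1.0 / 7.0            # measles-like R0; 7-day infectious period
beta = r0 * gamma
for v in (0.90, 0.95):                 # immunisation coverage (threshold ~0.933)
    y0 = [(1 - v) - 1e-6, 1e-6, v]     # immunised individuals start in R
    t = np.linspace(0, 365, 366)
    s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T
    print(f"coverage {v:.0%}: R_eff(0) = {r0 * y0[0]:.2f}, "
          f"final attack rate = {r[-1] - v:.4f}")
```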
International Nuclear Information System (INIS)
Chojnacki, E.; Benoit, J.P.
2007-01-01
Best-estimate computer codes are increasingly used in the nuclear industry for accident management procedures and are planned to be used for licensing procedures. Contrary to conservative codes, which are supposed to give penalizing results, best-estimate codes attempt to calculate accidental transients in a realistic way. It is therefore of prime importance, in particular for a technical organization such as IRSN in charge of safety assessment, to know the uncertainty on the results of such codes. Thus, CSNI sponsored a few years ago (published in 1998) the Uncertainty Methods Study (UMS) program on uncertainty methodologies used for a SBLOCA transient (LSTF-CL-18) and is now supporting the BEMUSE program for a LBLOCA transient (LOFT-L2-5). The large majority of BEMUSE participants (9 out of 10) use uncertainty methodologies based on probabilistic modelling, and all of them use Monte-Carlo simulations to propagate the uncertainties through their computer codes. Also, all of the 'probabilistic participants' intend to use order statistics to determine the sampling size of the Monte-Carlo simulation and to derive the uncertainty ranges associated with their computer calculations. The first aim of this paper is to recall the advantages and also the assumptions of probabilistic modelling and more specifically of order statistics (such as Wilks' formula) in uncertainty methodologies. Indeed, Monte-Carlo methods provide flexible and extremely powerful techniques for solving many of the uncertainty propagation problems encountered in nuclear safety analysis. However, it is important to keep in mind that probabilistic methods are data intensive. That means probabilistic methods cannot produce robust results unless a considerable body of information has been collected. A main interest of the use of order-statistics results is that they allow taking into account an unlimited number of uncertain parameters and, from a restricted number of code calculations, to provide statistical
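A minimal sketch of the order-statistics rule referred to here, assuming the standard one-sided, first-order form of Wilks' formula: the smallest number N of Monte Carlo code runs such that the largest output bounds the β-quantile with confidence γ.

```python
# Wilks' formula (one-sided, first order): smallest N with 1 - beta**N >= gamma.
def wilks_n(beta=0.95, gamma=0.95):
    n = 1
    while 1 - beta**n < gamma:
        n += 1
    return n

print(wilks_n())              # 59 runs for the classic 95%/95% one-sided case
print(wilks_n(0.95, 0.99))    # 90 runs for 95% coverage at 99% confidence
```

The appeal noted in the abstract is visible here: the required number of runs depends only on (β, γ), not on how many uncertain input parameters are propagated.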
Spatial statistical analysis of contamination level of 241Am and 239Pu, Thule, North-West Greenland
International Nuclear Information System (INIS)
Strodl Andersen, J.
2011-10-01
A spatial analysis of data on radioactive pollution on land at Thule, North-West Greenland is presented. The data comprise levels of 241Am and 239,240Pu on land. The maximum observed level of 241Am is 2.8×10^5 Bq m^-2. The highest levels were observed near Narsaarsuk; this area was also sampled most intensively. In Groennedal the maximum observed level of 241Am is 1.9×10^4 Bq m^-2. Prediction of the overall amount of 241Am and 239,240Pu is based on grid points within the range from the nearest measurement location. The overall amount is therefore highly dependent on the model. Under the optimal spatial model for Narsaarsuk, within the area of prediction, the predicted total amount of 241Am is 45 GBq and the predicted total amount of 239,240Pu is 270 GBq. (Author)
International Nuclear Information System (INIS)
Patinet, S.
2009-12-01
The glide of edge and screw dislocations in solid solution is modeled through atomistic simulations in two model alloys, Ni(Al) and Al(Mg), described within the embedded atom method. Our approach is based on the study of the elementary interaction between dislocations and solutes to derive the solid-solution hardening of face-centered cubic binary alloys. We identify the physical origins of the intensity and range of the interaction between a dislocation and a solute atom. The thermally activated crossing of a solute atom by a dislocation is studied at the atomistic scale. We show that the hardening of edge and screw segments is similar. We develop a line-tension model that reproduces quantitatively the atomistic calculations of the flow stress. We identify the universality class to which the dislocation depinning transition in solid solution belongs. (author)
Varouchakis, Emmanouil; Hristopulos, Dionissios
2015-04-01
Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs
Fractal and variability analysis of simulations in ozone level due to oxides of nitrogen and sulphur
Bhardwaj, Rashmi; Pruthi, Dimple
2017-10-01
Air pollution refers to the release of pollutants into the air; these pollutants are detrimental to humans and to the planet as a whole. Apart from causing respiratory infections and pulmonary disorders, rising levels of nitrogen dioxide are worsening ozone pollution. Ground-level ozone forms from nitrogen oxides and volatile gases in sunlight; the volatile gases are emitted primarily by vehicles. Ozone is a harmful gas, and exposure to it can trigger serious health effects as it damages lung tissue. In order to decrease the level of ozone, the levels of the oxides that lead to its formation have to be dealt with. This paper deals with the simulations of ozone variations due to oxides of nitrogen and sulphur. Data from the Central Pollution Control Board show a positive correlation of ozone with oxides of sulphur and nitrogen for RK Puram, Delhi, India, where high concentrations of ozone have been found. The correlation between ozone and the sulphur and nitrogen oxides is moderate during summer and weak during winter. Ozone together with nitrogen and sulphur dioxide follows persistent behaviour, as the Hurst exponent lies between 0.5 and 1. The fractal dimension for sulphur dioxide is 1.4957, indicating Brownian motion. The behaviour of ozone is unpredictable, as its index of predictability is close to zero.
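A minimal rescaled-range (R/S) sketch of the kind of Hurst-exponent estimate quoted above, run on synthetic data; the paper's exact estimator, window choices, and the D = 2 - H conversion convention are assumptions here.

```python
# R/S sketch: H > 0.5 indicates the persistent behaviour reported for ozone;
# the fractal dimension follows (under a common convention) as D = 2 - H.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    rs = []
    for w in window_sizes:
        vals = []
        for start in range(0, len(x) - w + 1, w):    # non-overlapping windows
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()                # range of cumulative deviations
            s = seg.std()
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    # Slope of log(R/S) versus log(window size) estimates H.
    h, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return h

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=2000))            # persistent walk, H near 1
h = hurst_rs(series)
print(f"H = {h:.2f}, fractal dimension D = {2 - h:.2f}")
```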
International Nuclear Information System (INIS)
Qayyum, M.; Zaman, W.U.; Rehman, R.; Ahmad, B.; Ahmad, M.; Ali, S.; Murtaza, S
2013-01-01
Increasing fluoride levels in the drinking water of fluoridated areas of the world lead to fluorosis. For bio-monitoring of fluorosis patients, fluoride levels were determined in drinking water and human urine samples of different individuals having dental fluorosis and bony deformities from a fluorotic area of Punjab (Sham Ki Bhatiyan, Pakistan) and then compared with reference samples from a non-fluorotic area (Queens Road, Lahore, Pakistan) using ion-selective electrode methodology. Fluoride levels in the fluorotic area differ significantly from the control group (p < 0.05). In drinking water and human urine samples, fluoride levels in the fluorotic area were 136.192 ± 67.836 and 94.484 ± 36.572 µmol L^-1 respectively, whereas in the control samples fluoride concentrations were 19.306 ± 2.109 and 47.154 ± 22.685 µmol L^-1 in water and urine samples correspondingly. Pearson's correlation data pointed out that human urine and water fluoride concentrations have a significant positive dose-response relationship with the prevalence of dental and skeletal fluorosis in fluorotic areas having higher fluoride levels in drinking water. (author)
International Nuclear Information System (INIS)
Aggarwal, P.K.; Pandey, G.K.; Malathi, N.; Arun, A.D.; Ananthanarayanan, R.; Banerjee, I.; Sahoo, P.; Padmakumar, G.; Murali, N.
2012-01-01
Highlights: • An innovative approach for measurement of water level fluctuation is presented. • Measurement was conducted with a PC-based pulsating-type level sensor. • The technique was deployed to monitor level fluctuations in a PFBR simulated facility. • The technique helped in validation of the hot pool design of PFBR, India. - Abstract: A high-resolution measurement technique for rapid and accurate monitoring of water level using an in-house-built pulsating conductance monitoring device is presented. The technique is capable of online monitoring of any sudden shift in water level in a reservoir that is subjected to rapid fluctuations due to any external factor. We have deployed this novel technique for real-time monitoring of water level fluctuations in a specially designed ¼-scale model of the Prototype Fast Breeder Reactor (PFBR) at Kalpakkam, India. The water level measurements at various locations of the simulated test facility were carried out in different experimental campaigns, with and without the inclusion of thermal baffles, in the specific operating conditions required by the reactor designers. The amplitudes and frequencies of the fluctuations, with the required statistical parameters, in the hot-water pool of the simulated model were evaluated from the online time-versus-water-level plot in a convenient way using the system software package. From the experimental results it is computed that the maximum free level fluctuation in the hot pool of the PFBR with baffle plates provided on the inner vessel is 30 mm, which is considerably less than the value (∼82 mm) obtained without any baffle plates. The present work provides useful information for the assessment of the appropriate design to be adopted in the PFBR for safe operation of the reactor.
Some like it hot: medical student views on choosing the emotional level of a simulation.
Lefroy, Janet; Brosnan, Caragh; Creavin, Sam
2011-04-01
This study aimed to determine the impact of giving junior medical students control over the level of emotion expressed by a simulated patient (SP) in a teaching session designed to prepare students to handle emotions when interviewing real patients on placements. Year 1 medical students at Keele University School of Medicine were allowed to set the degree of emotion to be displayed by the SP in their first 'emotional interview'. This innovation was evaluated by mixed methods in two consecutive academic years as part of an action research project, along with other developments in a new communications skills curriculum. Questionnaires were completed after the first and second iterations by students, tutors and SPs. Sixteen students also participated in evaluative focus group discussions at the end of Year 1. Most students found the 'emotion-setting switch' helpful, both when interviewing the SP and when observing. Student-interviewers were helped by the perception that they had control over the difficulty of the task. Student-observers found it helpful to see the different levels of emotion and to think about how they might empathise with patients. By contrast, some students found the 'control switch' unnecessary or even unhelpful. These students felt that challenge was good for them and preferred not to be given the option of reducing it. The emotional level control was a useful innovation for most students and may potentially be used in any first encounter with challenging simulation. We suggest that it addresses innate needs for competence and autonomy. The insights gained enable us to suggest ways of building the element of choice into such sessions. The disadvantages of choice highlighted by some students should be surmountable by tutor 'scaffolding' of the learning for both student-interviewers and student-observers. © Blackwell Publishing Ltd 2011.
Energy Technology Data Exchange (ETDEWEB)
Boulanger, Jean-Philippe [LODYC, UMR CNRS/IRD/UPMC, Tour 45-55/Etage 4/Case 100, UPMC, Paris Cedex 05 (France); University of Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina); Martinez, Fernando; Segura, Enrique C. [University of Buenos Aires, Departamento de Computacion, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina)
2007-02-15
Evaluating the response of climate to greenhouse gas forcing is a major objective of the climate community, and the use of large ensembles of simulations is considered a significant step toward that goal. The present paper thus discusses a new methodology based on neural networks to mix ensembles of climate model simulations. Our analysis consists of one simulation from each of seven Atmosphere-Ocean Global Climate Models, which participated in the IPCC project and provided at least one simulation for the twentieth century (20c3m) and one simulation for each of three SRES scenarios: A2, A1B and B1. Our statistical method based on neural networks and Bayesian statistics computes a transfer function between models and observations. Such a transfer function is then used to project future conditions and to derive what we would call the optimal ensemble combination for twenty-first-century climate change projections. Our approach is therefore based on one statement and one hypothesis. The statement is that an optimal ensemble projection should be built by giving larger weights to models which have more skill in representing present climate conditions. The hypothesis is that our neural-network-based method is actually weighting the models that way. While the statement is actually an open question, whose answer may vary according to the region or climate signal under study, our results demonstrate that the neural network approach indeed allows weighting models according to their skill. As such, our method is an improvement on existing Bayesian methods developed to mix ensembles of simulations. However, the generally low skill of climate models in simulating mean precipitation climatology implies that the final projection maps (whatever the method used to compute them) may change significantly in the future as models improve. Therefore, the projection results for late twenty-first-century conditions are presented as possible projections.
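A toy version of the weighting idea follows: a small feed-forward network maps seven model outputs to observations, so skilful models implicitly receive more weight. The Bayesian component of the actual method is omitted, and all series are synthetic.

```python
# Hedged sketch: train a small network on a "historical" period and evaluate
# the mixed projection out of sample; the seven "models" are synthetic series
# with different error levels standing in for the AOGCM ensemble.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_time, n_models = 600, 7
truth = np.sin(np.linspace(0, 30, n_time))              # "observed" climate signal
skill = rng.uniform(0.2, 2.0, n_models)                 # per-model error level
ensemble = truth[:, None] + rng.normal(0, skill, (n_time, n_models))

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(ensemble[:400], truth[:400])                    # train on the "20c3m" period

rmse = np.sqrt(np.mean((net.predict(ensemble[400:]) - truth[400:])**2))
best = np.sqrt(np.mean((ensemble[400:] - truth[400:, None])**2, axis=0)).min()
print(f"mixed-ensemble RMSE {rmse:.3f} vs best single model {best:.3f}")
```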
Åberg Lindell, M.; Andersson, P.; Grape, S.; Hellesen, C.; Håkansson, A.; Thulin, M.
2018-03-01
This paper investigates how concentrations of certain fission products and their related gamma-ray emissions can be used to discriminate between uranium oxide (UOX) and mixed oxide (MOX) type fuel. Discrimination of irradiated MOX fuel from irradiated UOX fuel is important in nuclear facilities and for transport of nuclear fuel, for purposes of both criticality safety and nuclear safeguards. Although facility operators keep records on the identity and properties of each fuel, tools for nuclear safeguards inspectors that enable independent verification of the fuel are critical in the recovery of continuity of knowledge, should it be lost. A discrimination methodology for classification of UOX and MOX fuel, based on passive gamma-ray spectroscopy data and multivariate analysis methods, is presented. Nuclear fuels and their gamma-ray emissions were simulated in the Monte Carlo code Serpent, and the resulting data was used as input to train seven different multivariate classification techniques. The trained classifiers were subsequently implemented and evaluated with respect to their capabilities to correctly predict the classes of unknown fuel items. The best results concerning successful discrimination of UOX and MOX-fuel were acquired when using non-linear classification techniques, such as the k nearest neighbors method and the Gaussian kernel support vector machine. For fuel with cooling times up to 20 years, when it is considered that gamma-rays from the isotope 134Cs can still be efficiently measured, success rates of 100% were obtained. A sensitivity analysis indicated that these methods were also robust.
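A hedged sketch of the classification step using scikit-learn's k-nearest-neighbours and RBF-kernel SVM, the two non-linear techniques highlighted above; the two features are invented stand-ins for the simulated gamma-line data, not the Serpent outputs.

```python
# Toy UOX/MOX discrimination: two synthetic gamma-line intensity ratios whose
# distributions shift for MOX, classified with kNN and an RBF-kernel SVM.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 400
is_mox = rng.integers(0, 2, n)                      # 0 = UOX, 1 = MOX
f1 = rng.normal(1.0 + 0.4 * is_mox, 0.15, n)        # e.g. a 134Cs-related ratio
f2 = rng.normal(0.5 - 0.2 * is_mox, 0.10, n)        # a second line ratio
X = np.column_stack([f1, f2])

for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf", gamma="scale")):
    score = cross_val_score(clf, X, is_mox, cv=5).mean()
    print(f"{type(clf).__name__}: {score:.1%} correct")
```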
Paar, V.; Vorkapic, D.; Dieperink, A.E.L.
1992-01-01
We study the fluctuation properties of 0+ levels in rotational nuclei using the framework of the SU(3) dynamical symmetry of the interacting boson model. Computations of Poincaré sections for the SU(3) dynamical symmetry and its breaking confirm the expected relation between dynamical symmetry and classical
A large, multi-laboratory microcosm study was performed to select amendments for supporting reductive dechlorination of high levels of trichloroethylene (TCE) found at an industrial site in the United Kingdom (UK) containing dense non-aqueous phase liquid (DNAPL) TCE. The study ...
International Nuclear Information System (INIS)
Thörnqvist, Sara; Hysing, Liv B.; Zolnay, Andras G.; Söhn, Matthias; Hoogeman, Mischa S.; Muren, Ludvig P.; Bentzen, Lise; Heijmen, Ben J.M.
2013-01-01
Background and purpose: Deformation and correlated target motion remain challenges for margin recipes in radiotherapy (RT). This study presents a statistical deformable motion model for multiple targets and applies it to margin evaluations for locally advanced prostate cancer, i.e. RT of the prostate (CTV-p), seminal vesicles (CTV-sv) and pelvic lymph nodes (CTV-ln). Material and methods: The 19 patients included in this study all had 7-10 repeat CT-scans available that were rigidly aligned with the planning CT-scan using intra-prostatic implanted markers, followed by deformable registrations. The displacement vectors from the deformable registrations were used to create patient-specific statistical motion models. The models were applied in treatment simulations to determine probabilities for adequate target coverage, e.g. by establishing distributions of the accumulated dose to 99% of the target volumes (D99) for various CTV-PTV expansions in the planning CTs. Results: The method allowed estimation of the expected accumulated dose and its variance for different DVH parameters for each patient. Simulations of inter-fractional motion resulted in 7, 10, and 18 patients with an average D99 > 95% of the prescribed dose for CTV-p expansions of 3 mm, 4 mm and 5 mm, respectively. For CTV-sv and CTV-ln, expansions of 3 mm, 5 mm and 7 mm resulted in 1, 11 and 15 vs. 8, 18 and 18 patients, respectively, with an average D99 > 95% of the prescription. Conclusions: Treatment simulations of target motion revealed large individual differences in accumulated dose, mainly for CTV-sv, which demanded the largest margins, whereas those required for CTV-p and CTV-ln were comparable
Directory of Open Access Journals (Sweden)
L. Bressan
2016-01-01
reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions (PDFs) such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.
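The distribution-fitting step can be sketched with scipy's maximum-likelihood fits, assuming a Student's t for the IS/RSL-like signals and a beta for the bounded control function; the inputs here are synthetic, not Siracusa tide-gauge data.

```python
# Hedged sketch: fit a Student's t and a beta PDF by maximum likelihood,
# mirroring the characterisation described above (synthetic inputs).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rsl_like = stats.t.rvs(df=4, scale=0.05, size=5000, random_state=rng)
cf_like = stats.beta.rvs(2.0, 5.0, size=5000, random_state=rng)

df, loc, scale = stats.t.fit(rsl_like)
a, b, loc_b, scale_b = stats.beta.fit(cf_like, floc=0, fscale=1)  # support fixed to [0, 1]
print(f"Student's t: df = {df:.1f}, scale = {scale:.3f}")
print(f"beta: a = {a:.2f}, b = {b:.2f}")
```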
Virtual Systems Pharmacology (ViSP) software for mechanistic system-level model simulations
Directory of Open Access Journals (Sweden)
Sergey eErmakov
2014-10-01
Full Text Available Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. It is therefore desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
International Nuclear Information System (INIS)
Giannantonio, Tommaso; Porciani, Cristiano
2010-01-01
We study structure formation in the presence of primordial non-Gaussianity of the local type with parameters f_NL and g_NL. We show that the distribution of dark-matter halos is naturally described by a multivariate bias scheme where the halo overdensity depends not only on the underlying matter density fluctuation δ but also on the Gaussian part of the primordial gravitational potential φ. This corresponds to a non-local bias scheme in terms of δ only. We derive the coefficients of the bias expansion as a function of the halo mass by applying the peak-background split to common parametrizations for the halo mass function in the non-Gaussian scenario. We then compute the halo power spectrum and halo-matter cross spectrum in the framework of Eulerian perturbation theory up to third order. Comparing our results against N-body simulations, we find that our model accurately describes the numerical data for wave numbers k ≤ 0.1-0.3 h Mpc^-1 depending on redshift and halo mass. In our multivariate approach, perturbations in the halo counts trace φ on large scales, and this explains why the halo and matter power spectra show different asymptotic trends for k → 0. This strongly scale-dependent bias originates from terms at leading order in our expansion. This is different from what happens using the standard univariate local bias, where the scale-dependent terms come from badly behaved higher-order corrections. On the other hand, our biasing scheme reduces to the usual local bias on smaller scales, where |φ| is typically much smaller than the density perturbations. We finally discuss the halo bispectrum in the context of multivariate biasing and show that, due to its strong scale and shape dependence, it is a powerful tool for the detection of primordial non-Gaussianity from future galaxy surveys.
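For reference, the local-type potential referred to here is conventionally written as below, together with the standard leading-order scale-dependent bias correction from the literature; the exact prefactor depends on the f_NL normalisation convention (LSS vs CMB), so treat it as indicative rather than as this paper's derivation.

```latex
% Local-type expansion of the primordial potential (phi is its Gaussian part),
% and the standard leading-order scale-dependent halo-bias correction.
\Phi = \phi + f_{\mathrm{NL}}\left(\phi^{2} - \langle\phi^{2}\rangle\right)
       + g_{\mathrm{NL}}\,\phi^{3},
\qquad
\Delta b(k) \simeq 2\,(b_{1}-1)\,f_{\mathrm{NL}}\,\delta_{c}\,
\frac{3\,\Omega_{m} H_{0}^{2}}{2\,c^{2}\,k^{2}\,T(k)\,D(z)}
```

The 1/k² factor is what produces the different asymptotic trends of the halo and matter power spectra for k → 0 mentioned in the abstract.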
International Nuclear Information System (INIS)
Tanaka, H.; Ohno, N.; Tsuji, Y.; Kajita, S.
2010-01-01
We have analyzed the 2D convective motion of coherent structures, associated with plasma blobs, under attached and detached plasma conditions in a linear divertor simulator, NAGDIS-II. Data analysis of probes and a fast-imaging camera by spatio-temporal correlation with triple decomposition and proper orthogonal decomposition (POD) was carried out to determine the basic properties of coherent structures detaching from the bulk plasma column. Under the attached plasma condition, the spatio-temporal correlation with triple decomposition based on the probe measurements showed that two types of coherent structures with different sizes detached from the bulk plasma, and the azimuthally localized structure propagated radially faster than the larger structure. Under the detached plasma condition, movies taken by the fast-imaging camera clearly showed the dynamics of a 2D spiral structure in the peripheral regions of the bulk plasma; these dynamics caused the broadening of the plasma profile. The POD method was used for data processing of the movies to obtain low-dimensional mode shapes. It was found that the m=1 and m=2 ring-shaped coherent structures were dominant. Comparison between the POD analyses of the movie and the probe data suggested that the coherent structure could be detached from the bulk plasma mainly in association with the m=2 fluctuation. These phenomena could play an important role in the reduction of the particle and heat flux as well as in the plasma recombination processes in plasma detachment (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
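A minimal POD sketch via the singular value decomposition, with synthetic camera-like frames containing rotating m=1 and m=2 azimuthal patterns standing in for the NAGDIS-II movies.

```python
# POD sketch: stack snapshots as columns, subtract the mean field, and take an
# SVD; the leading spatial modes capture the dominant coherent structures.
import numpy as np

rng = np.random.default_rng(7)
nx, nt = 64, 300
X, Y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, nx), indexing="ij")
theta = np.arctan2(X, Y)
t = np.arange(nt)
# Rotating m=1 and m=2 azimuthal patterns plus camera-like noise (synthetic).
frames = (np.cos(theta[..., None] - 0.10 * t) +
          0.5 * np.cos(2 * theta[..., None] - 0.05 * t) +
          0.2 * rng.normal(size=(nx, nx, nt)))

snapshots = frames.reshape(nx * nx, nt)
snapshots -= snapshots.mean(axis=1, keepdims=True)   # remove the mean field
u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first 4 POD modes:", np.round(energy[:4], 3))
```

The rotating m=1 and m=2 patterns each appear as a pair of modes (a sine/cosine pair), so the first four modes carry almost all of the coherent energy.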
Simulation of shallow groundwater levels: Comparison of a data-driven and a conceptual model
Fahle, Marcus; Dietrich, Ottfried; Lischeid, Gunnar
2015-04-01
Despite an abundance of models aimed at simulating shallow groundwater levels, the application of such models is often hampered by a lack of appropriate input data. Difficulties especially arise with regard to soil data, which are typically hard to obtain and prone to spatial variability, eventually leading to uncertainties in the model results. Modelling approaches relying entirely on easily measured quantities are therefore an alternative that encourages the applicability of models. We present and compare two models for calculating 1-day-ahead predictions of the groundwater level that are based only on measurements of potential evapotranspiration, precipitation and groundwater levels. The first model is a newly developed conceptual model that is parametrized using the White method (which estimates the actual evapotranspiration on the basis of diurnal groundwater fluctuations) and a rainfall-response ratio. Inverted versions of the two latter approaches are then used to calculate the predictions of the groundwater level. Furthermore, as a completely data-driven alternative, a simple feed-forward multilayer perceptron neural network was trained on the same inputs and outputs. Data from 4 growing periods (April to October) at a study site situated in the Spreewald wetland in north-east Germany were used to set up the models and compare their performance. In addition, response surfaces that relate model outputs to combinations of different input variables are used to reveal the aspects in which the two approaches coincide and those in which they differ. Finally, it is evaluated whether the conceptual approach can be enhanced by extracting knowledge from the neural network. This is done by replacing, in the conceptual model, the default function relating groundwater recharge and groundwater level, which is assumed to be linear, by the non-linear function extracted from the neural network.
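A hedged sketch of the data-driven variant, assuming a small feed-forward network (scikit-learn's MLPRegressor) that maps today's groundwater level, precipitation, and potential evapotranspiration to tomorrow's level; all series are synthetic placeholders, not the Spreewald data.

```python
# Toy 1-day-ahead groundwater prediction with a multilayer perceptron, using
# only easily measured inputs (level, precipitation, potential ET).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
n = 800
precip = rng.gamma(0.3, 5.0, n)                             # mm/day, synthetic
pet = 2.0 + 1.5 * np.sin(np.arange(n) * 2 * np.pi / 365)    # seasonal potential ET
gw = np.zeros(n)
for k in range(1, n):                                       # toy water balance
    gw[k] = 0.98 * gw[k - 1] + 0.02 * precip[k] - 0.01 * pet[k]

X = np.column_stack([gw[:-1], precip[:-1], pet[:-1]])       # today's measurements
y = gw[1:]                                                  # tomorrow's level
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X[:600], y[:600])
rmse = np.sqrt(np.mean((net.predict(X[600:]) - y[600:])**2))
print(f"1-day-ahead RMSE on held-out days: {rmse:.4f} (synthetic units)")
```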
von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman
Variability across and within individuals is a fundamental property of adult age changes in behavior [20, 21, 24]. Some people seem young for their age, others seem old; shining examples of older individuals who maintained high levels of intellectual functioning well into very old age, such as Johann Wolfgang von Goethe or Sophocles, stand in contrast to individuals whose cognitive resources are depleted by the time they reach later adulthood. A similar contrast exists between different intellectual abilities. For example, if one looks at the speed needed to identify and discriminate between different percepts, one is likely to find monotonic decline after late adolescence and early adulthood.
Sabol, Thomas A.; Springer, Abraham E.
2013-01-01
Seepage erosion and mass failure of emergent sandy deposits along the Colorado River in Grand Canyon National Park, Arizona, are a function of the elevation of groundwater in the sandbar, fluctuations in river stage, the exfiltration of water from the bar face, and the slope of the bar face. In this study, a generalized three-dimensional numerical model was developed to predict the time-varying groundwater level, within the bar face region of a freshly deposited eddy sandbar, as a function of river stage. Model verification from two transient simulations demonstrates the ability of the model to predict groundwater levels within the onshore portion of the sandbar face across a range of conditions. Use of this generalized model is applicable across a range of typical eddy sandbar deposits in diverse settings. The ability to predict the groundwater level at the onshore end of the sandbar face is essential for both physical and numerical modeling efforts focusing on the erosion and mass failure of eddy sandbars downstream of Glen Canyon Dam along the Colorado River.
Level-set simulations of buoyancy-driven motion of single and multiple bubbles
International Nuclear Information System (INIS)
Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi
2015-01-01
Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles is studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including its shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.
Ranjan, R.; Menon, S.
2018-04-01
The two-level simulation (TLS) method evolves both the large- and the small-scale fields in a two-scale approach and has shown good predictive capabilities in both isotropic and wall-bounded high-Reynolds-number (Re) turbulent flows in the past. The sensitivity and ability of this modelling approach to predict fundamental features (such as backscatter, counter-gradient turbulent transport, small-scale vorticity, etc.) seen in high-Re turbulent flows are assessed here by using two direct numerical simulation (DNS) datasets corresponding to a forced isotropic turbulence at a Taylor-microscale-based Reynolds number Reλ ≈ 433 and a fully developed turbulent flow in a periodic channel at friction Reynolds number Reτ ≈ 1000. It is shown that TLS captures the dynamics of local co-/counter-gradient transport and backscatter at the requisite scales of interest. These observations are further confirmed through an a posteriori investigation of the flow in a periodic channel at Reτ = 2000. The results reveal that the TLS method can capture both the large- and the small-scale flow physics in a consistent manner, and at a reduced overall cost compared to the estimated DNS or wall-resolved LES cost.